What Are SQL Server Logins and Users

You’re a data professional learning about managing SQL Server and you’ve been asked to grant permissions for SQL Server to an individual or a group of individuals. What do you need to understand in order to accomplish this? I’ll be your guide to getting started with handling access to SQL Server.

Let’s start with an analogy.

SQL Server Logins and users

Keys and fobs are commonplace these days for vehicles. Sometimes the fob ties into the ignition, allowing for push-button start, so no key is needed. That’s not always the case, though. Many people still have key and fob combos on their key ring. For our purposes, we’re going to assume that is the case.

Your fob lets you access your vehicle’s doors. The fob gives you what I’ll call an initial, top layer of access to your vehicle. With it you can unlock the doors and get inside. You can now sit in the seats, open the glove compartment and access other surface-level features of the car, but you can’t do much else.

Your key to the ignition lets you start the car, giving you access to the engine, radio, power seats and other features in the vehicle.  With the key in the ignition, you can now take your car for a spin.

What is a SQL Server login?

A SQL Server login is much like the fob to your car. The login, by itself with no other permissions applied, gives its holder access to the initial, surface layer of SQL Server. The login allows for connecting to an installation of SQL Server on a computer. A login can be a Windows authenticated account or a SQL login. The former uses the Windows operating system authentication method and the latter uses data within SQL Server itself to authenticate, or verify, the ability to connect to a SQL Server instance.
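To make that concrete, here is a minimal sketch of creating each type of login. The domain, group, and password values are placeholders for illustration only, not anything from this post.

/* Windows authenticated login for a hypothetical domain group. */
CREATE LOGIN [MyDomain\AccountingTeam] FROM WINDOWS;

/* SQL Server authenticated login, verified by SQL Server itself (placeholder password). */
CREATE LOGIN AccountingApp WITH PASSWORD = 'UseAStrongPasswordHere1!';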

SQL Server Logins

SQL Server Fixed Roles

Roles are sets of permissions that can be assigned to a security object, like a SQL Server login. These permission sets grant certain abilities to the logins that have been assigned those roles. The fixed server roles are listed below, and a short example of assigning one follows the list.

  • bulkadmin: Allows for running the BULK INSERT statement
  • dbcreator: Allows for creating, altering, dropping or restoring a database. A database created by a login in this role is owned by that login, which essentially has db_owner rights in that database.
  • diskadmin: Allows for the management of backup devices
  • processadmin: Allows for use of the KILL command to end session connections
  • public: Allows for access to certain metadata system catalog views and is granted to all logins by default. As a best practice, you should never change the permissions applied to the public server role, because every login that connects would then have those permissions.
  • securityadmin: Allows for the management of access to the SQL Server. It manages logins and associated permissions such as grant, deny and revoke.
  • serveradmin: Allows for changing server-wide options, can shut down the server, clear the SQL Server procedure cache, as well as a few other things.
  • setupadmin: Allows for adding and removing Linked Servers using T-SQL
  • sysadmin: Allows for full control of the SQL Server. Logins in this role can do anything.
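Assigning a login to one of these fixed server roles takes a single statement. Here is a minimal sketch, reusing the hypothetical login created above:

ALTER SERVER ROLE dbcreator ADD MEMBER [MyDomain\AccountingTeam];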

What is a database user?

A database user object is much like the key to your car. The database user (the key) is a database level authentication object that is typically associated with a Server level login (the fob). This user object is what provides for connecting to one or more databases.
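For example, creating a user in a database and mapping it to an existing login is a one-line statement per database. This is a sketch with placeholder names for the database and login:

USE AccountingDB;
GO
CREATE USER [MyDomain\AccountingTeam] FOR LOGIN [MyDomain\AccountingTeam];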

SQL Server Database User

SQL Server Database Fixed Roles

  • db_accessadmin: Allows for adding or removing access to a database
  • db_backupoperator: Allows for backing up the database
  • db_datareader: Allows for reading the data in a database
  • db_datawriter: Allows for using insert, update or delete in a database
  • db_ddladmin: Allows for creating, altering or dropping (deleting) database objects
  • db_denydatareader: Denies the ability to read (SELECT) data.
  • db_denydatawriter: Denies the ability to write (INSERT, UPDATE or DELETE) data.
  • db_owner: Allows for full control of all database objects and data.
  • db_securityadmin: Allows for managing access to a database. Has control over database users and their permissions.
  • public: A role granted to all users in a database by default. It is best practice not to alter the permissions of this role.
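As with the server roles, membership in a database fixed role is granted with a single statement run in the database in question. A minimal sketch, continuing the placeholder names from above:

ALTER ROLE db_datareader ADD MEMBER [MyDomain\AccountingTeam];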

 

The login (fob) allows for connecting to SQL Server and in turn is almost always associated with a database user (the key). That user allows connection to one or more databases. The server roles grant permissions for group members to do things with the SQL Server installation. The database roles grant permissions to members to do things with one or more databases.

Using Windows Active Directory to Manage SQL Server Access

To make managing SQL Server easier over the long run, whenever possible it is best to use Windows Active Directory groups and the appropriate server and database fixed roles. For example, if you need to grant read-only access to the accounting database for the accounting department, or even just a couple of people in Accounting, then either ask the SysAdmins about Windows groups for the Accounting department and who is in them, or get that information from Active Directory Users and Computers yourself. Then add the Windows groups to SQL Server and assign permissions. For more on that, see this post about the topic.

 

Next Steps to Take

  1. See this for MS Docs on how to create a login.
  2. See this for MS Docs on how to create a database user.
  3. If you have questions or comments, leave me a comment on this post, or message me on Twitter

Building a SQL environment inventory with the MAP Toolkit

 

The MAP Toolkit, also called the Microsoft Assessment and Planning Toolkit, is a standard way to obtain information about your Microsoft environment. This includes your SQL Server. The tool is so common that a link to it is included in the SQL Server media. This tool will scan your network and provide a fairly comprehensive report on SQL Servers it finds. The direct link to the MAP Toolkit download is located here. Be sure to download all the files from that page.

MAP Toolkit
MAP Toolkit Link In SQL Server Installer

As a DBA new on the job, the MAP Toolkit can be a lifesaver. You have no idea how many SQL Servers there are or what versions and editions exist, or where they are installed. Sure, you asked these questions during the interview, but unless you were talking to a DBA team who has been there awhile, you likely didn’t get correct answers. You see, most places don’t really know how many SQL Servers they have. I’ll be your guide to getting that first SQL Server environment inventory up and running so you know what you’ve inherited.

Why Use the MAP Toolkit?

First, using the MAP Toolkit will automate the process of gathering important SQL Server environment information for data professionals. Even if you only have a few SQL Servers, using this tool will save you time versus manually connecting to each SQL instance to get the same data.

Second, there are almost always more SQL Servers in an environment than people realize and this application will locate them for you.

Third, the MAP Toolkit will help you find computers where Standard or Enterprise Edition is installed outside of production. You have to have licensing for Standard or Enterprise edition when it is installed on people’s laptops or PCs. I’ve found plenty of employee computers running Enterprise edition SQL Server! Non-production servers should not have Standard or Enterprise edition installed for the same reason. In non-prod, use Developer edition, as it’s free.

Getting Started with the MAP Toolkit

A full tutorial on using the MAP Toolkit is available here on Microsoft Learn. MAP Toolkit installation is fairly straightforward and is also covered in the tutorial from Microsoft that I linked to above. You can install it on your own desktop and then scan Active Directory for SQL Servers. Please be sure to notify your System Administrators BEFORE you run the scan, as it may set off intrusion detection alarms.

Here is a screen capture of what some of the inventory methods are.

MAP Toolkit – Scan options

 

Generating Excel Reports from the MAP Toolkit

Once the SQL Server scan is complete, you will see a screen like this one.

MAP Toolkit – Post Collection Screen

Click anywhere in the box under “SQL Server Discovery” and you will see a screen like the one below. Click each Excel icon in the Options section to generate the Excel output for what was captured.

MAP Toolkit – Generate Excel Output

 

The SqlServerAssessment file will have the list of the SQL Server instances it found, the computers those instances are on, the product version name, Service Pack number and edition. The data will also contain the OS version of the computer and some general hardware information. Importing at least these columns into a SQL Server database using the Import/Export Wizard in SQL Server is your first step toward understanding what your SQL Server environment looks like.

If you want to see what the assessment files might look like before you run your first scan, be sure to extract the file “MAPSample_Documents.zip” and locate the SQL Server samples. These will give you an idea of the kinds of information you can expect from the tool.

Why Import This Data to a SQL Server?

Importing this data to SQL Server gives you at least four advantages over keeping it only in an Excel file.

  1. The data recovery abilities of a SQL Server database are superior to trying to recover a deleted file, which is not always possible.
  2. The data is accessible to other data professionals who have proper permissions on the SQL Server. This makes it easy for other people on the data team to review, analyze and maintain the data.
  3. Having the data in a database also allows you to query the data for a variety of things, like how many of each product name (SQL Server 2005, 2008…2017, 2019) you have. See the example query after this list.
  4. As you maintain this data over time, you can see how the environment changes as you upgrade older SQL Server products to newer ones, for example. This can be great information to have when it comes to updating your resume or when approaching review time.
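For instance, once the assessment data has been imported, a simple aggregate answers the product name question from item 3. The table and column names below are hypothetical and would match whatever you created during the import:

SELECT ProductName, COUNT(*) AS InstanceCount
FROM dbo.SqlServerInventory /* hypothetical table created during the import */
GROUP BY ProductName
ORDER BY InstanceCount DESC;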

There is certainly other useful information in the various tabs of both spreadsheets created by the reports. I strongly encourage you to explore that data to learn what you can about your SQL Server environment.

 

Next Steps To Take

  1. Now that you’ve seen what the MAP Toolkit can do for you, download it and run your first scan. Then import the results into SQL Server. You’ll likely be surprised at what it turns up.
  2. Analyze the scan results to see how many SQL Servers you need to upgrade to get onto a supported version of SQL Server. Upgrading to a modern version of SQL Server can be a key goal for you as a DBA.
  3. If you have questions or comments, leave me a comment on this post, or message me on Twitter

Error Messages 8114 or 22122 When Performing Change Tracking Cleanup

 

You’re a data professional and you’re trying to keep up with patching a wide range of SQL Server versions and editions. How do you know what’s in a CU and whether you should apply it? My favorite way to read up on CUs is to go to SQLServerUpdates and click around there. It will take you to the latest CU pages as well as give you a way to see previous CUs that are available.

While doing this recently, I discovered this for CU 26 on SQL Server 2017.

Known Issues SQL Server 2017 CU 26

How to check If Change Tracking Is Enabled

SQL Server 2017 CU26 has a known issue with change tracking. Before you apply that CU, be sure to check that you’re not using Change Tracking somewhere. How would you do that if you have a larger environment and don’t immediately know where Change Tracking is in use?

You could connect to your Central Management Server in SQL Server Management Studio and then run the query below to return the databases where the Change Tracking feature is enabled. In this case, we don’t care which tables are involved in the Change Tracking, so there is no need to look at sys.change_tracking_tables. We only need to know that Change Tracking is enabled. That is enough to know that we shouldn’t update that SQL instance to SQL Server 2017 CU26.

SELECT db_name(database_id) AS DBname
FROM sys.change_tracking_databases;

You might see this:

Msg 8114, Level 16, State 1, Procedure sp_add_ct_history, Line LineNumber

Error converting data type numeric to int.

Or this error:

DateTime spidNum     Error: 22122, Severity: 16, State: 1.

DateTime spidNum     Change Tracking autocleanup failed on side table of “table_name”. If the failure persists, use sp_flush_CT_internal_table_on_demand to clean up expired records from its side table.

The solution is to either stay on CU25 or lower, or go up to CU27 for SQL Server 2017. Microsoft specifically says on this page that CU27 has a fix. You could also uninstall CU26 for SQL Server 2017; you might need to do this if you’re already on that build and can’t update to CU27 for some reason. To do this, go to Control Panel > Programs and Features > View installed updates.
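If you need to confirm which build an instance is currently on before deciding, SERVERPROPERTY can report it. This is a quick sketch; ProductUpdateLevel returns the CU designation on versions that expose it:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,             /* RTM or Service Pack level */
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; /* e.g. CU26, where supported */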

If you’re on SQL Server 2016 SP3 and experiencing this problem, there is a hotfix here listed as KB5006943.

 

 

Four Areas to Focus on When You Start A New DBA Role

 

You’ve just been hired into a DBA role at a new company, or you’ve been given the DBA keys at your current company. Maybe you’re a SysAdmin and your boss has informed you that you are now supposed to manage the SQL Servers as well as everything else on your plate. In any of these situations, you may have some confidence in your skills, but especially in the case of being a new hire, you have absolutely no true idea of what you’re walking into.

In these scenarios, where do you start? Start with these four areas.

  1. Backups
  2. SQL Agent Job Notifications
  3. Building an environment inventory
  4. Security

 

1. SQL Server backups: You have to be able to recover the data.

My experience has been that many companies aren’t doing an adequate job of managing the backup routines for their databases. I’ve been in companies where two or even three products were running backups on the same SQL Servers. I’ve been in situations where important databases weren’t backed up frequently enough and with the right set of backup types to recover appropriately, which would result in losing more data than the management was comfortable with. I’ve also seen scenarios where there were no backups at all.

So, what do you do to tackle the issue of SQL Server backups in your environment? First, connect to your SQL Servers and run the query below. It will give you the database name and the date of the last full backup if the last full backup was over 7 days ago. If the database has never been backed up, then you’ll get that information as well. Any results from this query should be investigated immediately. This query was adapted from sp_Blitz, with some column names changed to make it clearer what it does.

SELECT  d.[name] AS DatabaseName ,
COALESCE(CAST(MAX(b.backup_finish_date) AS VARCHAR(25)),'No Backup Ever Made') AS LastFullBackup
FROM master.sys.databases AS D
LEFT OUTER JOIN msdb.dbo.backupset AS B ON D.name COLLATE SQL_Latin1_General_CP1_CI_AS = B.database_name COLLATE SQL_Latin1_General_CP1_CI_AS

AND B.type = 'D' /*Full backup*/
AND B.server_name = SERVERPROPERTY('ServerName') /*Backupset ran on server you're currently connected to. */

WHERE D.[name]<> 'tempdb'  /* Eliminate TempDB. No need to back that up */
AND D.state_desc NOT IN ('RESTORING', 'OFFLINE', 'OFFLINE_SECONDARY') /* Exclude databases that are offline or involved in log shipping, for example */
AND D.is_in_standby = 0 /* Exclude databases in stand-by state as part of log shipping*/
AND D.source_database_id IS NULL /* Excludes database snapshots */

GROUP BY d.name
HAVING MAX(B.backup_finish_date) <= GETDATE()-7 /*Full backup older than 7 days ago.*/
OR MAX(B.backup_finish_date) IS NULL;										    

 

This one will show you any databases in the Full recovery model that haven’t had a transaction log backup in over 24 hours. This query was also adapted from sp_Blitz, with some column names changed to make it clearer what it does.

SELECT  d.[name] AS DatabaseName ,
COALESCE(CAST(MAX(b.backup_finish_date) AS VARCHAR(25)),'No Backup Ever Made') AS LastLogBackup
FROM master.sys.databases AS D
LEFT OUTER JOIN msdb.dbo.backupset AS B ON D.name COLLATE SQL_Latin1_General_CP1_CI_AS = B.database_name COLLATE SQL_Latin1_General_CP1_CI_AS

AND B.type = 'L' /*Log backup*/
AND B.server_name = SERVERPROPERTY('ServerName') /*Backupset ran on server you're currently connected to. */

WHERE D.[name]<> 'tempdb'  /* Eliminate TempDB. No need to back that up */
AND D.state_desc NOT IN ('RESTORING', 'OFFLINE', 'OFFLINE_SECONDARY') /* Exclude databases that are offline or involved in log shipping, for example */
AND D.is_in_standby = 0 /* Database is read-only to restore a log backup as in Log Shipping. */
AND D.source_database_id IS NULL /* Excludes database snapshots */
AND D.recovery_model_desc = 'Full'
GROUP BY d.name
HAVING MAX(B.backup_finish_date) <= GETDATE()-1 /*Log backup older than 1 day ago.*/
OR MAX(B.backup_finish_date) IS NULL;										    

 

Review the SQL Server to see if there are any native backup methods in place, such as a Maintenance Plan or Ola Hallengren scripts.  Each of these backup methods will produce one or more SQL Server Agent jobs. Check to see if the jobs associated with those options are enabled.

If you don’t see either of those options set up, your company may be using third-party backup software. These products are typically, but not always, deployed by System Administrators. Check with them to see what third-party software is in place for backups and let them know your findings for the server in question. This could be a situation where the SQL Server was simply never enrolled in the third-party software and so is not being backed up.

If none of these things turn up a backup solution, then get one in place ASAP. See this post for options on how to create backups.

2. SQL Agent Job Notifications: If a job is worth creating, it’s worth knowing if it failed.

This is in position #2 because it’s been an easy win in every job I’ve taken. In every new role I’ve had, I have discovered many SQL Agent jobs with no notifications, or with notifications set up to go to the wrong people. If a job is worth creating, it’s worth knowing if it failed.

As you work through finding failed jobs that never notified anyone, you may find that you’re fixing important agent jobs that should have been working. This process fixes business and data related processes and also helps you find stakeholders for data processes. When jobs fail and you investigate, you often need to know who is impacted by the job failure. This further leads to identifying who should be notified if it fails. This process, then, actually helps connect you to key players in the organization very early. Those people on the business side almost always appreciate your efforts to make sure that data processes that affect them are working properly. This gives you allies as you move forward in your role.

So how do you proceed if you find jobs that have no notification set up if they should fail?

  • First, locate jobs that are failing. I have a post that will show you two ways to do that. You can either look at the SQL Agent Job Activity Monitor on a SQL instance or query the msdb database.
  • Second, locate enabled jobs that have no email operators assigned. These are jobs that no one will know are failing. This post will tell you how to find these jobs, and a minimal example query follows this list.
  • Third, work through a process to make sure jobs with no notification set up will have an email operator assigned to them. This will ensure the right “someone” will know if the job fails.
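As a starting point for the second item above, a query along these lines against msdb lists enabled jobs with no email notification configured. This is a minimal sketch; the post linked above covers the topic in more detail:

SELECT j.[name] AS JobName
FROM msdb.dbo.sysjobs AS j
WHERE j.[enabled] = 1
  AND (j.notify_level_email = 0 OR j.notify_email_operator_id = 0); /* never notifies, or has no operator assigned */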

3. Build an Environment Inventory: You can’t manage what you don’t know about.

The process of gathering information about your environment tells you the makeup of versions and editions in your environment. This, in turn, leads to the answer to the question, “Which SQL Servers are out of support and likely need to be upgraded?”

“How many SQL Servers are there at company X?”

Sure, the hiring manager told you the company only has a dozen SQL Servers, but I’ve found that hiring managers often don’t know the real answer to the question, “So, how many SQL Servers do you have at company X?” That’s a question you should always ask when interviewing, by the way. When you hear the answer, I’d encourage you to add at least 50% to it. At one job, I was told there were about a dozen SQL Servers. Less than six months in, I had located about 50 SQL Servers, and that was before I ran the MAP Toolkit.

If you are not familiar with the MAP Toolkit, when you run the utility it does a search of your network or Active Directory to find SQL Servers. You will want to ensure that your security team and/or System Administrators know you are going to run this utility. This is because it will likely set off security software or even be blocked, in some cases.

The tool produces an Excel-format list with plenty of information, and this is the simplest way to start creating a SQL Server inventory. The scan is likely to find a lot of SQL Servers that no one seems to remember exist. Managing all the SQL Servers within your purview is important, and you can’t do that if you don’t know about them.

You will also probably find individual employees who have installed SQL Server Standard or even Enterprise Edition on their local PCs. That’s a licensing no-no. This situation gives you the chance to help your company avoid licensing entanglements with Microsoft by addressing these scenarios.

If you find that you aren’t allowed to use the MAP Toolkit, then as you hear about other SQL Servers in your environment, or discover them yourself while looking through Active Directory Users and Computers, create a Central Management Server and add the SQL Servers to it. Ensure you have access to those SQL Servers and then query them to find out all you can about them. Review the information available to you from SERVERPROPERTY and build your own queries to discover information about the company SQL Servers. Here is an example query from the MS Docs link above. As you discover information about the company SQL Servers, create your own spreadsheet, or create a database and tables to hold what you find.

SELECT  
  SERVERPROPERTY('MachineName') AS ComputerName,
  SERVERPROPERTY('ServerName') AS InstanceName,  
  SERVERPROPERTY('Edition') AS Edition,
  SERVERPROPERTY('ProductVersion') AS ProductVersion,  
  SERVERPROPERTY('ProductLevel') AS ProductLevel;  
GO

 

4. Security: Who has access to SQL Server and what type of access do they have?

I generally focus on this one after the others because security is often a political subject within an organization. You don’t want to become embroiled in power struggles right away by trying to take away someone’s access, because you will lose. Additionally, if you’ve done the previous work mentioned, you have likely built some good business relationships and a few allies to help you have the discussion about SQL Server access and security.

As a DBA, you and the business need to know who has SysAdmin level access. That access allows those credentials to make any change they want. In truth, it’s often not so much about trusting the person who has the access as it is realizing that every person with SysAdmin permission is a person whom a hacker could exploit to gain access to the company’s data. You will want to make the business aware of how many credentials have this level of permission and whose credentials they are.

How do you find who has SysAdmin level access on your SQL Server? Here is a query for that from MS Docs. I have added a WHERE clause to exclude the commonly found SQL Server accounts that start with NT SERVICE.

 

SELECT roles.principal_id AS RolePrincipalID,
       roles.name AS RolePrincipalName,
       server_role_members.member_principal_id AS MemberPrincipalID,
       members.name AS MemberPrincipalName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
    ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.server_principals AS members
    ON server_role_members.member_principal_id = members.principal_id
WHERE members.name NOT LIKE 'NT SERVICE%';

This query shows you which logins are members of which server roles; the rows where RolePrincipalName is sysadmin are the credentials with full control of the instance. You will also want to know which accounts have the db_owner role at the database level.
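For the database side, a similar query against the database-level catalog views lists db_owner members. This is a sketch that runs in the context of a single database, so you would repeat it in each database you care about:

SELECT dbrole.[name] AS RoleName,
       dbmember.[name] AS MemberName
FROM sys.database_role_members AS drm
INNER JOIN sys.database_principals AS dbrole
    ON drm.role_principal_id = dbrole.principal_id
INNER JOIN sys.database_principals AS dbmember
    ON drm.member_principal_id = dbmember.principal_id
WHERE dbrole.[name] = 'db_owner';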

I would strongly encourage you to review the permissions scripts written and maintained by Kenneth Fisher. These will provide a wealth of information about your server level logins and your database level users.

 

Next Steps To Take

  1. Check out the First Responder Kit on Github.
  2. If you have a comment or question about this post specifically, leave a comment and I’ll get back to you.
  3. If you would like help with something else related to SQL Server, reach out to me on Twitter, and I’ll be glad to offer assistance.

 

Three Keys to SQL Server Performance

 

Everyone wants good performance from their database systems. Some even expect and need a high performing Ferrari all the time. How is this achieved though? What do you need to understand about SQL Server specifically in order to make your company’s applications hum along like a well tuned car? We will look at three keys to SQL Server performance.

 

SQL Server Speed
SQL Server Performance

Here’s the short list of performance keys.

1. SQL Server must be able to cache sufficient data in memory

2. SQL Server must be able to retrieve data from disk efficiently

3. You must write “good” T-SQL

 

SQL Server Memory

 

1. SQL Server must be able to cache sufficient data in memory

SQL Server does not use memory like most applications. People unfamiliar with SQL Server’s use of memory are surprised to see it using 80% or more of a server’s RAM. “I have 128 GB of RAM on this machine, why is SQL Server set to consume 115 GB of that?!” Then they decide to change Max Server Memory down to 64 GB or less. Suddenly they might find that disk IO shoots up. You will also likely see execution plans rapidly aging out of the plan cache, which burns up CPU with new compiles for query plans and can lead to parameter sniffing issues. When all this comes into play, people start complaining about applications not working as well. They start saying the magical phrase, “It’s slow.”

When a query is issued and the data requested is not in memory, SQL Server must use CPU and disk IO to get the data into memory first. While it does this, a PAGEIOLATCH_* wait is registered in SQL Server because the current query is waiting on the data to be retrieved. This wait causes the query to be put on hold. Reading data off the disk is always, always going to be slower than working with data that’s already in memory.

So, how do you determine what is a sufficient amount of RAM for your SQL Server? The short answer is that I would encourage you to look at my post here and use the DBATools cmdlet Set-DbaMaxMemory. The post will help you understand more about what the Max Server Memory setting does, and the PowerShell cmdlet will recommend a good starting place for setting Max Server Memory. Here are some examples from the DBATools documentation and from the Help commands available in PowerShell.

<# If you have a Central Management Server for your SQL environment, consider using this command to loop through all the SQL Servers and set the Max Server Memory where it is set to something larger than the total amount of RAM assigned to the server. #> 

Get-DbaRegServer -SqlInstance sqlserver | Test-DbaMaxMemory | Where-Object { $_.MaxValue -gt $_.Total } | Set-DbaMaxMemory 

<# If you have a Central Management Server for your SQL environment, consider using this command to loop through all the SQL Servers and set the Max Server Memory to this accepted formula created by a SQL Server expert. #> 

Get-DbaRegServer -SqlInstance sqlserver | Test-DbaMaxMemory | Set-DbaMaxMemory 

<# If you don't have a registered server then just use the below #> 

Test-DbaMaxMemory -SQLinstance SQLServerInstanceNameHere | Set-DbaMaxMemory

The longer answer is this. Take a look at three metrics in the SQL Server:Buffer Manager category in Performance Monitor. First, if you observe Page Life Expectancy over time and the value has long stretches where it plummets and stays low without recovering, then you have some heavy hitting queries running that could likely benefit from having more RAM available. This will likely be accompanied by SQL Server wait stats showing a high average wait for PAGEIOLATCH_* type waits. This is SQL Server reading pages from disk because the data pages it needed were not in memory.

Second, review the Perfmon counter Lazy Writes/Sec. If SQL Server is having to free up memory in between checkpoints, this value will be above zero. If it is regularly above zero, then SQL Server is regularly under memory pressure and is having to write data pages in memory back to the disk so that it can load different data pages to satisfy queries.

Third, look at Free List Stalls/sec. The value of this Performance Monitor counter indicates the number of requests per second that had to wait for a free data page in memory. If Page Life Expectancy is often low for long stretches and both Lazy Writes/sec and Free List Stalls/sec are greater than zero, then you need to either adjust Max Server Memory up (as long as you don’t go too high based on the above information), add memory, or take a hard look at the indexes and queries involved when these PerfMon metrics are out of balance.
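If you prefer to check these counters from inside SQL Server rather than opening Performance Monitor, they are also exposed through sys.dm_os_performance_counters. A minimal sketch follows; note that the /sec counters in this DMV are cumulative totals since the last restart, so you sample them twice and compare to get a rate:

SELECT RTRIM(counter_name) AS CounterName,
       cntr_value AS CounterValue
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec');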

SQL Server Indexing and Disk Performance

 

2. SQL Server Must Be Able to Retrieve Data From Disk Efficiently

Look at your SQL Server wait stats information and if you see PAGEIOLATCH_* very prominent then there could be a good chance that the indexes in your database need attention or those long IO waits could mean a problem with the disk subsystem.

PAGEIOLATCH_* type waits are all wait types related to reading 8KB data pages from disk into memory. Inefficient indexes could be making these reads longer because SQL Server can’t quickly get the data it needs from the existing indexes. This can happen over time as query patterns and data patterns change. For example, your company might introduce a new category of products. Suddenly people are querying that category and its associated products far more, and older, mainstay products less. The distribution of that data may affect the execution plan generated.

The company may have re-factored an existing application into new tables and missed indexing the Category column. Now when people are searching for things like “Bike Accessories” there is no supporting index. This results in long table scans of millions of rows.

As a start to determining whether there is inefficient indexing, run and save the output of sp_BlitzIndex to examine your tables and indexes. Review its recommendations and make adjustments. Then re-measure index usage with sp_BlitzIndex. Some time after a restart of SQL Server, re-run sp_BlitzIndex and compare the output to the previously saved run, looking to see if SQL Server is using the adjusted indexes.

To review whether you have a disk IO subsystem issue, look at the DMV sys.dm_io_virtual_file_stats. You can also review the SQL Server Error Log, looking for messages indicating IO that took longer than 15 seconds. These can be an indication of an IO subsystem issue. Review this article and this article on these topics.
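As a starting point with that DMV, a query like the one below computes average read and write latency per database file since the instance last started. This is a sketch; the articles referenced below go into much more depth:

SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name AS PhysicalFileName,
       vfs.num_of_reads,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS AvgReadLatencyMs,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS AvgWriteLatencyMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY AvgReadLatencyMs DESC;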

Both of these articles provide information on understanding the DMV and the error message. There is also information about Performance Monitor counters to use to measure potential problems.

If you are in the cloud, such as AWS, be sure to review settings for EBS and FSx storage to ensure that the IOPS and throughput are set up appropriately. Also, be sure to take into consideration how the AWS EC2 instance type might be throttling your IO and throughput capabilities. Just because your storage is set up with a certain IOPS and throughput doesn’t mean that the EC2 instance can support the storage settings.

T-SQL Anti-Patterns to Avoid

 

3. You Must Write “Good” T-SQL

Poorly written T-SQL can cause poor application and SQL Server performance. Here is a brief list of things to avoid.

A. Writing T-SQL that is non-SARGable. This will cause SQL Server to scan entire tables instead of using an existing index. For details and examples see this, this and this. There is no need for me to explain this in depth, as it has been written about quite frequently.
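As a quick illustration of the idea, with a hypothetical Sales.Orders table, the first query below wraps the column in a function and cannot seek on an index, while the second asks the same question in a SARGable way:

/* Non-SARGable: the function on the column forces a scan. */
SELECT OrderID, OrderDate
FROM Sales.Orders
WHERE YEAR(OrderDate) = 2020;

/* SARGable rewrite: the bare column can use an index on OrderDate. */
SELECT OrderID, OrderDate
FROM Sales.Orders
WHERE OrderDate >= '20200101'
  AND OrderDate < '20210101';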

B. Overuse of cursors. SQL is a set-based language, so making it do operations row by row is far less efficient than using set-based logic. Take a look at this post and this post.

C. Overuse of SELECT *. Some tables are very wide and have millions or even billions of rows. If you don’t truly need every column, then do SQL Server and your application a favor and only return the columns that are actually needed!

D. Be careful with scalar user defined functions. This goes back to the idea of SQL being a set-based language. A scalar UDF returns a single value for each value passed to it. When you use this sort of logic and pass large numbers of rows into it, each row is processed one at a time inside the function to return a value. Also, SQL Server doesn’t do a good job of “seeing inside” scalar UDFs and showing you that one is present in a query plan. As a result, you might not notice this sort of thing when you’re looking at a query plan. Additionally, scalar UDFs kill SQL Server’s ability to go parallel. For more, take a look at this, this, and this.

This list could be much, much longer. The point here is that how you write your T-SQL often has a direct impact on performance. If you write T-SQL for applications, I strongly encourage you to look at the blogs of folks like Erik Darling and Kendra Little, as well as sites that have a large number of entries on T-SQL, like this one.

Next Steps To Take

If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, on Twitter, or by email at Leem@leemarkum.com and I’ll be glad to offer assistance.

 

How I Became a Database Administrator

I recently saw a Tweet from Kendra Little where she asked the simple question, “What got you into databases?”  I’m intrigued by people’s stories of how they got into their current careers so I looked through the thread. There were lots of interesting paths shared and even a blog post was shared by someone who had recently written about how they got into a career involving databases. Kendra’s question and all of the interesting responses inspired me to write a post about my story.

 

It was a Dark and Dreary Night

 

Actually, it was February 2008. For a number of years I had been a team Supervisor at a physical security company called Interface Security; think ADT only much smaller. I was good at leading the team of 8-12 people as we used a critical piece of software to interact with customer accounts. I understood a lot about this software and was often the “lucky” person to be selected to do application failovers. This involved using a custom GUI from the vendor to shut down all the services of the application and start them back up, pointed to a new server. I haven’t worked there in almost 7 years, but I can still see this GUI in my mind.  Anyway, the woman who was the liaison between our company and the software maker was leaving her job to go work for the software maker, Bold Technologies.

I had reached about the top of the pay scale in my position and, at that point, there really was nowhere else for me to go in the company. The next few positions above me were taken and those people weren’t going to leave any time soon. The employee who was leaving asked me to apply for her job. At first I was adamant that I didn’t want to do that and that I didn’t know enough. She insisted that I did know enough and that I should apply. I gave it some more thought and decided to throw my hat in the ring. After a conversation with my boss two levels above me (Dan Reynolds) and about a month later, I had the job and the title of Manitou System Administrator. Manitou was the name of the software.

Interface Security was in the planning phases of a major version upgrade for this software. I took the job thinking I was just going to manage software. However, we were also going to be doing hardware upgrades for the servers this application was running on. My desire to understand everything even tangentially connected to the software got the better of me. We completed the software upgrade later that year and the server hardware upgrade as well.

Lee, Meet SQL Server

As part of that application upgrade process, I learned that there was this thing called SQL Server installed on the servers we were going to replace. I dove in head first to learn all I could about SQL Server. I learned that the process I was controlling with this company’s custom GUI was actually transactional replication under the covers. I loved learning about it and it was an important process to understand for my company and for my role.

About 18 months after starting the new role I began learning T-SQL using whatever free tutorials I could find on the internet. I started experimenting with SQL Server Reporting Services. I began to augment the application’s reporting capabilities with SSRS, at first just building the reports and making them available via the Report Manager and then later by scheduling them to be delivered to various people. In Jan 2012 I was officially moved into the I.T. department.

During that time I discovered the Microsoft Certified Master video training that Kimberly Tripp and Paul Randal recorded. I watched all the videos, some of them multiple times. There was so much I didn’t understand, but as I kept watching them and searching for information on the internet, the concepts made more and more sense. I was by no means on my way to being an MCM for SQL Server, but I was learning rapidly.

Somewhere in the following years I started experimenting with what at the time were brand new features in Excel to help with reporting – Power Query and Power Pivot. I used those new tools to pull in and visualize data so that I had both SSRS and these new Excel tools at my disposal.

I also found Brentozar.com during those years. At the time, it was Brent, Kendra Little, and Jeremiah Peschka. Later, as I continued to learn from all of them, they added Jes Borland. This post was one of my first forays into performance tuning. I found it after the I.T. Director asked me to look into the performance problems that the financial system was having. It was also on SQL Server.

Earning a Microsoft Certification

In May of 2012 I completed a certificate in database technology from the Continuing Education Center at the University of Missouri at Saint Louis. The courses I took there were concentrated two-day courses on various parts of SQL Server. They advanced my career greatly. Somewhere along the way, I bought a book called SQL Server 2008 R2 Unleashed. I read that book, watched the aforementioned MCM videos and read all the blog posts I could from BrentOzar.com. I bought a Microsoft practice test and then a test from another vendor as well. At night around 9pm, after my three kids and my wife went to bed, I would study for about 2 hours a night, 5 nights a week. I studied probably 4-8 hours on the weekends too. This went on for months.

In May 2013 I got my first certification. It was for Microsoft SQL Server 2008 Implementation and Maintenance. I took the test on the eighth anniversary of my dad’s passing. I took it for me to advance my career, but I took the test on that day to attempt to honor him as well. The test was challenging, but I passed and I was ecstatic! I knew he would have been so proud.

A Change of Title

I held the Manitou System Administrator title until March 2014 and continued to grow more and more as a data professional. In March 2014 I got a title change to Database Administrator. I was elated! I had achieved a remarkable career change from a Call Center Supervisor role to a Database Administrator!

 

Acknowledgments

Wife and Kids –

My wife, Dacia, missed time with me on the weekends when I was studying for my first certification to help me break into the world of database administration. She handled the kids while I was sequestered away in my basement office studying like crazy. My kids, of course, missed time with me because of that too. I missed being with them, but it was necessary to help all of us as I was trying to advance my career and support a family.

Dan Reynolds – Interface Security

My previous boss, Director and then later Vice President of Customer Operations, Dan Reynolds was instrumental in my career development and I’d be remiss if I didn’t mention him. He believed in me from years previous when I had been working under him in the Call Center. I continued to work closely with him even after I  officially went into I.T. He took me to the user conference for the software I was managing. It wasn’t cheap and I was grateful. I learned a lot at those conferences and met a lot of great people at Bold Technologies while I was at those conferences. I met a lot of people from other companies working in the same role I was in. Dan helped me pursue education for SQL Server at the Continuing Education and Training Center at the University of Missouri at Saint Louis.

Amy Spurgeon – Interface Security/Bold Technologies

Amy was the support person who encouraged me to apply for her job before she left. She pushed me to consider the possibilities.

Josh Tafoya – Bold Technologies

Josh was a person in the support department at Bold Technologies when I started my application support role. He knows the Manitou software very well. He helped me understand the software better and he taught me a lot about what it meant to be an I.T. professional.

Matthew Narowski – Bold Technologies

Matthew held a number of roles at Bold while I was at Interface and was eventually named either CEO or President at Bold. He was patient and taught me a lot about I.T. support. He explained a great many things to me about the world of I.T. in general.

I am sure there are others who made contributions to my transition into a new career. I thank all of you.

 

I hope my story helps and encourages you along your own journey.

 

 

How to Configure SQL Server Temporal Tables Part 2

 

In the initial post, we talked about some very basic items to get you up and running with temporal tables. In part 2, I’ll cover a few more things to think about and implement when working with temporal tables.

In the first post, we looked at altering an existing table to be a temporal table. Now we will look at what is involved in making a table be a temporal table from the very beginning.

Designing a Temporal Table from Scratch

First of all, you can create a table to be a temporal table from the beginning, without naming the history table. But you get an ugly history table name in the following format:

[dbo].[MSSQL_TemporalHistoryFor_tableObjectId].

In my test case that turns out to be this:

[dbo].[MSSQL_TemporalHistoryFor_1913773875].

Also, notice that in this example below, the SysStartTime and SysEndTime required columns are not designated as hidden. This means they can be returned in queries, so be aware of this, particularly when using SELECT *.

This syntax also results in a clustered index being created on the history table using the SysStartTime and SysEndTime columns.

Observe that I’m taking advantage of the inline index creation syntax that was new in SQL Server 2014. Look at examples S, T, and U to see more information on how to use this feature.

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName,FirstName)
)
WITH (SYSTEM_VERSIONING = ON);

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

This allows me to create the index on LastName, FirstName directly in the CREATE TABLE statement and not have to do that afterward with a lengthier syntax.

INDEX IX_LastName_FirstName (LastName,FirstName)

To clean up this example and try a slightly different approach, run this code:

ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = OFF);
DROP TABLE dbo.Employee;
/* The auto-named history table becomes an ordinary table once versioning is turned off and must be dropped separately by its generated name. */

 

How To Name the History Table

You can create a temporal table and name the history table so it is more human friendly. This syntax also results in a clustered index being created on the history table using the SysStartTime and SysEndTime columns.

If you want to be able to create a clustered columnstore index, or a different clustered index, then there are 2 choices:
1. Use this syntax, then drop the clustered index and create the new clustered index.
2. Create both the temporal table and the history table yourself, and specify the indexes you want on the history table when you create it.

 

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName,FirstName)
)
WITH (SYSTEM_VERSIONING = ON(HISTORY_TABLE = dbo.EmployeeHistory));

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

DROP INDEX IX_EmployeeHistory ON dbo.EmployeeHistory;

CREATE CLUSTERED COLUMNSTORE INDEX IX_CC ON dbo.EmployeeHistory;

 

Indexing Decisions for Temporal Tables

Per MS Docs, if you are creating this table as a temporal table for the purpose of auditing row changes, then a clustered rowstore index on the SysStartTime, SysEndTime and the PK columns of the temporal table is a good choice. Otherwise, a clustered columnstore index is a good choice for analytic queries.

How to Handle Data Retention for Temporal Tables

The simplest option for managing data retention on a history table is to use the HISTORY_RETENTION_PERIOD syntax. The available time units for history retention are DAYS, WEEKS, MONTHS and YEARS. SQL Server then uses this information to manage the deletion of data from the history table for you. Notice below that I have added HISTORY_RETENTION_PERIOD = 6 MONTHS to the syntax.

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName,FirstName)
)
WITH (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.EmployeeHistory,
     HISTORY_RETENTION_PERIOD = 6 MONTHS)
    );

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

Other options for handling data retention include Stretch Database, which places the entire history table, or part of it, in Azure. Partitioning the history table and writing your own custom cleanup script are also options.
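To verify how retention is configured on the temporal tables you already have, you can check sys.tables. This is a sketch that assumes SQL Server 2017 or later, where the retention columns exist:

SELECT t.[name] AS TemporalTableName,
       h.[name] AS HistoryTableName,
       t.history_retention_period,
       t.history_retention_period_unit_desc
FROM sys.tables AS t
INNER JOIN sys.tables AS h
    ON t.history_table_id = h.[object_id]
WHERE t.temporal_type_desc = 'SYSTEM_VERSIONED_TEMPORAL_TABLE';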

Next Steps To Take

  1. Look here for a list of other considerations and limitations.
  2. Think about the tables in your environment where this feature could be useful and do some experimenting.
  3. Talk to your business personnel about this feature and ask them if they know of tables where knowing how the data has changed over time would be helpful. In some cases, you might find that the business is sometimes asked questions about how or when their data changed. This feature is perfect for answering that question.
  4. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

How to Configure SQL Server Temporal Tables

Temporal tables were a new feature in SQL Server 2016. They allow for the tracking of changes without having to set up change tracking, change data capture or triggers. For certain uses they are not a replacement for something like change data capture, but they work well if the purpose is simply having a record of changes. Temporal tables can also be used for auditing changes to data as well as point-in-time analysis, where users might want to compare data changes over time, or even at a specific point in time.

You can create a new table as a temporal table from the very beginning. However, I think a more common scenario is going to be converting existing tables to be temporal tables so I will start there.

The Initial setup

I’ll make a backup of the Production.Product table. I do this as a general practice to keep from having to restore the database to return things to normal after demos and experimentation. This way, I can drop the backup of the table and move on.

USE AdventureWorks2016CTP3;
GO

DROP TABLE IF EXISTS Production.ProductBackup
SELECT TOP (1000) [ProductID]
      ,[Name]
      ,[ProductNumber]
      ,[MakeFlag]
      ,[FinishedGoodsFlag]
      ,[Color]
      ,[SafetyStockLevel]
      ,[ReorderPoint]
      ,[StandardCost]
      ,[ListPrice]
      ,[Size]
      ,[SizeUnitMeasureCode]
      ,[WeightUnitMeasureCode]
      ,[Weight]
      ,[DaysToManufacture]
      ,[ProductLine]
      ,[Class]
      ,[Style]
      ,[ProductSubcategoryID]
      ,[ProductModelID]
      ,[SellStartDate]
      ,[SellEndDate]
      ,[DiscontinuedDate]
      ,[rowguid]
      ,[ModifiedDate]
   INTO Production.ProductBackup
  FROM [AdventureWorks2016CTP3].[Production].[Product]

 

Now that we have our table that we want to convert to a temporal table, let’s suppose that AdventureWorks wants to raise the price of all products in the Accessories product sub-category. The query below will get that initial data and confirm that our expression is right prior to updating the actual data. It will also tell us how many rows we should expect to see changed.

 

SELECT P.[Name] AS ProductName, P.ProductNumber, P.ListPrice, CAST(P.ListPrice *1.10 AS DECIMAL(9,2)) AS NewListPrice ,PC.[Name] AS ProductCategoryName, PSC.[Name] AS ProductSubCategoryName
FROM Production.ProductBackup AS P
INNER JOIN Production.ProductSubcategory AS PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
INNER JOIN Production.ProductCategory AS PC ON PC.ProductCategoryID = PSC.ProductCategoryID

WHERE PC.[name] = 'Accessories'
ORDER BY ProductCategoryName, ProductSubCategoryName

 

Here is a partial screenshot of the results. Twenty-nine rows were returned.

 

AdventureWorks ProductSubCategory Accessories
Price increase for Accessories product subcategory

A sometimes recommended practice for temporal tables is to create a separate schema to hold the history table, so we’re going to do that.

/*
Create a new schema to hold the history table
*/
CREATE SCHEMA History;

 

Enabling Temporal Tables In SQL Server

Now we will set up system-versioning for our existing table. Two new columns need to be added to the source or “temporal table.” These must be specified as DATETIME2 and can either be hidden or not. If the keyword HIDDEN is used, then these two columns do not show up at all when this table is queried. You might not want the columns to appear in query results because you might be running SELECT * against the table, even though you know not to do that in a production system, right? Also, if the columns are added as HIDDEN, then you don’t have to account for these columns when you do INSERTs.

Once the table has been altered with the new columns, we either specify defaults for those columns, as I did here, or explicitly define them as NOT NULL and populate them. We also need to add the PERIOD FOR SYSTEM_TIME statement. After that, there is one additional ALTER TABLE statement to turn on system versioning.

/*
Enable SystemVersioning on existing table
*/

ALTER TABLE [Production].[ProductBackup]
  ADD SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
         CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME()
      , SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
            CONSTRAINT DF_SysEnd DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);

ALTER TABLE [Production].[ProductBackup]
SET (SYSTEM_VERSIONING = ON(HISTORY_TABLE = History.ProductBackup));

What Happens When You Modify Data In a Temporal Table?

Let’s update some rows to populate the history table.

/*
Update some rows to populate the history table. 
We will raise the price of everything in the Accessories category by 10%.
Should be 29 rows changed.
*/

BEGIN TRAN
  UPDATE Production.ProductBackup
  SET ListPrice = CAST(ListPrice *1.10 AS DECIMAL(9,2))
  FROM Production.ProductBackup AS P
  INNER JOIN Production.ProductSubcategory AS PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
  INNER JOIN Production.ProductCategory AS PC ON PC.ProductCategoryID = PSC.ProductCategoryID
  WHERE PC.[name] = 'Accessories';
  
COMMIT TRAN

 

Let’s examine one method to see the current price in the source table and the previous price from the rows now in the history table. This will also show the system start and end times for those rows. This date/time range shows when each row was valid and present in the source table.

 

/*

One method to show the old data and the new data
*/
SELECT PB.[Name] AS PBName, PB.ListPrice AS CurrentListPRice, 
PBH.ListPrice AS PreviousListPrice, PBH.SysStartTime, PBH.SysendTime
FROM Production.ProductBackup AS PB
INNER JOIN History.ProductBackup AS PBH ON PB.ProductID = PBH.ProductId --AS PB
--FOR SYSTEM_TIME ALL 
WHERE PBH.ProductID IN
(
/*These were the 29 IDs I knew we had changed because we returned them in the previous SELECT query
*/
707,708,711,842,843,844,845,846,847,848,870,
871,872,873,876,877,878,879,880,921,922,923,
928,929,930,931,932,933,934
)
ORDER BY PBH.ProductID;

 

Here is a partial screenshot of the results. This table indicates that these rows existed in the source table between August 24th at 04:01:00 and August 24th at 04:01:29.

 

Viewing SQL Server Temporal Table History
A view of SQL Server temporal table history
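The join above is one way to compare current and previous values; the temporal syntax itself can also do it. As a minimal sketch, FOR SYSTEM_TIME ALL returns both the current row and every historical version of it, and the hidden period columns appear when you name them explicitly:

SELECT PB.ProductID, PB.[Name], PB.ListPrice, PB.SysStartTime, PB.SysEndTime
FROM Production.ProductBackup FOR SYSTEM_TIME ALL AS PB
WHERE PB.ProductID = 707 /* one of the ProductIDs changed above */
ORDER BY PB.SysStartTime;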

 

Data Retention for SQL Server Temporal Tables

One additional thing to consider when setting up temporal tables is data retention for the history table. There are several methods to manage data retention, but I think the simplest is to set a retention policy on the table itself and let SQL Server handle the data cleanup. By default the history table will store data indefinitely.

First, check to see that data retention is enabled in the database by querying the is_temporal_history_retention_enabled column in sys.databases. If it is not enabled, then use the script below to enable it, remembering to set the database context for your environment within the script.
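For example, a quick check against sys.databases looks like this, using the demo database from earlier:

SELECT [name], is_temporal_history_retention_enabled
FROM sys.databases
WHERE [name] = 'AdventureWorks2016CTP3';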

ALTER DATABASE [<myDB>]
SET TEMPORAL_HISTORY_RETENTION ON

To alter an existing temporal table so that history retention is being managed you can do the below. Keep in mind that values with DAYS, WEEKS, MONTHS, and YEARS are all valid for history table retention.

ALTER TABLE Production.ProductBackup
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 9 MONTHS));

 

Next Steps to Take

  1. Head over to the MS Docs to learn more about temporal tables in SQL Server.
  2. If you need to know WHO made a change as well as what was changed then check out this post from Aaron Bertrand.
  3. Look for subsequent posts from me that will cover more about temporal tables.
  4. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

Top 3 Most Popular Posts of 2020

So, I realize it’s now almost April of 2021 when I’m posting this. Normally this sort of post would come out in January for the previous year. However, I have a reason for this being late and I’ll actually post about that separately.

Showing My Blog Some Love

In mid-2020 I decided to give my blog more attention. There were a number of reasons for this. First, I’d already spent a fair amount of money over the years on hosting and related things, so I didn’t want to waste that. Second, I really wanted to start giving back to the SQL Server community. I have received a lot of learning and help from a number of regular, and occasional, bloggers. I have also purchased and mostly read my fair share of SQL Server books, which is another way the community has helped me. Third, I wanted a place to record my own troubleshooting and learning so that when I needed to do something again, I would have a record of what to do. By the way, I have seen this repeatedly listed as a reason that someone blogs.

Blog Posts by Month

So, here is the breakdown of the number of posts by month for 2020:

  • April – 1
  • July – 1 – This post came in the last week of the month and marks my decision to start blogging more consistently.
  • Aug – 3
  • Sept – 4
  • Oct – 4
  • Nov – 7 – I had built a good backlog of posts that were ready to publish and nearly reached my goal of posting twice a week for the entire month
  • Dec – 4

To get the list of the most popular blog posts I looked at Google Analytics PageView data for Jan 1, 2020 – Dec 31, 2020.

Popular Post #1

The most viewed page based on this data had 354 views and was part 1 of a series on PowerShell for the DBA. The series was meant to show some “getting started” techniques so that someone could open PowerShell and start exploring what the tool could do for them as a DBA. I explored how to find commands that might be of interest and how to use PowerShell’s help system to figure out how a command works. I also provided some practical examples of PowerShell to use in everyday scenarios.

https://leemarkum.com/archive/2020/11/introduction-to-powershell-for-the-dba-part-1/ 

Popular Post #2

The second most viewed page had 332 views and covered 5 different ways to make a SQL Server backup. Backups are so important. They give you and your business a way to recover data that has been lost or damaged. As a Database Administrator, you need to be able to recover, and that all starts with taking backups. That seems obvious, but I still read stories of businesses that aren’t even backing up their databases. If you do have backups, you also need to test whether you can actually use them to restore data.

https://leemarkum.com/archive/2020/09/5-ways-to-make-sql-server-backups/ 

Popular Post #3

The third most viewed post had 273 views and was about using a new feature in SQL Server 2019. This feature is based on the PolyBase technology that arrived in SQL Server 2016. However, the feature has now been extended to allow SQL Server to create external tables over OLE DB and ODBC sources, such as SQL Server, MS Access, and, yes, even things like Oracle and IBM iSeries/AS400. The external, or virtual, table mostly uses built-in drivers. With this feature you can run SELECT statements against remote data sources without a linked server. The environment I’m in has an AS400/IBM iSeries, so I wanted to see if I could get the feature to work with that data source.

https://leemarkum.com/archive/2020/04/querying-an-as400-using-sql-server-2019-data-virtualization/

There you have it. I hope you enjoyed these posts in 2020, and if you haven’t seen them, give them a click and read through the information. I hope you learn something from the posts and if you have questions or comments reach out to me here, or on Twitter, and I’ll be happy to talk to you.

Insufficient System Memory – Failed Allocate Pages

 

Unable to Start SQL Server

On my own local SQL Server I ran across a problem starting the SQL instance. I went to SQL Server Configuration Manager and manually started the SQL Server instance. The UI showed the instance had started. I opened SSMS and tried to connect. I waited, and waited some more, until the connection attempt finally failed and threw an error.

How To Locate the SQL Server Error Log

I knew I could look at the SQL Server Error Log to get more information on what had gone wrong. I browsed to the directory where the log file lives. On any system, that is found by going to the drive where the binaries were installed and navigating to \Program Files\Microsoft SQL Server\SQLServerVersionNumber.InstanceName\MSSQL\Log. In that location, look for files named ERRORLOG and errorlog.1 through errorlog.6. The file with no number at the end is the most recent Error Log. Double-click it and open it in your favorite viewer. As I read through the file, I came across the below.

Screenshot: Failed Allocate Pages - Insufficient System Memory

Also, you can read the SQL Server Error Log using the extended stored procedure xp_readerrorlog. Here is an example that shows the memory-related messages from the SQL Server Error Log. Of course, you can only do that once your SQL Server is running.

EXEC xp_readerrorlog 0, 1, N'memory';
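Relatedly, once the instance is up you can ask SQL Server where the error log lives rather than browsing the file system. A minimal sketch:

SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogPath;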

The next question was: how do I investigate this if I can’t start the SQL Server?

Startup Parameters for SQL Server

SQL Server has startup parameters that can be added in SQL Server Configuration Manager to control what happens when SQL Server starts. I was looking for something that would help me get SQL Server started and let me poke around. At first I tried -m for single-user mode, but SQL Server still wouldn’t start. Then I found the -f parameter, and it sounded like just what I needed.

“Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value (for example, over-committing memory) has prevented the server from starting. Starting SQL Server in minimal configuration mode places SQL Server in single-user mode. For more information, see the description for -m that follows.”

Now that I had that information, I opened SQL Server Configuration Manager, located the SQL Server instance I couldn’t start, and right-clicked it. I then chose Properties and typed -f on the Startup Parameters tab.

I then clicked Add to the right of the startup parameter I had just entered. I clicked OK and was prompted that I needed to restart the SQL Server service to make the change effective. So, I restarted SQL Server, attempted my connection again, and I was in!

SQL Server Configuration Manager Startup Parameters

 

Because this was an insufficient memory error, I right-clicked the name of my SQL Server instance and selected Properties, then Memory. I saw that only 1024 MB was assigned to the SQL instance for Max Server Memory. I increased it to 3072 MB and clicked OK.
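If you prefer T-SQL while connected, the same Max Server Memory change can be made with sp_configure. Here is a minimal sketch using the 3072 MB value from above:

/*
Set Max Server Memory to 3072 MB via sp_configure
*/
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;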

I went back to SQL Server Configuration Manager and removed the -f startup parameter and restarted SQL Server. Now I am able to start up SQL Server and connect with no issues.

What To Do Next

  1. Go back to the link on SQL Server Startup Parameters and familiarize yourself with what is in there. You never know when you might need functionality from a startup parameter to get you past a problem.
  2. Do some research on Max Server Memory.
  3. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.