Steps to Catch Up a Log Shipping Secondary

 

If you are using Log Shipping, there have probably been times when you discovered that the Log Shipping secondary is not up to date. Perhaps you found out by right-clicking your SQL Server instance and running the performance dashboard report for Log Shipping Status. If you are not familiar with this report, check here.

Perhaps there is a Reporting team that uses that secondary, and they met you at your desk upon arrival to inform you that the database is not online, so they can't work yet. So what do you do when you realize that the secondary is so far behind that it is probably not going to catch up on its own? This process assumes that your Log Shipping configuration runs a Log Shipping restore job just prior to the beginning of the business day and that the job leaves the database in Standby mode so it can be read from. At the end of the business day, the restore job has one or more schedules set to kick off restores of the transaction logs taken during the business day.

Here is a process you can use. 

1. Disable the Copy job on the secondary that is behind.

2. View the previously mentioned Log Shipping Status report and determine which file is next in line to be restored.

3. Create a folder in the path where the transaction log backups are being copied to, and perform a Cut operation on all the files that come after the file currently being restored.

4. Paste those files into the folder you made in step 3.

Here is what you have accomplished with this. You have set up the database to go to Read Only/Standby once the current file is done restoring. The restore process can't continue because it has reached the last transaction log file in the original folder path. If the Reporting team can live with the data as is, then they can get to work while you set up the next part that will allow you to catch the database up. If the data has to be more current and the database still is not usable for the Reporting team, you have still helped yourself, because the database has to come out of recovery mode before you can do the next part to catch it up.

5. Take a differential backup of the database on the primary.

6. While that is running, open SQL Server Management Studio and set up your script to restore the most recent full backup and the differential you are making. Remember that you may have to use the WITH MOVE option of the RESTORE DATABASE syntax, because the location of your files on the secondary may not be the same as on your primary. Also, if your Log Shipping configuration is designed to leave the database in Standby, then you will need to use the WITH STANDBY clause and name the .tuf file where the redo information is kept. A sketch of such a script follows.
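Here is a minimal sketch of what the step 6 script might look like. The database name, logical file names, backup paths and .tuf location are all placeholders, so substitute your own:

RESTORE DATABASE MyDB
FROM DISK = 'D:\SQLBackups\MyDB_Full.bak'
WITH MOVE 'MyDB_Data' TO 'E:\SQLData\MyDB.mdf',
MOVE 'MyDB_Log' TO 'F:\SQLLogs\MyDB_log.ldf',
NORECOVERY; -- stay in restoring mode so the differential can be applied

RESTORE DATABASE MyDB
FROM DISK = 'D:\SQLBackups\MyDB_Diff.bak'
WITH STANDBY = 'E:\SQLData\MyDB.tuf'; -- Standby keeps the database readable, matching the Log Shipping config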

7. If the Reporting team started querying after step 3, you will need to ask them to disconnect, because you can't restore a database while other sessions are connected. Alternatively, you could use ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE at the beginning of your RESTORE script, but that would be inconsiderate.

8. Once the secondary is done restoring the transaction log it was on, start the restore scripts you built in step 6. Once this step is done, you can inform the Reporting team that the database is ready to be queried.

9. Cut and paste the files from steps 3 and 4 so they are back in the target folder for the Copy job.

10. Re-enable the Copy job.
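Steps 1 and 10 can be done from SQL Server Agent in SSMS, or with T-SQL. A minimal sketch, assuming the Copy job is named LSCopy_MyPrimary_MyDB (your job name will differ):

USE msdb;
GO
-- Step 1: disable the Copy job on the secondary
EXEC dbo.sp_update_job @job_name = N'LSCopy_MyPrimary_MyDB', @enabled = 0;
GO
-- Step 10: re-enable it once the secondary is caught up
EXEC dbo.sp_update_job @job_name = N'LSCopy_MyPrimary_MyDB', @enabled = 1;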

Three Reasons to Use a Custom Database Role

Ever had a situation where a small in-house application written for a small group of employees needed some permissions in the database? How do you handle that? I want to walk through the scenario and provide three reasons that using a Database Role might be the answer you are looking for.

Let's first cover, at a high level, the different levels of permissions in SQL Server and the associated commands. In SQL Server there are server level permissions for the SQL instance, and there are database level permissions. Within the database level there are also specific permissions for objects like tables, views and stored procedures. Database permissions can also be grouped together in a database role. In fact, there are some built-in database roles in SQL Server, as shown below. If you aren't familiar with these, you can find out about them here.

Database Roles
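If you would rather query for the database roles than browse for them, this quick sketch lists them from sys.database_principals (run it in the database of interest):

SELECT name
FROM sys.database_principals
WHERE type = 'R' -- 'R' = database role
ORDER BY name;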

 

As an example, suppose there is an application being written for a small department in your company. The application needs access to your company database, but not to everything in the database. Perhaps the app only touches a few tables. Someone could grant the server level role of sysadmin to a Windows account, or to a Windows group that contains the necessary people added to it in Active Directory. That would be a "sledgehammer" approach to providing permissions, and it would break the principle of least privilege. In this scenario, a database role could be used to accomplish the task at hand.

On your domain you or the System Administrator can create a group called MyDomain\DepartmentName. With the pseudo-code below as a model, you can create the Login and User.

USE master;
GO
CREATE LOGIN [MyDomain\DepartmentName] FROM WINDOWS;
GO
USE MyDatabase;
GO
CREATE USER [MyDomain\DepartmentName] FOR LOGIN [MyDomain\DepartmentName];

In this new AD group, place the members of the department. This allows the department members to use their already familiar Windows passwords, which keeps things easier for them. Using Windows groups also makes things easier for you. If you aren't convinced of that, read this post.

At this point some people might still take the approach of granting big permissions by mapping the new database user to the db_owner role in the database and calling the permissions process done. However, do you really want a group of people in a department to have complete control over the entire database? Somehow I doubt it, and I doubt your auditors want to see that either.

Permissions could be granted to the user account itself inside SQL Server, but what if other people who are not yet in the appropriate AD group want access? Now you either have to add them to that AD group, which might not make sense from an organizational perspective, or you have to start this process all over again with a new login and user. This is where database roles come in. Below is example code for how you would do that, based on the process we already have in place.

USE MyDatabase;
GO
CREATE ROLE MyNewDBRole;
GO
ALTER ROLE MyNewDBRole ADD MEMBER [MyDomain\DepartmentName];
GO
GRANT EXECUTE ON [dbo].[GetEmployees] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[InsertEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[DeleteEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[UpdateEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[GetSales] TO MyNewDBRole;

 

So what are some reasons to use a process like this? First, the application will follow the principle of least privilege. This means that the application and its users will only have the permissions required for the functionality of the application. This keeps your SQL Server more secure, which makes auditors happy.

Second, this type of process eases administration. When the application is expanded and more people begin to use it, and for different purposes, just create another group like the first one demonstrated and add that group to the role, then grant any additional permissions needed to the role, as sketched below. No adding individual logins to the SQL Server, and no applying permissions to many individual logins, which would be a pain. Now you have a single point of permissions administration within SQL Server – the database role. Additionally, if there are new employees in the department who need access to the application, simply add those people to the AD group and you're done!
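A minimal sketch of that expansion, assuming a second hypothetical AD group called MyDomain\OtherDepartment and a hypothetical new procedure:

USE master;
GO
CREATE LOGIN [MyDomain\OtherDepartment] FROM WINDOWS;
GO
USE MyDatabase;
GO
CREATE USER [MyDomain\OtherDepartment] FOR LOGIN [MyDomain\OtherDepartment];
GO
-- The new group immediately inherits everything already granted to the role
ALTER ROLE MyNewDBRole ADD MEMBER [MyDomain\OtherDepartment];
GO
-- Any additional permission granted to the role reaches both groups at once
GRANT EXECUTE ON [dbo].[GetOrders] TO MyNewDBRole;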

Third, you and the System Administrator get some face time!! Who doesn’t like a little collaboration?

Retiring sp_who and sp_who2

With the advent of SQL Server 2005 came new objects called Dynamic Management Views, also known as DMVs. One of the great places where the DMVs brought improvements is in investigating what is happening on a SQL Server at the moment production troubleshooting is required. When things go south and the DBA is contacted, the DMVs are invaluable. Despite this, there are still some folks out there using sp_who and sp_who2. With this post I want to demonstrate why you should stop using sp_who and sp_who2.

To demonstrate this, we need a blocking transaction. I connected to AdventureWorks2014 on my local instance, put the first query below in one query window and the second query in a second window. Execute these queries and blocking will occur.

 

    -- Query window 1: run this first; it holds an open transaction and the lock
    BEGIN TRANSACTION
      UPDATE [Person].[Person]
      SET FirstName = 'Ken'
      WHERE BusinessEntityID = 1
    --COMMIT TRAN

    -- Query window 2: run this second; it will be blocked by window 1
    BEGIN TRAN
      UPDATE [Person].[Person]
      SET FirstName = 'Jason'
      WHERE BusinessEntityID = 1
    --COMMIT TRAN

Now that you have a blocking transaction, open a third window in SQL Server Management Studio (hereafter SSMS) and run the below.

 

sp_who;
GO
sp_who2;
GO

SELECT 
  ES.session_id AS BlockedSession
, BlockedSQL.text AS BlockedSQL --Not in sp_who or sp_who2
, ES.login_name
, ES.program_name
, ES.host_name
, ER.blocking_session_id
, BlockingSQL.text AS BlockingSQLText
, ER.cpu_time
, ER.logical_reads  --Not in sp_who or sp_who2
, ER.reads --Not in sp_who or sp_who2
, ER.writes --Not in sp_who or sp_who2
, ER.lock_timeout  --Not in sp_who or sp_who2 
--Lock time-out period in milliseconds for this request
, ER.row_count --Not in sp_who or sp_who2 
/*Number of rows that have been returned to the client by this request.
*/
, ER.total_elapsed_time --Not in sp_who or sp_who2
, CASE CAST(ER.transaction_isolation_level AS VARCHAR(20)) 
  WHEN '0' THEN 'Unspecified'
  WHEN '1' THEN 'ReadUncommitted'
  WHEN '2' THEN 'ReadCommitted'
  WHEN '3' THEN 'Repeatable'
  WHEN '4' THEN 'Serializable'
  WHEN '5' THEN 'Snapshot'
  END AS transaction_isolation_level --Not in sp_who or sp_who2
, ER.wait_type --if currently blocked this will have a value
, ER.wait_time  --Not in sp_who or sp_who2
, ER.last_wait_type --Not in sp_who or sp_who2 
/*If session was previously blocked, this will show what the wait type was.
*/
, ER.wait_resource --Not in sp_who or sp_who2.  
/*Can use wait_resource and expand this query to use other DMVs to find what table and index is being waited on, for example.  
*/
, ER.deadlock_priority --Not in sp_who or sp_who2
, DB_name(ER.database_id) AS DBName  
, ER.executing_managed_code  
/*Not in sp_who or sp_who2 --Indicates whether a specific request is currently executing common language runtime objects, such as routines, types, and triggers. It is set for the full time a common language runtime object is on the stack, even while running Transact-SQL from within common language runtime
*/
, EC.client_net_address AS ClientIPAddress 
--Not in sp_who or sp_who2

FROM sys.dm_exec_sessions AS ES
INNER JOIN sys.dm_exec_requests AS ER ON ES.session_id = ER.session_id
INNER JOIN sys.dm_exec_connections AS EC ON ER.blocking_session_id = EC.session_id
CROSS APPLY sys.dm_exec_sql_text(ER.plan_handle) AS BlockedSQL
CROSS APPLY sys.dm_exec_sql_text(EC.most_recent_sql_handle) AS BlockingSQL

From sp_who and sp_who2 you should get output similar to the below, where blocking information is provided for SPIDs 52 and 53.


sp_who and sp_who2

The output from the query of the DMVs looks like the below. The results have far more columns than can be displayed in a screen capture.


Blocked and Blocking SQL

Please note that there is very little difference between the data in sp_who and sp_who2. However, there is a big difference between those older object calls and the third script. Out of the myriad of columns returned by the third script, very few are returned by sp_who or sp_who2.

Be sure to go back to SSMS and commit the two queries involved in the blocking to clean up the demo.

Some items of interest in the third script that are not in the older sp_who or sp_who2 are:

  1. The SQL statement being run by both the blocked and the blocking process
  2. Number of logical (in memory) reads
  3. Breakdown of reads versus writes
  4. Whether or not the requests are executing in CLR code
  5. Wait Resource: this shows the DBA things like whether there is a KEY or PAGE wait occurring and where
  6. Isolation level of the statements involved
  7. IP address of the client involved

By using the third query above, the SQL statements of both the blocked and the blocking session can be retrieved without the extra step of running DBCC INPUTBUFFER, as would be needed when using sp_who or sp_who2 (see the example below). This will save the DBA valuable time when troubleshooting a live issue.
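For comparison, here is that extra step a sp_who or sp_who2 user would have to run for each SPID of interest, using session 53 from the screenshots above as an example:

DBCC INPUTBUFFER(53); -- returns the last statement sent by session 53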

The DMVs are just so much more powerful than calls to sp_who and sp_who2, which depend on sys.sysprocesses, a view that has been deprecated for some time now. I would strongly encourage experimentation with the DMVs. They will be a valuable tool in the DBA toolbox.

Three Places to Get Information On New Updates for SQL Server

There are many consistently hot topics around SQL Server. One such topic is, “How does a DBA obtain the latest information on updates?” In this post, I will discuss three places that I frequent for information on SQL Server updates.

The first link I want to discuss is http://sqlserverbuilds.blogspot.com/. This site gives you a very comprehensive page of data. At the top there is a summary grid that lists the SQL Server releases down the left side; across the top are release names like RTM, SP1, SP2, etc. Inside the grid is the build number for each release, and that build number is a link to the corresponding download.

Below that summary grid are similar grids for individual releases. The difference is that in the grids for individual releases the reader will find links to hotfixes, cumulative updates and service packs. There is a grid for each release going back to SQL Server 7.0, and there are even links to 6.5 and 6.0 builds! This site does not have an RSS subscription option.

The next link is http://sqlserverupdates.com/ and is maintained by the people of BrentOzar.com. If you are not familiar with them, they are a consulting company started by Brent Ozar, Kendra Little and Jeremiah Peschka. Kendra and Jeremiah recently departed the company to pursue other interests, turning the reins over solely to Brent Ozar.

This site goes back to SQL Server 2005. The matrix provided shows the reader the SQL Server version, the latest updates, and when support ends for that version. There is a separate link for updates other than the latest one for each version. Clicking the link for Other SQL 2014 Updates, for example, opens a separate page that shows much the same information as the version grids on http://sqlserverbuilds.blogspot.com/. On this site there is also a FAQ that addresses things like the difference between a hotfix, a cumulative update and a service pack. This site does have an option to subscribe to notifications about new updates for SQL Server.

The last link is http://blogs.msdn.com/b/sqlreleaseservices/. It is written by the SQL Server Engineering Team. Rather than being organized in a grid style, this site lists information chronologically as posts are written and published. As a result, there is a mixture of posts about the various version releases all on the same page as separate blog posts. Readers can subscribe with Live Bookmarks.

If you think you don’t need to concern yourself with keeping up to date on SQL Server patches, then please take a look at this post.

I hope this has been helpful and I wish you happy patching!

 

Moving Database Files to Another Drive

You may occasionally have the need to relocate database files. What are some reasons that may prompt this type of change? First, someone might have discovered that the database files are on the same drive as the operating system. For a discussion of why you don't want that, see this post. Second, your database might simply be outgrowing its current drive. Third, perhaps someone installed the transaction log file on the same drive as the mdf file.

So, after the need to do this has arisen, how is the task actually done? The first step, which may have already been done as part of the process that discovered the issue in the first place, is to confirm the current location of your database files. There are at least two ways to approach this. First, if you want to look at all the information on the SQL Server instance, then use the below query that utilizes sys.master_files.

SELECT
MF.name AS LogicalFileName
, MF.physical_name AS FilePath

FROM sys.master_files AS MF

A second way to do this is to review the information for a single database. For that, the query below can be used. Note I am looking for AdventureWorks2008R2 in my particular query.

SELECT
D.name AS DBName
, MF.name AS LogicalFileName
, MF.physical_name AS FilePath

FROM sys.master_files AS MF
INNER JOIN sys.databases AS D ON MF.database_id = D.database_id
WHERE D.name = 'AdventureWorks2008R2'

 


File Names and Paths

Once this is complete, the next step is to verify that the target drive is large enough to hold the file that you plan on moving. You really don't want to get into the middle of this change only to discover that the target drive does not have enough room for the file you want to move. In this case I will be demonstrating how to move a transaction log file, so I will use a DBCC command to return space information.

DBCC SQLPERF(LOGSPACE)

 

DBCC SQLPERF LOGSPACE

 

The DBCC command gives you the current size of the log in megabytes, as well as other useful information like the percentage of space used in that log file. In this case my transaction log file is very small at 3.36 MB for the AdventureWorks2008R2 database.

Another way to see this information for a specific database would be to right click the database name in SQL Server Management Studio and select Tasks > Shrink > Files. Switch the display to the Log file and the dialog will show the currently allocated space for the transaction log file as well as the free space in that file at the moment.

Once you have confirmed that the target drive will hold the file you want to move, the next step is to take the database offline so that the file can be copied to the target drive. Use the script below to take your database offline, replacing AdventureWorks2008R2 with the name of your database.

ALTER DATABASE AdventureWorks2008R2 SET OFFLINE WITH ROLLBACK IMMEDIATE;

 

Now, using Windows Explorer, copy the file from its current location to the new location and run the below query. For this demonstration I simply made a new folder called Logs in the default path, like this: C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Logs

ALTER DATABASE AdventureWorks2008R2
MODIFY FILE (NAME = AdventureWorks2008R2_log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Logs\AdventureWorks2008R2_log.ldf');

Then bring the database online.

ALTER DATABASE AdventureWorks2008R2 SET ONLINE;
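As a final check, you can re-run the earlier sys.master_files query to confirm that SQL Server is now pointing at the new path. Once that is verified and the database is online, the copy of the file left in the old location can be deleted to reclaim the space.

SELECT
D.name AS DBName
, MF.name AS LogicalFileName
, MF.physical_name AS FilePath

FROM sys.master_files AS MF
INNER JOIN sys.databases AS D ON MF.database_id = D.database_id
WHERE D.name = 'AdventureWorks2008R2'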

That’s it. The files have been successfully moved. Congratulations! Performance has been improved or a disaster with a full drive has been averted.

Setting the Min and Max Server Memory Options

Configuring the min and max server memory settings for SQL Server involves a number of considerations. Before we jump into those, let's learn what the min and max server memory options do.

 

MIN SERVER MEMORY

The min server memory option specifies the minimum amount of memory utilized by the SQL Server buffer pool. It may seem that this should mean that when SQL Server starts it will automatically grab the memory equal to the min server memory value. However, this is not what happens. Instead, SQL Server acquires system memory as needed to process a workload. Consequently, if the SQL Server workload remains low, it is possible that the min server memory value is never reached. However, once it is reached SQL Server will not release an amount of memory that would take it below that minimum setting.

For example, let's suppose your server has 128 GB of RAM and the min server memory option is set to 32 GB for SQL Server. SQL Server does not immediately allocate 32 GB of RAM upon server startup. It will acquire memory as needed for the workload. This means that if there is little activity in the database it may take some time before SQL Server allocates 32 GB of RAM. Once this happens, SQL Server will not release memory that would reduce the allocation below the min server memory setting.

 

MAX SERVER MEMORY

To a certain point the max server memory setting does do what it sounds like. Keep in mind this setting is applicable only to the buffer pool. SQL Server can allocate some memory beyond this max number as it consumes memory for things outside of the buffer pool. Also, realize that this setting does not account for memory allocated to other components of SQL Server that may be installed and running, such as Reporting Services or Analysis Services.

 

The default value for Max Server Memory is 2,147,483,647 MB. That number is rather large, 2,047.99 TB to be exact. So what happens if the machine SQL Server is installed on does not have that much memory and the default was not changed? As previously mentioned, SQL Server will continue to acquire additional memory based on workload. As a result, SQL Server could make requests of Windows that put the box under memory pressure, essentially choking the OS by leaving less and less memory for it. There are mechanisms in the OS that allow it to page out to disk some of the information in the SQL Server buffer pool so Windows can recover that memory for itself. This hurts SQL Server performance because disk IO is far slower than keeping operations in memory. Among other places, details on how this process works can be found here, in a blog post discussing the Lock Pages In Memory feature. I will save that setting for discussion in a later post.

 

The point I want to make here is that you can calculate how much memory the Windows OS and other applications may need and then subtract that from the physical memory available. There are recommendations on the internet for such things. Whatever number you arrive at would then be set as the Max Server Memory setting in SQL Server. The Available MBytes perfmon counter should then be judiciously monitored to ensure that there is adequate free memory left on the box, at least 1-2 GB. That will allow plenty of memory for that person who just insists on using an RDP connection to do his or her work, or for a new process to fire off without bringing the server to its knees.

 

For example, let's say that your server has 128 GB of RAM. You may decide to allocate 12 GB for the OS. Perhaps you have a vendor app that needs 4 GB. You take 128 - (12 + 4) and that leaves 112 GB of RAM. If you are running SQL Server 2008 R2 Standard, then the Max Server Memory setting can be set to a maximum value of 64 GB, per SQL instance. SQL Server 2012 Standard supports a maximum of 64 GB and SQL Server 2014 Standard supports 128 GB. If SQL Server 2014 Standard were installed in this scenario, there would be a maximum of 112 GB that could potentially be allocated to the SQL Server buffer pool for the machine described above. Setting this limit at 112 GB, followed by the aforementioned monitoring of the Available MBytes performance monitor counter, would help ensure that SQL Server does not hog memory from the Windows operating system.

 

Now that there is a foundational understanding of what the Min Server Memory and Max Server memory settings are and how they work, let’s look at how to set these options in SQL Server.

 

First, there is the GUI. Open SQL Server Management Studio and connect to a SQL Server instance. Right click the SQL Server instance name and choose Properties. Now select the Memory page in the left pane and manually change the values in the Minimum and Maximum Server Memory boxes. Remember that these values are in MB. Then click OK in the lower right of the dialog box. Also, remember that when you make changes in SSMS you can click the Script button at the top of the dialog and the T-SQL for those changes will be generated for you.

 

Min and Max Server Settings

 

Second, these settings can be changed using T-SQL.  Here is an example script that sets the minimum to 12 GB and the Max to 32 GB.

 

EXEC sys.sp_configure N'show advanced options', N'1' RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'min server memory (MB)', N'12288'
GO
EXEC sys.sp_configure N'max server memory (MB)', N'32768'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0' RECONFIGURE WITH OVERRIDE
GO

 

I encourage you to review these settings on your SQL Server instance and consider whether they need to be adjusted.
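If you would like to check the current values from T-SQL before and after making a change, a quick sketch against sys.configurations does the trick:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');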

Use Windows Groups to Manage SQL Server

Ever come across a SQL Server that had 5, 10, or more individual logins for users in it? I have a few times and it made me wonder if whoever had done such a thing realized how much harder they were making the administration of the server by doing this. I suppose some folks start out with one or two people that they give special permissions or access directly to the SQL instance and underlying databases. Their logic probably goes something like, “It’s just Tommy and Suzie. Those are the only people who will need access for a long time.”

Initially I have to agree with this thought. It does seem simpler when you only have a couple of people to simply create a login and corresponding user for those people. However, SQL Server access and permissions rarely stay that simple. Hopefully you are in a growing company. A growing company will eventually employ additional people who either truly need or believe that they need access to the database. If you are already in the habit of granting direct access using individual logins and database users, it will be tempting to continue this pattern. Before you know it, one day you connect to SQL Server and there is an ever increasing list of individual logins that suddenly catches your attention, and most likely there are a variety of permissions applied to those logins and users. This becomes a management nightmare.

In step Windows groups. In a typical environment there will be Developers, one or more DBAs, and perhaps folks who write SSRS reports or do ad hoc reporting. These groups most likely all need different permission sets. Create a Windows group in AD, or have your System Admin do it. (You are on good terms with her, right?) Ask the SysAdmin to add the developers to a MyDomain\Dev group. Ask her for MyDomain\SQLAdmins and perhaps MyDomain\Reporting as well.

Now in SQL Server, add these logins and map them to appropriate database users, as sketched below. For the Dev group, maybe you give them only public permissions at the SQL instance level in production and db_datareader in various databases. Now do the same for MyDomain\Reporting. Why have two groups with the same permissions, you ask? Well, it's a management thing. If you throw them all into the same group, then when people on the Dev team need slightly different permissions for something, that permission change would be applied to your reporting people too, and you may not want that.
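Here is a minimal sketch of that setup, using the hypothetical group names above and a placeholder database called MyDatabase:

USE master;
GO
CREATE LOGIN [MyDomain\Dev] FROM WINDOWS;
CREATE LOGIN [MyDomain\Reporting] FROM WINDOWS;
GO
USE MyDatabase;
GO
CREATE USER [MyDomain\Dev] FOR LOGIN [MyDomain\Dev];
CREATE USER [MyDomain\Reporting] FOR LOGIN [MyDomain\Reporting];
GO
-- Same permissions today, but separate groups so they can diverge later
ALTER ROLE db_datareader ADD MEMBER [MyDomain\Dev];
ALTER ROLE db_datareader ADD MEMBER [MyDomain\Reporting];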

Back in SQL Server 2008, the BUILTIN\Administrators group was removed from SQL Server as a group that was granted automatic access to the SQL Server instance. This was done to enhance security. Given this change, you might want to create a group specifically for your DBAs and grant it the sysadmin server role. This is where MyDomain\SQLAdmins comes in. You can grant the DBAs the needed access to manage the SQL instance without making those folks admins on the machine. The Windows SysAdmins can retain control of the machine without automatically being granted control in the SQL instance to do things that the DBAs would frown upon. Everyone is happy, right?
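Again as a sketch, granting that access might look like the below. Note that ALTER SERVER ROLE requires SQL Server 2012 or later; on older versions sp_addsrvrolemember does the same job:

USE master;
GO
CREATE LOGIN [MyDomain\SQLAdmins] FROM WINDOWS;
GO
ALTER SERVER ROLE sysadmin ADD MEMBER [MyDomain\SQLAdmins];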

Now let's suppose that the company has a new project and three Developers are being hired. In the past the DBA might have made three separate logins and mapped them to database users. Now that MyDomain\Dev exists, just ask your System Administrator to add the new developers to that group while she is in Active Directory creating their Windows accounts and, presto, permissions are set before the new employees show up on the first day. Prod permissions are set, and since you have of course already added the MyDomain\Dev group to the sysadmin server role on the development SQL instance, permissions are set there as well. All of that without ever touching SQL Server to grant individual permissions to people. You've just saved yourself some time so you can go do something else.

I hope this has been insightful and encourages you to start thinking ahead now, even while your SQL Server environment is still relatively small. I think starting with Windows groups from the beginning will save you time and headache later.

3 Additional Tips for Using SQL Server Management Studio

In an earlier post I listed three tips for using the SQL Server Management Studio user interface. I want to offer three additional tips for SSMS.  I think you will find that these ideas will bring additional efficiency and functionality to your use of the interface.

The first tip is related to getting more viewing space after running a query. Query results obviously take up the bottom portion of the SSMS editor window. I have often found myself clicking in the query editor window and scrolling to view various parts of my query, all the while the results portion is taking up a good portion of the screen. Some time back an associate told me how to easily address that annoying issue: simply press CTRL+R to collapse the query results so that the full page is again used for viewing the query text.

The second tip is related to finding objects in Object Explorer. Ever found yourself trying to locate a particular stored procedure that has a name that is a mile long and your brain only remembers part of the procedure name? What about trying to find all the tables related to Orders or Shipping? As it turns out, there is a nice way to search for such things.

Drill down into the object level you are searching through, such as Databases > AdventureWorks2008R2 > Tables. Right click the Tables folder and select Filter. Then select Filter Settings. You are then presented with the dialog below. Enter the name description in the Value column and click OK. This filtering also works for the Views, Stored Procedures and Functions folders.

Filter Search on a Table

 

The third tip relates to using color to visually identify which SQL Server instance a query window is connected to. I once saw a Developer who had five SQL Server Management Studio windows open at once. When another Developer asked him why, he answered that it was so he could maintain separate connections to different SQL instances and not be as confused as he would be with all the connections in one running instance of SSMS. It was, I suppose, an admirable attempt to ensure he was running queries against the right SQL instance, but it reminded me that color can be used to identify the SQL instance.

Here is how it works. When SSMS is launched the user is presented with a connection dialog.

SSMS Connect to Server Dialog

 

Next select Options and the below appears.

 

SSMS Connection Properties Dialog

Click the Select button, choose a color, then click OK and then Connect to connect to the SQL instance. Now open a new query window. The color you just selected will appear along the bottom of the SSMS editor window. To do this after you have already opened SSMS, click Connect > Database Engine > Options from the Object Explorer pane. This type of color customization can also be done with registered servers. For more information on this feature, see: https://msdn.microsoft.com/en-us/library/hh213617(v=sql.120).aspx

 

I hope these additional tips on SSMS prove useful. Feel free to leave me a comment or another customization that you like to use.

You Really Should Stay Current on Cumulative Updates

 

I have encountered a Log Shipping error that I want to detail as an encouragement for staying current on SQL Server cumulative updates. First, a little background. The environment is SQL Server 2014 Standard edition with a physical machine as the primary and two secondaries, both virtual machines backed by a flash storage system. SQL Server is on Service Pack 1, Cumulative Update 1. Log Shipping had been configured in this environment for many months, and SQL Server 2008 R2 had been upgraded to SQL Server 2014 some months prior. The Log Shipping restore job leaves the database in Standby mode so it can be queried for reporting purposes.

 

One evening I received the following error via Database Mail: "SQL Server Alert System: 'Severity 021' occurred on \\MySecondary. An error occurred during recovery, preventing the database 'MyDatabase' from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup." This was immediately followed by another email advising that the Log Shipping restore job had failed. Yet a third email was sent for error 9004, severity 16, explaining that an error occurred while processing the log for 'MyDatabase.' The message informed me that I could restore from backup or rebuild the log. I connected to the SQL instance and saw that the database was in restoring mode.

 

A quick internet search turned up the following two links.

https://support.microsoft.com/en-us/kb/2987585

https://support.microsoft.com/en-us/kb/2015753

 

The symptoms section of the first link seemed to fit my situation fairly well. I was unfamiliar with the fsutil command line utility. When I executed it against my environment, the results indicated that the Bytes per Sector value was 512 and Bytes per Physical Sector returned "Not Supported" (see the command sketch after the list below). The support article gave me two workarounds for addressing the issue.

  • Move the transaction log file at the destination to a drive that has "Bytes per Physical Sector" set as 512 bytes. Note: the Standby file can still be located on the drive that has "Bytes per Physical Sector" set as 4096 bytes.
  • Restore the log backups without using the standby option. Instead of the STANDBY option, use the WITH NORECOVERY option during the restore operation.
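Incidentally, if you want to check the drive geometry yourself, I believe a command like the one below will do it (the exact subcommand is my assumption; the KB article has the authoritative syntax). Look for the Bytes Per Sector and Bytes Per Physical Sector lines in the output:

fsutil fsinfo ntfsinfo D: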

 

Since my database was still in restoring mode and the second URL indicated I had a damaged transaction log, I could not take the database offline and relocate the file per the first recommendation. I also could not confirm how the Bytes per Physical Sector value on the drive was configured anyway. The first article indicates that the issue was fixed in SQL Server 2014 SP1 CU2 and SQL Server 2012 SP2 CU7. Remember, I mentioned that this environment was on CU1 of SQL Server 2014 SP1. Unfortunately, that put me one update behind with respect to being protected from this situation.

 

It looked like I had one good course of action – restore from backup. I disabled the restore job, then restored from a recent full backup and the most recent differential. I left the database in restoring mode so I could re-enable the Log Shipping restore job. This would get the secondary back to Standby mode.

 

If you want to know when the latest updates for SQL Server are available, then I recommend saving the links below in your favorite web browser. As this post demonstrates, be sure to read what the latest updates are about as soon as possible, to see if they apply to your situation.

 

http://sqlserverbuilds.blogspot.com/

http://sqlserverupdates.com/

http://sqlblog.com/blogs/aaron_bertrand/default.aspx

 

3 Reasons Not to Install Databases On the C Drive

We're probably all familiar with the fact that by default SQL Server will install not only the program binaries but all your databases on the C drive as well.  Accidental DBAs, i.e. System Administrators and Developers who inherited the maintenance of a database, may not be aware of how bad it can be if someone simply clicked the "Next" button all the way through the SQL Server install.  There are surely many reasons not to accept this default during the install, but I will start with three.

First, from a purely practical standpoint, I think installing to the default path makes backups and restore scripts unduly difficult.  Which is easier to remember, “C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup” or something simple like “D:\SQLBackups”? Which one would you rather have to type or copy from Windows Explorer after browsing the path?  With the second one it is so easy you don’t have to spend time browsing and copying it.  You just quickly type it out.  For the restore script would you rather type “C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Data\SomeDB.mdf” or “D:\SQLData\SomeDB.mdf”?  The same point applies here – Simplicity.  Why make it hard on yourself?

Second, if you don’t put your databases on the C drive then database growth can’t crash the entire computer by filling up the C drive.  We all know that databases grow and that growth has to be monitored and managed.  If the System Administrator misses those emails from Database Mail or her favorite monitoring tool, then she will be awakened first thing in the morning by a company end user who can’t do their regular work only to discover that the entire machine is experiencing paralysis.  If the database files are not on the C drive with the OS then at least database growth will not be the culprit of server paralysis from a full C drive.

Third, if you install all your databases to the C drive, then you are inviting performance issues.  Databases and the OS will be fighting for drive resources.  Disks only spin so fast.  If you have the OS and 10, 14, 50 or more database files all trying to read and write to the same drive, then your C drive is going to be spinning like a yo-yo.  Not only is there contention between the OS and the database files, but the database files will be fighting each other for disk usage too.

Let me provide some real numbers from an environment I have seen.  I had a system with 12 databases all on the C drive.  That number includes 4 system databases and 8 user databases.  Latency was in the thousands of milliseconds for several of the databases.  I moved the user databases and TempDB off to another drive and left the system databases on the C drive.  I know I can move those too, but one step at a time.  See the comparisons below.

Database File Name   Overall Latency (ms)   Next Morning (ms)         3 Weeks Later (ms)
Master.mdf           8330                   8287                      4
Mastlog.ldf          2409                   1607                      0
Model.mdf            8129                   No Change                 3
Modellog.ldf         2881                   No Change                 1
msdbdata.mdf         32446                  16031                     5
msdblog.ldf          31270                  22969                     0
TempDB.mdf           254                    Moved to Fusion IO card   Moved to Fusion IO card
Templog.ldf          155                    Moved to Fusion IO card   Moved to Fusion IO card

 

In some cases there was an immediate impact on latency.  Clearly, removing the activity of the eight user databases made a huge difference in performance for the system databases that remained.  As you can imagine, the user databases performed much better from a latency perspective as well, especially since I put them on a Fusion IO card.

To see latency for your drives you can run the following query from Paul Randal:

 

/*
http://www.sqlskills.com/blogs/paul/how-to-examine-io-subsystem-latencies-from-within-sql-server/
*/

SELECT
    [ReadLatency] =
        CASE WHEN [num_of_reads] = 0
            THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,
    [WriteLatency] =
        CASE WHEN [num_of_writes] = 0
            THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,
    [Latency] =
        CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
            THEN 0 ELSE ([io_stall] / ([num_of_reads] + [num_of_writes])) END,
    [AvgBPerRead] =
        CASE WHEN [num_of_reads] = 0
            THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,
    [AvgBPerWrite] =
        CASE WHEN [num_of_writes] = 0
            THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,
    [AvgBPerTransfer] =
        CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
            THEN 0 ELSE
                (([num_of_bytes_read] + [num_of_bytes_written]) /
                 ([num_of_reads] + [num_of_writes])) END,
    LEFT ([mf].[physical_name], 2) AS [Drive],
    DB_NAME ([vfs].[database_id]) AS [DB],
    [mf].[physical_name]
FROM sys.dm_io_virtual_file_stats (NULL, NULL) AS [vfs]
JOIN sys.master_files AS [mf]
    ON [vfs].[database_id] = [mf].[database_id]
    AND [vfs].[file_id] = [mf].[file_id]
-- WHERE [vfs].[file_id] = 2 -- log files
-- ORDER BY [Latency] DESC
-- ORDER BY [ReadLatency] DESC
ORDER BY [WriteLatency] DESC;
GO

Well, I hope this has given you something to think about.  Your homework is to find out the latency in your environment, do some research, make changes as needed and then follow up to see if the changes you made helped.  Let me know how the homework turns out.