Consolidate Multiple Email Operators

Suppose you’ve been working on cleaning up SQL Agent jobs and their email operators and find that you have more than one operator with the same email address. How would you tidy this up to ease administration?

Let’s do a little setup to demonstrate the scenario.  The T-SQL below creates three operators with different names; the first two share the same email address.  In my test environment I also updated two existing test jobs to use those operators when a job fails, and set the jobs to log failures to the Windows Event Log.

USE [msdb];
GO

EXEC msdb.dbo.sp_add_operator @name=N'DBA', 
 @enabled=1, 
 @pager_days=0, 
 @email_address=N'SuperHumanPerson@MyDomain.com'
GO

EXEC msdb.dbo.sp_add_operator @name=N'Super Human Person', 
 @enabled=1, 
 @pager_days=0, 
 @email_address=N'SuperHumanPerson@MyDomain.com'
GO

EXEC msdb.dbo.sp_add_operator @name=N'SQL Server Team', 
 @enabled=1, 
 @pager_days=0, 
 @email_address=N'TheDBADistroGroup@MyDomain.com'

Now I will run this simple query to find out what my email operators and email addresses are.

USE msdb;
GO

SELECT id
, name
, email_address
FROM dbo.sysoperators;

Here is the result set.

How does this sort of thing happen, you ask?  Well, it doesn’t just happen in contrived blog posts!  Suppose there is a fair amount of turnover in a DBA environment and the DBAs are not always checking what the last person did.  Each one goes in and does her own thing.  Voila!  You end up with duplicate email addresses for email operators.
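
On a real server you can spot duplicates quickly with an aggregate query against sysoperators.  Here is a minimal sketch:

USE msdb;
GO

SELECT email_address
, COUNT(*) AS OperatorCount
FROM dbo.sysoperators
GROUP BY email_address
HAVING COUNT(*) > 1; --only emails shared by more than one operator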

Some jobs have “DBA” as the email operator.  Some have “Super Human Person” as the email operator.  You might even have a couple of “test” operators in there with “SuperHumanPerson@MyDomain.com”.

Maybe the DBA didn’t think about the fact that he might not always be the DBA, and that it might be better to set up an email distribution group controlled by the Exchange person.  That would mean that as DBAs came and went, the same email operators could be used in SQL Server. Who knows! But I have seen this.  So what now?

sp_delete_operator to the rescue!

Here is our example.

EXEC sp_delete_operator @name = 'DBA', @reassign_to_operator = 'SQL Server Team';
GO
EXEC sp_delete_operator @name = 'Super Human Person', @reassign_to_operator = 'SQL Server Team';

This deletes the email operator named in the @name parameter and reassigns all jobs that used that operator to the operator named in the @reassign_to_operator parameter.  Now when you run the second T-SQL snippet from above against the sysoperators table in msdb, you get back just the operator name “SQL Server Team.”
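
If you want to confirm which jobs reference an operator before deleting it, a minimal sketch that joins on the job’s notify_email_operator_id looks like this:

USE msdb;
GO

SELECT J.name AS JobName
, O.name AS OperatorName
FROM dbo.sysjobs AS J
INNER JOIN dbo.sysoperators AS O ON J.notify_email_operator_id = O.id
WHERE O.name IN (N'DBA', N'Super Human Person');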


Finding Failed Jobs

How would you go about finding jobs that have failed and why would you want to look for these?  You might want to do this to ensure that you are being notified appropriately of job failures. For instance, after reviewing failed jobs, you may realize that there are job failures that no one knew about. Maybe this is occurring because the jobs are not set to send email. If that’s the case see my post here so you can fix that.

But how do you find these job failures in the first place?

1. Manually look at the job activity monitor for the SQL instance.

The job activity monitor is found under the SQL Server Agent node in the Object Explorer of SSMS.

 

Just double click it to open and you will see something like this. In this case I set up a job with an incomplete SELECT statement so that it would create a failure.


2. Query the msdb database.

USE msdb;
GO

SELECT
  J.name AS JobName
, J.description AS JobDescription
, H.step_id
, H.step_name
, msdb.dbo.agent_datetime(H.run_date, H.run_time) AS RunDateTime
, H.sql_severity
, H.message
FROM dbo.sysjobs AS J
INNER JOIN dbo.sysjobhistory AS H ON J.job_id = H.job_id
--Joining to sysjobsteps filters out the job outcome row (step_id 0), leaving the failed step detail
INNER JOIN dbo.sysjobsteps AS JS ON J.job_id = JS.job_id AND H.step_id = JS.step_id
WHERE H.run_status = 0; --Failed
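
On a server with a lot of job history this can return many rows, so you may want to narrow it down.  Here is a sketch that limits the results to failures from the last seven days and sorts them newest first:

USE msdb;
GO

SELECT
  J.name AS JobName
, H.step_id
, H.step_name
, msdb.dbo.agent_datetime(H.run_date, H.run_time) AS RunDateTime
, H.message
FROM dbo.sysjobs AS J
INNER JOIN dbo.sysjobhistory AS H ON J.job_id = H.job_id
WHERE H.run_status = 0 --Failed
AND msdb.dbo.agent_datetime(H.run_date, H.run_time) >= DATEADD(DAY, -7, GETDATE())
ORDER BY RunDateTime DESC;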

 

Fixing Jobs With No Email Notification

In the last post we found jobs that had no email notification. Once you find them, how do you fix this problem?

I’m going to assume that database mail is already enabled and configured on the SQL Server instance.

1. Create an operator, assuming one does not exist.

This can be done either through T-SQL or through the GUI.  In the GUI, right-click the Operators folder under SQL Server Agent and choose “New Operator.”  Then fill in the operator “Name” and “Email name.”  See below for an example.

 

Then you click OK in the bottom right of the GUI to finish the process.

 

Alternatively you can use T-SQL to create the Operator:

USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'DBA Team', 
    @enabled=1, 
    @pager_days=0, 
    @email_address=N'MyEmail@MyDomain.com'
GO

2. Assign that operator to multiple jobs via T-SQL.

Once there is an Operator to assign to jobs with no email notifications, the next step is to actually add the new Operator to the applicable SQL Agent jobs.  The script below generates T-SQL that can be copied and run in a new SSMS window, allowing you to review the statements before running them.

Keep in mind that in this example I am only updating jobs that don’t already have an operator assigned to them.  This makes the WHERE clause very important.  If you leave it off and then copy and run the generated statements, all jobs will be updated to whatever operator was specified in the variable.  That may not be what you want for your environment, so be sure to use the WHERE clause and keep notify_level_email = 0 if you only want to update jobs that have no operator assigned to them.

 

USE msdb;
GO
DECLARE @operator varchar(50);
SET @operator = 'DBA Team';

--DISTINCT avoids emitting duplicate statements for jobs that have more than one schedule
SELECT DISTINCT 'EXEC msdb.dbo.sp_update_job @job_id = ''' + CONVERT(varchar(50), J.job_id) + ''', @notify_level_eventlog = 2, @notify_level_email = 2, @notify_email_operator_name = ''' + @operator + ''''
FROM dbo.sysjobs AS J
 INNER JOIN dbo.sysjobschedules AS JS ON J.job_id = JS.job_id
 INNER JOIN dbo.sysschedules AS SS ON SS.schedule_id = JS.schedule_id
WHERE J.enabled = 1 --Job is enabled
 AND J.notify_level_email = 0 --never notify
 AND SS.enabled = 1; --it has a schedule that is enabled
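
Each row of the output is one statement to review.  A generated statement will look something like this (the job_id GUID here is a made-up placeholder):

EXEC msdb.dbo.sp_update_job @job_id = '5A81F33C-1D6C-4D29-9D3E-0123456789AB', @notify_level_eventlog = 2, @notify_level_email = 2, @notify_email_operator_name = 'DBA Team'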

Notice that I am using sp_update_job not only to add the email operator, but also to set the job to log to the Windows event log in case of a failure.  Why?  Well, what happens if database mail is not configured properly?  Maybe it worked when the job was created, or maybe it worked the last time you touched database mail, but what if someone else made changes to it?  In that case, you might not get the expected email when the job fails.

If you have someone, like a junior DBA, looking through the Windows Event logs for issues, they might find it.  Or what if you have one of those slick third-party monitoring tools that scans for errors and notifies you when it finds one?  Then you have another way to be notified that the job failed.

And here is your BOL link to all the goodies involved with sp_update_job.

Finding Jobs With No Email Notifications

When you take over a new SQL Server instance one of the things you might consider doing early on is finding SQL Agent jobs that have no email notification set up.  I am of the opinion that if a job is worth creating, then it’s worth telling someone when it doesn’t do what it is supposed to do.

A quick, easy way to determine jobs that have no email operator is to connect to your SQL Server instance and run the code below.

 

USE msdb;
GO

SELECT J.name
, J.notify_level_email
, J.notify_email_operator_id
FROM dbo.sysjobs AS J
 INNER JOIN dbo.sysjobschedules AS JS ON J.job_id = JS.job_id
 INNER JOIN dbo.sysschedules AS SS ON SS.schedule_id = JS.schedule_id
WHERE J.enabled = 1
 AND J.notify_level_email = 0 --never notify
 AND SS.enabled = 1; --it has a schedule that is enabled

For jobs that will not email anyone when they fail, the last two columns will return a zero.  These are the jobs you will want to focus on so that when they fail, someone is notified and can fix the problem.

Here is a link on all the goods available to you from dbo.sysjobs:

https://msdn.microsoft.com/en-us/library/ms189817.aspx

 

Steps to Catch Up Log Shipping Secondary

 

If you are using Log Shipping, there have probably been times when you discovered that the Log Shipping secondary is not up to date.  Perhaps you found out by right-clicking on your SQL Server instance and running the performance dashboard report for Log Shipping Status.  If you are not familiar with this report, check here.

Perhaps there is a Reporting team that uses that secondary, and they met you at your desk upon arrival, informing you that the database is not online so they can’t work yet. So what do you do when you realize that the secondary is so far behind that it is probably not going to catch up on its own?  This process assumes that your Log Shipping configuration runs a Log Shipping restore job just prior to the beginning of the business day, and that the job leaves the database in Standby mode so it can be read from.  At the end of the business day, the restore job has one or more schedules set to kick off restores of the transaction logs taken during the business day.

Here is a process you can use. 

1. Disable the Copy job on the secondary that is behind.

2. View the previously mentioned Log Shipping Status report and determine which file is next in line to be restored.

3. Create a folder in the path where the transaction log backups are being copied to, and perform a Cut operation on all the files after the file currently being restored.

4. Paste those files into the folder you made in step 3.

Here is what you have accomplished with this.  You have set up the database to go to read-only/Standby once the current file finishes restoring.  The restore process cannot continue because it has reached the last transaction log file in the original folder path. If the Reporting team can live with the data as is, they can get to work while you set up the next part, which will catch the database up.  If the data still is not current enough for the Reporting team, you have still helped yourself, because the database had to come out of recovery mode anyway before you could do the next part to catch it up.

5. Take a differential backup of the current database.

6. While that is running, open SQL Server Management Studio and set up your script to restore the most recent full backup and the differential you are making (a minimal sketch of this script follows the list).  Remember that you may have to use the WITH MOVE option of the RESTORE DATABASE syntax, because the location of your files on the secondary may not be the same as on your primary.  Also, if your Log Shipping configuration is designed to leave the database in Standby, you will need to use the WITH STANDBY clause and name the .tuf file where the redo information is kept.

7. If the Reporting team started querying after step 3, you will need to ask them to disconnect, because you can’t restore a database while other sessions are connected.  Alternatively, you could use ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE at the beginning of your RESTORE script, but that would be inconsiderate.

8. Once the secondary is done restoring the most recent transaction log it was on, start the restore scripts you built in step 6. Once this step is done, you can inform the Reporting team that the database is ready to be queried.

9. Cut and paste the files from steps 3 and 4 back into the target folder for the Copy job.

10. Re-enable the Copy job.
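
Here is a minimal sketch of the step 6 restore script.  The database name, backup paths, logical file names, and the .tuf path are all hypothetical placeholders; adjust the MOVE targets and the STANDBY file for your environment.

RESTORE DATABASE MyDB
FROM DISK = N'\\BackupShare\MyDB\MyDB_Full.bak'
WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB.mdf',
 MOVE N'MyDB_Log' TO N'L:\Logs\MyDB_log.ldf',
 NORECOVERY; --stay in the restoring state so the differential can be applied
GO

RESTORE DATABASE MyDB
FROM DISK = N'\\BackupShare\MyDB\MyDB_Diff.bak'
WITH STANDBY = N'D:\Data\MyDB.tuf'; --leaves the database readable; redo info is kept in the .tuf file
GO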

Three Reasons to Use a Custom Database Role

Ever had a situation where a small in-house application written for a small group of employees needed some permissions in the database? How do you handle that? I want to walk through the scenario and provide three reasons that using a Database Role might be the answer you are looking for.

Let’s first cover, at a high level, the different levels of permissions in SQL Server and the associated commands. In SQL Server there are Server level permissions for the SQL instance and there are database level permissions. Within the database level there are also specific permissions for objects like tables, views and stored procedures. There are also database permissions that can be grouped together in a database role. In fact, there are some built-in database roles in SQL Server, as shown below. If you aren’t familiar with these, you can find out about them here.

Database Roles

 

As an example, suppose there is an application being written for a small department in your company. The application needs access to your company database, but not access to everything in the database. Perhaps the app only touches a few tables. Someone could grant the Server level permission of sysadmin to a Windows account or Windows group that contains the necessary people added to it in Active Directory. That would be a “sledge hammer” approach to providing permissions and would break the principle of least privilege. In this scenario, a database role can be used to accomplish the task at hand.

On your domain you or the System Administrator can create a group called MyDomain\DepartmentName. With the pseudo-code below as a model, you can create the Login and User.

USE master;
GO
CREATE LOGIN [MyDomain\DepartmentName] FROM WINDOWS;
GO
USE MyDatabase;
GO
CREATE USER [MyDomain\DepartmentName] FOR LOGIN [MyDomain\DepartmentName];

In this new AD group, you place the members of the department. This allows the department members the ability to use their already familiar Windows passwords, keeping it easier for them. Using Windows groups also makes it easier for you. If you aren’t convinced of that read this post.

At this point some people might still take the approach of granting big permissions by mapping the new database user to the db_owner role in the database and calling the permissions work done. However, do you really want a group of people in a department to have complete control over the entire database? Somehow I doubt it, and I doubt your auditors want to see that either.

Permissions could be granted to the user account itself inside SQL Server, but what if other people who are not yet in the appropriate AD group want access? Now you either have to add them to that AD group, which might not make sense from an organizational perspective, or you have to start this process all over again with a new login and user. This is where database roles come in. Below is example code for how you would do that based on the process we already have in place.

USE MyDatabase;
GO
CREATE ROLE MyNewDBRole;
GO
ALTER ROLE MyNewDBRole ADD MEMBER [MyDomain\DepartmentName];
GO
GRANT EXECUTE ON [dbo].[GetEmployees] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[InsertEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[DeleteEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[UpdateEmployee] TO MyNewDBRole;
GO
GRANT EXECUTE ON [dbo].[GetSales] TO MyNewDBRole;

 

So what are some reasons to use a process like this? First, the application will follow the principle of least privilege. This means that the application and the users of the application will have only the permissions required for the functionality of the application. This keeps your SQL Server more secure, which makes auditors happy.

Second, this type of process eases administration. When the application is expanded and more people begin to use it for different purposes, just create a group like the first one demonstrated and add the group to the role, then grant any additional permissions needed to the role. No adding individual logins to the SQL Server and no applying permissions to many individual logins, which would be a pain. Now you have a single point of permissions administration within SQL Server – the database role. Additionally, if there are new employees in the department who need access to the application, simply add those people to the AD group and you’re done! A sketch of that expansion follows.
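
For instance, if a second department later needs the same access, the whole change might look like this (the group name MyDomain\OtherDepartment is a hypothetical example):

USE master;
GO
CREATE LOGIN [MyDomain\OtherDepartment] FROM WINDOWS;
GO
USE MyDatabase;
GO
CREATE USER [MyDomain\OtherDepartment] FOR LOGIN [MyDomain\OtherDepartment];
GO
--One statement grants the new group everything the role already has
ALTER ROLE MyNewDBRole ADD MEMBER [MyDomain\OtherDepartment];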

Third, you and the System Administrator get some face time!! Who doesn’t like a little collaboration?

Retiring sp_who and sp_who2

With SQL Server 2005 came new objects called Dynamic Management Views, also known as DMVs. One of the places the DMVs brought great improvements is in investigating what is happening on a SQL Server during live production troubleshooting. When things go south and the DBA is contacted, the DMVs are invaluable. Despite this, there are still some folks out there using sp_who and sp_who2. With this post I want to demonstrate why you should stop using sp_who and sp_who2.

To demonstrate this, we need a blocking transaction. I connected to AdventureWorks2014 on my local instance and in one query window I have the first query and in a second window the second query. Execute these queries and blocking will occur.

 

    --Query window 1
    BEGIN TRANSACTION
      UPDATE [Person].[Person]
      SET FirstName = 'Ken'
      WHERE BusinessEntityID = 1
    --COMMIT TRAN

    --Query window 2
    BEGIN TRAN
      UPDATE [Person].[Person]
      SET FirstName = 'Jason'
      WHERE BusinessEntityID = 1
    --COMMIT TRAN

Now that you have a blocking transaction, open a third window in SQL Server Management Studio (hereafter SSMS) and run the below.

 

sp_who;
GO
sp_who2;
GO

SELECT 
  ES.session_id AS BlockedSession
, BlockedSQL.text AS BlockedSQL --Not in sp_who or sp_who2
, ES.login_name
, ES.program_name
, ES.host_name
, ER.blocking_session_id
, BlockingSQL.text AS BlockingSQLText
, ER.cpu_time
, ER.logical_reads  --Not in sp_who or sp_who2
, ER.reads --Not in sp_who or sp_who2
, ER.writes --Not in sp_who or sp_who2
, ER.lock_timeout  --Not in sp_who or sp_who2 
--Lock time-out period in milliseconds for this request
, ER.row_count --Not in sp_who or sp_who2 
/*Number of rows that have been returned to the client by this request.
*/
, ER.total_elapsed_time --Not in sp_who or sp_who2
, CASE CAST(ER.transaction_isolation_level AS VARCHAR(20)) 
  WHEN '0' THEN 'Unspecified'
  WHEN '1' THEN 'ReadUncommitted'
  WHEN '2' THEN 'ReadCommitted'
  WHEN '3' THEN 'Repeatable'
  WHEN '4' THEN 'Serializable'
  WHEN '5' THEN 'Snapshot'
  END AS transaction_isolation_level --Not in sp_who or sp_who2
, ER.wait_type --if currently blocked this will have a value
, ER.wait_time  --Not in sp_who or sp_who2
, ER.last_wait_type --Not in sp_who or sp_who2 
/*If session was previously blocked, this will show what the wait type was.
*/
, ER.wait_resource --Not in sp_who or sp_who2.  
/*Can use wait_resource and expand this query to use other DMVs to find what table and index is being waited on, for example.  
*/
, ER.deadlock_priority --Not in sp_who or sp_who2
, DB_name(ER.database_id) AS DBName  
, ER.executing_managed_code  
/*Not in sp_who or sp_who2 --Indicates whether a specific request is currently executing common language runtime objects, such as routines, types, and triggers. It is set for the full time a common language runtime object is on the stack, even while running Transact-SQL from within common language runtime
*/
, EC.client_net_address AS ClientIPAddress 
--Not in sp_who or sp_who2

FROM sys.dm_exec_sessions AS ES
INNER JOIN sys.dm_exec_requests AS ER ON ES.session_id = ER.session_id
INNER JOIN sys.dm_exec_connections AS EC ON ER.blocking_session_id = EC.session_id
CROSS APPLY sys.dm_exec_sql_text(ER.plan_handle) AS BlockedSQL
CROSS APPLY sys.dm_exec_sql_text(EC.most_recent_sql_handle) AS BlockingSQL

From sp_who and sp_who2 you should get an output similar to below where blocking information is provided related to SPIDs 52 and 53.


sp_who and sp_who2

The output from the query of the DMVs looks like the below.  The results have far more columns than can be displayed in a screen capture.


Blocked and Blocking SQL

Please note that there is very little difference between the data in sp_who and sp_who2. However, there is a big difference between those older object calls and the third script. Out of the myriad of columns returned by the third script, very few are returned by sp_who or sp_who2.

Be sure to go back to SSMS and commit the two queries involved in the blocking to clean up the demo.

Some items of interest in the third script that are not in the older sp_who or sp_who2 are:

  1. The SQL statement being run by both the blocked and the blocking process.
  2. Number of logical (in-memory) reads
  3. Breakdown of reads versus writes
  4. Whether or not the requests are executing in CLR code
  5. Wait Resource: This shows the DBA things like whether there is a KEY or Page wait occurring and where.
  6. Isolation level of the statements involved
  7. IP Address of the client involved

By using the above third query the SQL statements of both the blocked and blocking session can be retrieved without the extra step of running DBCC INPUTBUFFER as would be needed when using sp_who or sp_who2. This will save the DBA valuable time when troubleshooting a live issue.

The DMVs are just so much more powerful than calls to sp_who and sp_who2, which depend on sysprocesses, a view that has been deprecated for some time now. I would strongly encourage experimentation with the DMVs. They will be a valuable tool in the DBA toolbox.

Three Places to Get Information On New Updates for SQL Server

There are many consistently hot topics around SQL Server. One such topic is, “How does a DBA obtain the latest information on updates?” In this post, I will discuss three places that I frequent for information on SQL Server updates.

The first link I want to discuss is http://sqlserverbuilds.blogspot.com/. This site gives you a very comprehensive page of data. Initially there will be a summary grid that lists a SQL Server release down the left side and across the top will be release names like RTM, SP1, SP2, etc. Inside the grid will be the corresponding release number for a particular release and that release number will be a link to the corresponding download.

Below that summary grid are similar grids for individual releases. The difference is that in the grids for individual releases the reader will find links to hotfixes, cumulative updates and service packs. There is a grid for each release going back to SQL Server 7.0, and there are even links to 6.5 and 6.0 builds! This site does not have an RSS subscription option.

The next link is http://sqlserverupdates.com/ and is maintained by the people of BrentOzar.com. If you are not familiar with them, they are a consulting company started by Brent Ozar, Kendra Little and Jeremiah Peschka. Kendra and Jeremiah recently departed the company to pursue other interests, turning the reins over solely to Brent Ozar.

This site goes back to SQL Server 2005. The matrix provided shows the reader the SQL Server version, then the latest updates and when support ends for that version. There is a separate link for updates other than the latest one for each edition. Clicking the link for Other SQL 2014 Updates, for example, opens a separate page that shows much the same information as the edition grids on http://sqlserverbuilds.blogspot.com/. On this site there is also a FAQ that addresses things like what is the difference between a hotfix, cumulative update and a service pack. This site does have an option to subscribe to notifications about new updates for SQL Server.

The last link is http://blogs.msdn.com/b/sqlreleaseservices/. It is written by the SQL Server Engineering Team. Rather than being organized in a grid style, this site lists information chronologically as posts are written and published. As a result, there is a mixture of posts about the various edition releases all on the same page as separate blog posts. Readers can subscribe with Live Bookmarks.

If you think you don’t need to concern yourself with keeping up to date on SQL Server patches, then please take a look at this post.
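
Before you go hunting for updates, it helps to know exactly which build you are running.  Here is a minimal sketch using SERVERPROPERTY:

SELECT SERVERPROPERTY('ProductVersion') AS Build --build number to match against the grids
, SERVERPROPERTY('ProductLevel') AS ProductLevel --RTM, SP1, SP2, etc.
, SERVERPROPERTY('Edition') AS Edition;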

I hope this has been helpful and I wish you happy patching!

 

Moving Database Files to Another Drive

You may occasionally have the need to relocate database files. What are some reasons that may prompt the need for this type of change? First, someone might have discovered that the database files are on the same drive as the operating system. For a discussion of why you don’t want to do this see this post. Second, your database might simply be outgrowing its current drive. Third, perhaps someone installed the transaction log file on the same drive as the mdf file.

So, after the need to do this has arisen, how is the task actually done? The first step, which may have already been done as part of the process that discovered the issue in the first place, is to confirm the current location of your database files. There are at least two ways to approach this. First, if you want to look at all the information on the SQL Server instance, use the query below, which utilizes sys.master_files.

SELECT MF.name AS LogicalFileName
, MF.physical_name AS FilePath
FROM sys.master_files AS MF;

A second way to do this is to review the information for a single database. For that, the query below can be used. Note I am looking for AdventureWorks2008R2 in my particular query.

SELECT
D.name AS DBName
, MF.name AS LogicalFileName
, MF.physical_name AS FilePath

FROM sys.master_files AS MF
INNER JOIN sys.databases AS D ON MF.database_id = D.database_id
WHERE D.name = 'AdventureWorks2008R2'

 


File Names and Paths

Once this is complete, the next step is to verify that the target drive for the new file is large enough to hold the file that you plan on moving. You really don’t want to get into the middle of this change only to discover that the target drive does not have enough room to hold the file you want to move.  In this case I will be demonstrating how to move a transaction log file, so I will use a DBCC command to return space information.

DBCC SQLPERF(LOGSPACE)

 

DBCC SQLPERF LOGSPACE

 

The DBCC command gives you the current size of the log in megabytes, as well as other useful information like the percentage of space used in that log file. In this case my transaction log file is very small at 3.36 MB for the AdventureWorks2008R2 database.
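
That tells you the size of the file being moved. To check the free space on the target drive without leaving SSMS, one option is xp_fixeddrives. It is undocumented but long-standing, and returns each fixed drive letter along with its free space in MB:

EXEC master.dbo.xp_fixeddrives;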

Another way to see this information for a specific database would be to right click the database name in SQL Server Management Studio and select Tasks > Shrink > Files. Switch the display to the Log file and the dialog will show the currently allocated space for the transaction log file as well as the free space in that file at the moment.

Once you have confirmed that the target drive will hold the file you want to use the next step will be to take the database offline so that the file you want to move can be copied to the target drive. Use the script below to offline your database, replacing AdventureWorks2008R2 with the name of your database.

ALTER DATABASE AdventureWorks2008R2 SET OFFLINE WITH ROLLBACK IMMEDIATE;

 

Now, using Windows Explorer, copy the file from its current location to the new location and run the query below. For this demonstration I simply made a new folder called Logs in the default path, like this: ‘C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Logs’.

ALTER DATABASE AdventureWorks2008R2
MODIFY FILE (NAME = AdventureWorks2008R2_log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Logs\AdventureWorks2008R2_log.ldf');

Then bring the database online.

ALTER DATABASE AdventureWorks2008R2 SET ONLINE;
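
As a final check, you can re-run the earlier sys.master_files query to confirm that the new path is in use:

SELECT MF.name AS LogicalFileName
, MF.physical_name AS FilePath
FROM sys.master_files AS MF
WHERE MF.database_id = DB_ID('AdventureWorks2008R2');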

That’s it. The files have been successfully moved. Once the database is back online and the new path is confirmed, you can delete the file in the old location to reclaim that space. Congratulations! Performance has been improved or a disaster with a full drive has been averted.

Setting Min and Max Server Memory

There are a number of things to consider before changing the min and max server memory settings for SQL Server. Before we jump into those considerations, let’s learn what the min and max server memory options do.

 

MIN SERVER MEMORY

The min server memory option specifies the minimum amount of memory utilized by the SQL Server buffer pool. It may seem that this should mean that when SQL Server starts it will automatically grab the memory equal to the min server memory value. However, this is not what happens. Instead, SQL Server acquires system memory as needed to process a workload. Consequently, if the SQL Server workload remains low, it is possible that the min server memory value is never reached. However, once it is reached SQL Server will not release an amount of memory that would take it below that minimum setting.

For example, let’s suppose your server has 128 GB of RAM and the min server memory option is set to 32 GB for SQL Server. SQL Server does not immediately allocate 32 GB of RAM upon server startup. It will acquire memory as needed for the workload. This means that if there is little activity in the database, it may take some time before SQL Server allocates 32 GB of RAM. Once this happens, SQL Server will not release memory that would reduce the allocation below the min server memory setting.

 

MAX SERVER MEMORY

To a certain point the max server memory setting does do what it sounds like. Keep in mind this setting is applicable only to the buffer pool. SQL Server can allocate some memory beyond this max number as it consumes memory for things outside of the buffer pool. Also, realize that this setting does not account for memory allocated to other components of SQL Server that may be installed and running, such as Reporting Services or Analysis Services.

 

The default value for Max Server Memory is 2,147,483,647 MB. That number is enormous: roughly 2,048 TB, or about 2 PB. So what happens if the machine SQL Server is installed on does not have that much memory and the default was not changed? As previously mentioned, SQL Server will continue to acquire additional memory based on workload. As a result, SQL Server could make requests of Windows that put the box under memory pressure, essentially choking the OS by leaving less and less memory for it. There are mechanisms in the OS that allow it to page out to disk some information in the SQL Server buffer pool so Windows can recover that memory for itself. This hurts SQL Server performance because disk IO is far slower than keeping operations in memory. Among other places, details on how this process works can be found here, in a blog post discussing the Lock Pages In Memory feature. I will save that setting for discussion in a later post.
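
A quick way to see whether the defaults are still in place on your instance is to query sys.configurations. A minimal sketch:

SELECT name
, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');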

 

The point I want to make here is that you can calculate how much memory the Windows OS and other applications may need and then subtract that from the physical memory available. There are recommendations on the internet for such things. Whatever number you arrive at would then be set as the Max Server Memory setting in SQL Server. The Available MB perfmon counter should then be judiciously monitored to ensure that there is adequate free memory left on the box, at least 1-2 GB. That will allow plenty of memory for that person who just insists on using an RDP connection to do his or her work or for a new process to fire off without bringing the server to its knees.

 

For example, let’s say that your server has 128 GB of RAM. You may decide to allocate 12 GB for the OS. Perhaps you have a vendor app that needs 4 GB. You take 128 - (12 + 4) and that leaves 112 GB of RAM. If you are running SQL Server 2008 R2 Standard, then the Max Server Memory setting can be set to a maximum value of 64 GB. That is per SQL instance. SQL Server 2012 Standard also supports a maximum of 64 GB, and SQL Server 2014 Standard supports 128 GB. If SQL Server 2014 Standard were installed in this scenario, there would be a maximum of 112 GB that could potentially be allocated to the SQL Server buffer pool for the machine described above. Setting this limit at 112 GB, followed by the aforementioned monitoring of the Available MB performance monitor counter, would help ensure that SQL Server does not hog memory from the Windows operating system.

 

Now that there is a foundational understanding of what the Min Server Memory and Max Server memory settings are and how they work, let’s look at how to set these options in SQL Server.

 

First, there is the GUI. Open SQL Server Management Studio and connect to a SQL Server instance. Right-click the SQL Server instance name and choose Properties. Now select the Memory page in the left pane. Manually change the values in the minimum and maximum server memory boxes.  Remember that these values are in MB.  Then click OK in the lower right of the dialog box.  Also, remember that when you make changes in SSMS you can click the Script button at the top of the dialog and the T-SQL for those changes will be generated for you.

 

Min and Max Server Settings

 

Second, these settings can be changed using T-SQL.  Here is an example script that sets the minimum to 12 GB and the Max to 32 GB.

 

EXEC sys.sp_configure N'show advanced options', N'1' RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'min server memory (MB)', N'12288'
GO
EXEC sys.sp_configure N'max server memory (MB)', N'32768'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0' RECONFIGURE WITH OVERRIDE
GO

 

I encourage you to review these settings on your SQL Server instance and consider whether they need to be adjusted.