How to Use the SQL Server CONCAT Function

SQL Server's concatenation options have improved in modern versions. SQL Server 2012 introduced the CONCAT() function, and SQL Server 2017 added CONCAT_WS().

A common use of concatenation, or joining column values together into a string, is combining FirstName and LastName columns into a FullName column. Another is building an address column that pulls together building number, street, city, and zip code.

Below I have provided an example of the three common ways to concatenate strings to create an address. These all produce the same results because there are no NULL values. They are just different ways of doing the same thing.

The first example is the CONCAT() method from SQL Server 2012, followed by the CONCAT_WS() function, which is CONCAT With Separator. The third method is the traditional approach of combining strings with the “+” sign.

USE WideWorldImporters;
GO

SELECT CONCAT(DeliveryAddressLine2, ' ', C.CityName, ' ', DeliveryPostalCode) AS NewConcatMethod,
       CONCAT_WS(' ', DeliveryAddressLine2, C.CityName, DeliveryPostalCode) AS CONCAT_WSMethod,
       (DeliveryAddressLine2 + ' ' + C.CityName + ' ' + DeliveryPostalCode) AS OlderConcatMethod
FROM [WideWorldImporters].[Sales].[Customers] AS CUST
INNER JOIN [Application].[Cities] AS C ON CUST.DeliveryCityID = C.CityID;

 

NULLs And Concatenation

Before SQL Server 2012, the “+” sign was used to combine strings into whatever form you needed. One disadvantage of this method is that if any value being concatenated is NULL, the entire result is NULL.

 

USE AdventureWorks2014;
GO

SELECT TOP 5
    Title, 
    FirstName, 
    MiddleName, 
    LastName,
    Title + ' ' + FirstName + ' ' + MiddleName + ' ' + LastName as MailingName
FROM Person.Person;

SELECT TOP 5
    Title, 
    FirstName, 
    MiddleName, 
    LastName,
    CONCAT(Title,' ',FirstName,' ',MiddleName,' ', LastName) as MailingName
FROM Person.Person;

 

In the top result set, you can see that whenever a NULL occurs the resulting MailingName column is also NULL. This makes some sense because NULL is “unknown” so you can’t really join an unknown value to other strings and get a predictable result.

However, with CONCAT(), columns with NULL values simply have those values ignored.

SQL Server CONCAT Function and NULL
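
If you need the old “+” approach to tolerate NULLs, the usual workaround is to wrap each nullable column in ISNULL() so a NULL becomes an empty string. Here is a minimal sketch against the same Person.Person table; the trailing space lives inside the ISNULL() so missing values don't leave doubled spaces:

SELECT TOP 5
    Title,
    FirstName,
    MiddleName,
    LastName,
    ISNULL(Title + ' ', '') + ISNULL(FirstName + ' ', '') + ISNULL(MiddleName + ' ', '') + LastName AS MailingName
FROM Person.Person;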

 

What about concatenating numbers?

CREATE DATABASE TestDB;
GO

USE TestDB;
GO

CREATE TABLE IntValues
(Col1 INT);

INSERT INTO IntValues
VALUES(67),(45),(90);

SELECT CONCAT(67,45,90) AS Result
FROM dbo.IntValues;

SELECT CAST(67 AS CHAR(2)) + CAST(45 AS CHAR(2)) + CAST(90 AS CHAR(2)) AS Result
FROM dbo.IntValues;

Concatenating numbers works in the same way. The great thing here is that there is no need to do an explicit cast when using CONCAT().
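
One caution on the explicit-cast version: CHAR(2) only holds two characters, so a three-digit value would not convert cleanly, while CONCAT() sizes the output for you. It also implicitly converts other data types. A quick sketch mixing types:

SELECT CONCAT('Invoice ', 12345, ' created on ', SYSDATETIME()) AS Result;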

 

SQL Server CONCAT_WS()

SQL Server 2017 brought us CONCAT_WS(), which is CONCAT With Separator. With this function, you specify the separator you want to use when building the string. The separator value goes first inside the parentheses, followed by a comma-separated list of the columns to be joined. Like CONCAT(), CONCAT_WS() ignores NULLs, leaving you with only the non-NULL values.

SELECT TOP 5 Title, FirstName, MiddleName, LastName, 
CONCAT_WS(' ',Title,FirstName,MiddleName,LastName) as MailingName 
FROM Person.Person;
SQL Server Concat_WS() Results

I think the syntax of these new functions is easier to write and to read. They reduce typing and the need to switch between adding a “+” and a ' ' in various places. CONCAT_WS() reduces typing even more because the separator is specified only once, at the beginning of the function.

Next Steps To Take

  1. Check here for the MS Docs on CONCAT() and here for CONCAT_WS().
  2. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

How to Configure SQL Server Temporal Tables Part 2

 

In the initial post, we talked about some very basic items to get you up and running with temporal tables. In part 2, I’ll cover a few more things to think about and implement when working with temporal tables.

In the first post, we looked at altering an existing table to be a temporal table. Now we will look at what is involved in making a table be a temporal table from the very beginning.

Designing a Temporal Table from Scratch

First of all, you can create a table to be a temporal table from the beginning, without naming the history table. But you get an ugly history table name in the following format:

[dbo].[MSSQL_TemporalHistoryFor_tableObjectId].

In my test case that turns out to be this:

[dbo].[MSSQL_TemporalHistoryFor_1913773875].
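
If you need to track down that auto-generated name later, the link between a temporal table and its history table is exposed in sys.tables. A quick sketch:

SELECT t.name AS TemporalTable,
       h.name AS HistoryTable
FROM sys.tables AS t
INNER JOIN sys.tables AS h ON t.history_table_id = h.object_id
WHERE t.temporal_type = 2; -- 2 = system-versioned temporal table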

Also, notice that in this example below, the SysStartTime and SysEndTime required columns are not designated as hidden. This means they can be returned in queries, so be aware of this, particularly when using SELECT *.

This syntax also results in a clustered index being created on the history table using the SysStartTime and SysEndTime columns.

Observe that I’m taking advantage of the inline index creation syntax that was new in SQL Server 2014. Look at examples S, T, and U to see more information on how to use this feature.

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName, FirstName)
)
WITH (SYSTEM_VERSIONING = ON);

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

This allows me to create the index on LastName, FirstName directly in the CREATE TABLE statement rather than having to do it afterward with a lengthier syntax.

INDEX IX_LastName_FirstName (LastName,FirstName)

To clean up this example and try a slightly different approach, run this code:

ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = OFF);
DROP TABLE dbo.Employee;
-- Turning versioning off makes the auto-named history table a regular table; drop it separately.

 

How To Name the History Table

You can create a temporal table and name the history table so it is more human friendly. This syntax also results in a clustered index being created on the history table using the SysStartTime and SysEndTime columns.

If you want to be able to create a clustered columnstore index, or a different clustered index, there are two choices:
1. Use this syntax, then drop the auto-created clustered index and build the new clustered index.
2. Create both the temporal table and the history table yourself, specifying the indexes you want when you create the history table (a sketch of this appears after the code below).

 

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName, FirstName)
)
WITH (SYSTEM_VERSIONING = ON(HISTORY_TABLE = dbo.EmployeeHistory));

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

DROP INDEX IX_EmployeeHistory ON dbo.EmployeeHistory;

CREATE CLUSTERED COLUMNSTORE INDEX IX_CC ON dbo.EmployeeHistory;
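
For the second choice, here is a minimal sketch, assuming the dbo.Employee and dbo.EmployeeHistory tables from above have been dropped first: build the history table yourself with the clustered columnstore index already in place, then point HISTORY_TABLE at it. The history table must match the temporal table's columns, but it gets no IDENTITY, primary key, or PERIOD definition.

CREATE TABLE dbo.EmployeeHistory
(
ID INT NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 NOT NULL,
SysEndTime DATETIME2 NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX IX_CC ON dbo.EmployeeHistory;

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName, FirstName)
)
-- DATA_CONSISTENCY_CHECK validates that any existing history rows line up with the period columns.
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory, DATA_CONSISTENCY_CHECK = ON));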

 

Indexing Decisions for Temporal Tables

Per the MS Docs, if you are creating a temporal table for the purpose of auditing row changes, then a clustered rowstore index on the SysStartTime, SysEndTime, and primary key columns of the temporal table is a good choice. Otherwise, a clustered columnstore index is a good choice for analytic queries.
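
If auditing is the goal, here is a sketch of that rowstore approach, assuming the history table still has its auto-created clustered index (the new index name is my own):

-- Drop the auto-created clustered index, then build the recommended rowstore index.
DROP INDEX IX_EmployeeHistory ON dbo.EmployeeHistory;

CREATE CLUSTERED INDEX IX_EmployeeHistory_Period
ON dbo.EmployeeHistory (SysEndTime, SysStartTime, ID);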

How to Handle Data Retention for Temporal Tables

The simplest option for managing data retention on a history table is the HISTORY_RETENTION_PERIOD syntax. The available time units for history retention are DAYS, WEEKS, MONTHS, and YEARS. SQL Server then uses this setting to manage the deletion of data from the history table for you. Notice below that I have added HISTORY_RETENTION_PERIOD = 6 MONTHS to the syntax.

CREATE TABLE dbo.Employee
(
ID INT IDENTITY(1,1) NOT NULL,
LastName VARCHAR(100) NOT NULL,
FirstName VARCHAR(75) NOT NULL,
JobTitle VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
HireDate DATE NOT NULL,
SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (ID),
INDEX IX_LastName_FirstName (LastName, FirstName)
)
WITH (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.EmployeeHistory,
     HISTORY_RETENTION_PERIOD = 6 MONTHS)
    );

ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME() FOR SysStartTime;
ALTER TABLE dbo.Employee ADD CONSTRAINT DF_SysEndTime DEFAULT SYSUTCDATETIME() FOR SysEndTime;

Other options for handling data retention include Stretch Database, which places the entire history table, or part of it, in Azure. Partitioning the history table and writing your own custom cleanup script are also options; a sketch of the custom cleanup pattern follows.
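
Here is a hedged sketch of that custom cleanup pattern, using an illustrative six-month cutoff. Versioning has to be switched off while you delete from the history table, and turning it back on runs a data consistency check by default, which can take time on large tables.

ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = OFF);

-- Delete history rows whose validity ended more than six months ago.
DELETE FROM dbo.EmployeeHistory
WHERE SysEndTime < DATEADD(MONTH, -6, SYSUTCDATETIME());

ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));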

Next Steps To Take

  1. Look here for a list of other considerations and limitations.
  2. Think about the tables in your environment where this feature could be useful and do some experimenting.
  3. Talk to your business personnel about this feature and ask them if they know of tables where knowing how the data has changed over time would be helpful. In some cases, you might find that the business is sometimes asked questions about how or when their data changed. This feature is perfect for answering that question.
  4. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

How to Configure SQL Server Temporal Tables

Temporal tables were a new feature in SQL Server 2016. They allow for the tracking of changes without having to set up change tracking, change data capture, or triggers. For certain uses they are not a replacement for something like change data capture, but they work well if the purpose is simply having a record of changes. Temporal tables can also be used for auditing changes to data, as well as for point-in-time analysis where users might want to compare data changes over time, or even at a specific point in time.

You can create a new table as a temporal table from the very beginning. However, I think a more common scenario is going to be converting existing tables to be temporal tables so I will start there.

The Initial Setup

I’ll make a backup of the Production.Products table. I do this as a general practice to keep from having to restore the database to return things to normal after demos and experimentation. This way, I can drop the backup of the table and move on.

USE AdventureWorks2016CTP3;
GO

DROP TABLE IF EXISTS Production.ProductBackup;
SELECT TOP (1000) [ProductID]
      ,[Name]
      ,[ProductNumber]
      ,[MakeFlag]
      ,[FinishedGoodsFlag]
      ,[Color]
      ,[SafetyStockLevel]
      ,[ReorderPoint]
      ,[StandardCost]
      ,[ListPrice]
      ,[Size]
      ,[SizeUnitMeasureCode]
      ,[WeightUnitMeasureCode]
      ,[Weight]
      ,[DaysToManufacture]
      ,[ProductLine]
      ,[Class]
      ,[Style]
      ,[ProductSubcategoryID]
      ,[ProductModelID]
      ,[SellStartDate]
      ,[SellEndDate]
      ,[DiscontinuedDate]
      ,[rowguid]
      ,[ModifiedDate]
   INTO Production.ProductBackup
  FROM [AdventureWorks2016CTP3].[Production].[Product]

 

Now that we have our table to convert to a temporal table, let's suppose that AdventureWorks wants to raise the price of all products in the Accessories product subcategory. The query below gets the initial data and confirms that our expression is right before we update the actual data. It also tells us how many rows we should expect to change.

 

SELECT P.[Name] AS ProductName,
       P.ProductNumber,
       P.ListPrice,
       CAST(P.ListPrice * 1.10 AS DECIMAL(9,2)) AS NewListPrice,
       PC.[Name] AS ProductCategoryName,
       PSC.[Name] AS ProductSubCategoryName
FROM Production.ProductBackup AS P
INNER JOIN Production.ProductSubcategory AS PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
INNER JOIN Production.ProductCategory AS PC ON PC.ProductCategoryID = PSC.ProductCategoryID
WHERE PC.[Name] = 'Accessories'
ORDER BY ProductCategoryName, ProductSubCategoryName;

 

Here is a partial screenshot of the results. Twenty-nine rows were returned.

 

AdventureWorks ProductSubCategory Accessories
Price increase for Accessories product subcategory

A sometimes recommended practice for temporal tables is to create a separate schema to hold the history table, so we’re going to do that.

/*
Create a new schema to hold the history table
*/
CREATE SCHEMA History;

 

Enabling Temporal Tables In SQL Server

Now we will set up system-versioning for our existing table. Two new columns need to be added to the source or “temporal table.” These must be specified as DATETIME2 and can either be hidden or not. If the keyword HIDDEN is used, then these two columns do not show up at all when this table is queried. You might not want the columns to appear in query results because you might be running SELECT * against the table, even though you know not to do that in a production system, right? Also, if the columns are added as HIDDEN, then you don’t have to account for these columns when you do INSERTs.

Once the table has been altered to add the new columns, which must either have defaults specified (as I did here) or be explicitly populated and declared NOT NULL, we also need to add the PERIOD FOR SYSTEM_TIME clause. After that, one additional ALTER TABLE statement turns on system versioning.

/*
Enable SystemVersioning on existing table
*/

ALTER TABLE [Production].[ProductBackup]
  ADD SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
         CONSTRAINT DF_SysStart DEFAULT SYSUTCDATETIME()
      , SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
            CONSTRAINT DF_SysEnd DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);

ALTER TABLE [Production].[ProductBackup]
SET (SYSTEM_VERSIONING = ON(HISTORY_TABLE = History.ProductBackup));
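
With the columns added as HIDDEN, a quick check shows the difference: SELECT * leaves the period columns out, and they only appear when named explicitly.

SELECT TOP 1 * FROM Production.ProductBackup; -- SysStartTime and SysEndTime are not returned

SELECT TOP 1 ProductID, SysStartTime, SysEndTime
FROM Production.ProductBackup; -- naming the hidden columns brings them back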

What Happens When You Modify Data In a Temporal Table?

Let’s update some rows to populate the history table.

/*
Update some rows to populate the history table. 
We will raise the price of everything in the Accessories category by 10%.
Should be 29 rows changed.
*/

BEGIN TRAN
  UPDATE P
  SET ListPrice = CAST(P.ListPrice * 1.10 AS DECIMAL(9,2))
  FROM Production.ProductBackup AS P
  INNER JOIN Production.ProductSubcategory AS PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
  INNER JOIN Production.ProductCategory AS PC ON PC.ProductCategoryID = PSC.ProductCategoryID
  WHERE PC.[Name] = 'Accessories';

COMMIT TRAN

 

Let’s examine one method to see the current price in the source table and the previous price from the rows now in the history table. This will also show the system start and end times for those rows. This date/time range shows when each row was valid and present in the source table.

 

/*
One method to show the old data and the new data
*/
SELECT PB.[Name] AS PBName, PB.ListPrice AS CurrentListPrice,
PBH.ListPrice AS PreviousListPrice, PBH.SysStartTime, PBH.SysEndTime
FROM Production.ProductBackup AS PB
INNER JOIN History.ProductBackup AS PBH ON PB.ProductID = PBH.ProductID
WHERE PBH.ProductID IN
(
/*These were the 29 IDs I knew we had changed because we returned them in the previous SELECT query
*/
707,708,711,842,843,844,845,846,847,848,870,
871,872,873,876,877,878,879,880,921,922,923,
928,929,930,931,932,933,934
)
ORDER BY PBH.ProductID;

 

Here is a partial screenshot of the results. This table indicates that these rows existed in the source table between August 24th at 04:01:00 and August 24th at 04:01:29.

 

Viewing SQL Server Temporal Table History
A view of SQL Server temporal table history
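
As an alternative to joining the history table yourself, you can use the temporal query syntax. Here is a minimal sketch with FOR SYSTEM_TIME ALL, which returns both the current and historical versions of each row; the period columns must be named explicitly because they were added as HIDDEN.

SELECT ProductID, [Name], ListPrice, SysStartTime, SysEndTime
FROM Production.ProductBackup FOR SYSTEM_TIME ALL
WHERE ProductID = 707
ORDER BY SysStartTime;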

 

Data Retention for SQL Server Temporal Tables

One additional thing to consider when setting up temporal tables is data retention for the history table. There are several methods to manage data retention, but I think the simplest is to set a retention policy on the table itself and let SQL Server handle the data cleanup. By default the history table will store data indefinitely.

First, check that data retention is enabled in the database by querying the is_temporal_history_retention_enabled column in sys.databases. If it is not enabled, use the script below to do that, remembering to set the database context for your environment.

ALTER DATABASE [<myDB>]
SET TEMPORAL_HISTORY_RETENTION ON

To alter an existing temporal table so that history retention is managed, you can run the statement below. Keep in mind that values expressed in DAYS, WEEKS, MONTHS, and YEARS are all valid for history table retention.

ALTER TABLE Production.ProductBackup
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 9 MONTHS));
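
To confirm the retention settings took effect, you can check the catalog. A quick sketch:

SELECT t.name,
       t.history_retention_period,
       t.history_retention_period_unit_desc
FROM sys.tables AS t
WHERE t.temporal_type = 2; -- system-versioned temporal tables only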

 

Next Steps to Take

  1. Head over to the MS Docs to learn more about temporal tables in SQL Server.
  2. If you need to know WHO made a change as well as what was changed then check out this post from Aaron Bertrand.
  3. Look for subsequent posts from me that will cover more about temporal tables.
  4. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

What I Learned From Giving My First Presentation

I was recently thrust into a conundrum: I suddenly found myself in need of a speaker for this month's SQL Server Meetup. My co-organizer and I had one week to find a presenter, set up a new Meetup, and publicize it so people outside of Meetup would know about it. I decided that was too short notice to go out to Twitter and ask for volunteers to speak.

I thought, “What could I talk about?” You see, presenting is something I’ve always wanted to start doing. I just never did it. For many years, I thought that I didn’t know enough and that presenting was for the “famous” SQL Server people to do and I should just wait on the sidelines. However, over the last couple of years I’ve felt more confident about what I know.

Also, I’ve come to realize that even people starting out have things they can share. For example, presentations can just be a series of short talks about work problems that a person has solved. Everyone has those examples. You don’t have to be Brent Ozar, Kendra Little or Grant Fritchey to share problems you’ve solved at work. Presenting that information helps the presenter grow and helps the people hearing the content. People in the audience might hear a new, inventive way to resolve an issue, or they might hit that same problem in the days immediately after the presentation and be helped by having recently heard someone’s explanation of how they solved it, even if the solution wasn’t “ingenious”.

Almost out of nowhere I had an idea. I could talk about features in modern SQL Server that make certain aspects of working with the product easier. These could be enhancements like brand new features and feature sets or things like enhancement to T-SQL. I put together a rough list of items to include. There were close to 20 things on the list. As I worked through the presentation, I narrowed the list to 12 features or enhancements. After about 11 hours building slides, doing research, creating and testing demos and another 2 hours practicing giving the presentation, I was ready. You can see the results of that work here.

However, this post isn’t about the content of the presentation. I want to share what I learned as the result of putting together and giving the presentation. Without further delay, let’s look at the bullet points that follow.

 

  • Be careful what you eat before presenting. You want to focus on your presentation and not your queasy stomach or your heartburn, etc.

 

  • I re-learned where Presenter Notes are in the slides. I had given a presentation many months ago internally to my company on finding good resources for training, and during that process I had to look up Presenter’s Notes for PowerPoint. So much time had gone by since then that I had to look it up again. In the lower right corner of PowerPoint there is an icon with an ^ with four dashes under it and the word “Notes” next to it. Click that and you get a small window below your slide that you can type your talking points into.

Rather than fill up your slides with text that you end up reading, use this Presenter’s Notes area to make short notes about the ideas you want to explain. Or you could type out what you’re going to say into paragraphs in this area. When you run the slideshow, you will see the slide and your Presenter’s Notes on one screen and your audience will see just the slides.

PowerPoint Presenters Notes

 

  •  I learned there is a Presentation Coach feature in PowerPoint that lets you practice your presentation and provides a feedback report. To use this feature, inside PowerPoint select the Slide Show menu at the top and then “Rehearse With Coach”. This feature gives you feedback about things like your use of filler words, how fast you’re speaking and the tone of your voice.

 

  • I learned there is a built-in Design suggestions feature for your slides in PowerPoint. Once I had my content and Presenter’s Notes mostly the way I wanted it, I realized that I needed to do something about the drab looking slides. I hadn’t started with a template of any kind and now I needed help making it visually interesting.

At the top of the PowerPoint menu ribbon I clicked on the word Design. Then off to the far right I saw a button called “Design Ideas”. I left clicked that, and a new set of slide options opened up down the right side. I clicked on the ones I thought would look good, keeping in mind certain key things. First, when designing slides you need to avoid bright or pastel colors that hurt the eyes. Second, those color choices also often make the text hard to read. As I left clicked on options down the right side, the slides changed and I could see what my presentation would look like.

PowerPoint Presentation Design Ideas Menu

 

  • I learned new information about SQL Server as I researched topics I was less familiar with. For some of the features I planned to cover, I had to do research to make sure I understood how they worked. In this way, the experience of presenting forced me to learn more about SQL Server, and that’s a good thing!

So, that’s what I learned from giving my first public presentation. I hope this encourages you to start presenting and makes things a little easier for you when you decide to give it a try.

 

Next Steps to Take

  1. Consider some problems you have solved at work and give some thought to formalizing what you learned by sharing that information with others.
  2. Look up a SQL Server Meetup group and join one. Start attending and think about how your experience can help others.
  3. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

What I’ve Been Doing Recently

Dear readers, it’s been a little while since my last blog post. Life is busy and so I’ve been away for a little bit. I am hoping to get back into the swing of things with more blogging though.

Here’s what I’ve been up to.

  1. Keeping a SQL Server Meetup going. I’ve been working with my co-organizer, Anthony Fortner, to discuss group details and get speakers lined up. I’ve had some difficulty with that recently and I need to go back to the community on Twitter and probably review Sessionize to find some additional people who regularly present.
  2. Finding and transitioning to new employment. Finding new work feels like a full time job. There is a lot of energy that goes into the process. I found a new home about six weeks ago.
  3. Because of the new job I landed, I’ve been figuring out a plan of attack for learning the necessary things about SQL Server in AWS. I’ll likely provide a list of those resources and my plan in a separate blog post.
  4. Watching the 6 part series on Azure SQL from Anna Hoffman and Bob Ward. Coming soon there will be a blog series on their videos. I may use these videos and the Azure SQL learning paths on Microsoft Learn to pursue the DP-300 certification.
  5. Continuing to take care of myself.  I’ve stuck to my routine of taking Saturdays off as described here.  I’ve been watching parts of the NBA Finals with my family and that has been fun and relaxing.
  6. I built and gave my first recorded, public presentation on SQL Server. You can find that here.

I would love to hear from you. Reach out to me via email, leem@leemarkum.com, or on Twitter.

Steps I Took To Handle Burn Out

I want to put a disclaimer at the beginning. I am not a mental health professional. Do not construe anything I say to be professional mental health advice. I am simply telling my story.

I’ve always wanted to be a high impact person. I want my life to count for something. I think most people want that. In the United States, where the idea of “living the American dream” is prominent, this desire to make a difference and to live the dream often leads to people living life by going as fast as possible. By late 2020 I found myself doing that. I found myself going in too many directions.

  1. Trying to start an IT business.
  2. Working a full time IT job.
  3. Studying for an IT certification.
  4. Writing 2 blog posts a week.
  5. Answering questions on technical forums.
  6. Dealing with the stress of a global pandemic.
  7. Moving out of the house I raised my kids in for 17 years, just one day after my birthday.

That’s a tough combination. I was excited and full steam ahead on all of those things for several months in 2020. In December I took vacation time for the last two or so weeks of the year. The plan was to throw myself even more into launching the business, studying for the 70-764 and blogging.  However, what I discovered on those first few days of vacation was that I just couldn’t make myself do any of it. I realized I was exhausted.

You’ve heard the phrase about burning the candle at both ends to describe people who are really working hard and staying busy in life. I felt like in the latter half of 2020 I hadn’t just been burning the candle at both ends, I’d been trying to melt the whole thing with a blow torch! I was burned out – mentally spent. I also began to wonder if I was experiencing symptoms of depression as well.

So, what did I do?

Making Time for Down Time

 

I rested. That “working vacation” in December 2020 turned into a couple of weeks of doing a whole lot of nothing. I actually took a vacation from my problems and my stress. No writing blog posts or studying for certifications, and I didn’t really work on developing the business either.

I also eventually decided to take a consistent day off. For a few weeks in 2020 I was getting up at 7:30 AM on Saturday and Sunday to start work on blogging and business strategy. I would work for 4 or so hours every Saturday and Sunday morning, then I would take a break for awhile and go do something else. During that time I would frequently return to work on blogging that evening. Even when I decided not to get up early on Saturday and Sunday, I was still dedicating the first few hours of both days to blogging and technical work. Essentially, I was working 7 days a week.

Many years ago I worked two jobs and had only 2 days a month off. That’s right – 2 days a month!  I did that for 5 years and was elated when I no longer had to do that. I remember how dropping down to one job changed me quite rapidly, and for the better. Going back to that was not something I wanted to repeat, but I still knew that I would need to put in some effort on my own time to accomplish any of the goals I had.

As a result, I’ve started taking off all day on Saturdays. For probably 2-3 months, I haven’t done anything technical on Saturdays. On Sundays, I sleep in like a person would normally do on their day off and then I do probably 2-4 hours of technical work. This has made a big difference in my mental energy levels. Our bodies and minds were not designed to be going, going, going. We need down time. We need recreation.

Re-focused My Certification Efforts

I had been working hard on the 70-764 SQL Server certification. As I worked through that material, I realized I was learning, or re-learning, a lot of things that I probably wasn’t  going to use any time soon. I had previously been studying for the AZ-900 and switched to the 70-764 because I knew the deadline for that certification was coming. With this new realization, I switched back to the AZ-900 some time in early to mid-January because I could see more potential for using what I was learning. I’m happy to say that in early March I passed the AZ-900! Three weeks later I passed the DP-900 and I’m working on the DP-300.

De-prioritizing My Business

This seems like a bad idea for my business, but it is the choice I made. Remember what I said earlier? I was doing too many things at once. Something had to give.

From March 2020 until early December 2020 I had been working hard on business plans and talking to friends in and out of the tech industry to get input. In October 2020 I incorporated the business, currently just named Lee Markum LLC. It is a tech company focusing on helping people with their SQL Server performance issues. I paid a local web developer a small amount of money to make some changes to the website because those changes desperately needed to be made and I wanted to focus on other aspects of the business.

I contacted a company in Canada who works with SQL Server professionals to update websites. The price was a bit of a steep investment so I wanted to wait until I had more cash on hand. I also made some software purchases around the time of the Black Friday sales in November 2020.

The one thing I hadn’t wanted to do was to try to start this business on my own, but that’s what ended up happening. I invited some friends to launch it with me, and for a few months there was interest. However, what I realized was that while I was full steam ahead on this because the idea of this business had been brewing in my head and heart for about 10 years, the timing just wasn’t right for my friends to join me in the adventure. I have no hard feelings toward my friends. As I said, the timing just wasn’t right for them to be involved in the ways I envisioned.

So, the business went on hold for a bit. I needed to get a detail about my Doing Business As name straightened out anyway. Thanks to the slow grinding wheels of state government, that issue wasn’t resolved until sometime in March 2021.

No Blogging or Technical Forums

I didn’t intentionally decide to take a break from blogging and answering questions on forums. It just sort of happened, but I think it was for the best. Blogging takes a fair amount of time and energy. I took the time off in December, but in January I still found that I only had the mental capacity to do my full time technical job as a SQL Server DBA. Compiling blog ideas and creating drafts still occurred, but that’s as far as I got, which is why I have had only 3 posts this year. I plan to get a queue of posts written so I can post at least once a week. I spent my time in Feb/March focused on the two Azure tests I took so I could get them out of the way.

So that’s a big part of what has been going on with me and my various projects. The previous year has been hard on all of us. I am certain most everyone would like to forget that 2020 happened, and we’re all coping the best way we know how.

Next Steps to Take

  1. Assess your well-being. Think deeply about how you’re truly doing right now.
  2. Don’t be afraid to seek professional help.
  3. Watch this presentation from Tracy Boggiano on “Mental Health in IT” presented at PASS this last Fall.
  4. Check out this post from SQL Server guru Kendra Little where she gives advice to her 20 year old self.
  5. Also read here where Kendra talks about experiencing burn out for herself and what she did.

Top 3 Most Popular Posts of 2020

So, I realize it’s now almost April of 2021 when I’m posting this. Normally this sort of post would come out in January for the previous year. However, I have a reason for this being late and I’ll actually post about that separately.

Showing My Blog Some Love

In mid-2020 I decided to give my blog more attention. There were a number of reasons for this. First, I’d already spent a fair amount of money over the years on hosting and related things, so I didn’t want to waste that. Second, I really wanted to start giving back to the SQL Server community. I have been the recipient of a lot of learning and help through a number of regular, and occasional, bloggers. I have also purchased and mostly read my fair share of SQL Server books, which is another way the community has helped me. Third, I wanted a place to record my own troubleshooting and learning so that when I needed to do something again, I would have a record of what to do. By the way, I have seen this repeatedly listed as a reason that someone blogs.

Blog Posts by Month

So, here is the break down of the number of posts by month for 2020:

  • April – 1
  • July – 1 This was the last week of the month and marks my decision to start blogging more consistently.
  • Aug – 3
  • Sept – 4
  • Oct – 4
  • Nov – 7 – I had built a good backlog of posts that were ready to publish and nearly reached my goal of posting twice a week for the entire month
  • Dec – 4

To get the list of the most popular blog posts I looked at Google Analytics PageView data for Jan 1, 2020 – Dec 31, 2020.

Popular Post # 1

The most viewed page based on this data had 354 views and was part 1 of a series on PowerShell for the DBA. The series was meant to show some “getting started” type techniques so that someone could open PowerShell and start exploring what the tool could do for them as a DBA. I explored how to find commands that might be of interest and how to use the help system in PowerShell to figure out how to use the command. I also provided some practical examples of PowerShell to use in every day scenarios.

https://leemarkum.com/archive/2020/11/introduction-to-powershell-for-the-dba-part-1/ 

Popular Post #2

The second most viewed page had 332 views and covered 5 different ways to make a SQL Server backup. Backups are so important, especially in the world of data. They give you and your business a way to recover data that has been lost or damaged in some way. As a Database Administrator, you need to be able to recover, and that all starts with taking backups. That seems obvious, but I still read stories of businesses that aren’t even backing up their databases. If you do have backups, you also need to test whether you can use those backups to actually restore data.

https://leemarkum.com/archive/2020/09/5-ways-to-make-sql-server-backups/ 

Popular Post #3

The third most viewed post had 273 views and was about using a new feature in SQL Server 2019. This feature is based on the PolyBase technology that arrived in SQL Server 2016. However, the feature has now been extended to allow SQL Server to have external tables over OLE DB and ODBC sources, such as SQL Server, MS Access, and, yes, even things like Oracle and IBM iSeries/AS400. The external, or virtual, table uses mostly built-in drivers. With this feature you can run SELECT statements against remote data sources without a linked server. The environment I’m in has an AS400/IBM iSeries, so I wanted to see if I could get the feature to work with that data source.

https://leemarkum.com/archive/2020/04/querying-an-as400-using-sql-server-2019-data-virtualization/

There you have it. I hope you enjoyed these posts in 2020, and if you haven’t seen them, give them a click and read through the information. I hope you learn something from the posts and if you have questions or comments reach out to me here, or on Twitter, and I’ll be happy to talk to you.

My Availability Group Database Isn’t Synchronizing

I currently manage 5 AlwaysOn Availability Groups. Two are on SQL Server 2014 and overdue for an upgrade, while three of them are on SQL Server 2017.  From time to time I have run into a couple of different situations that I needed to troubleshoot and I want to tell you where to look and what to check on in these scenarios. I can’t possibly tell you about everything that could go wrong, but I can tell you about my experience with AlwaysOn Availability Groups and let you decide if that experience helps you or not.

A Single Database Is Not Synchronizing

If a single Availability Group database is in the “Not Synchronizing” state, what would you do?

There are several things to check. First you could look in SQL Server Management Studio to see if data movement has been suspended. If it has then you will see an icon similar to two pipes || over the database icon.  You will also see messages in the Error Log about data movement being suspended. If this is the case, try to resume data movement. This can be done in SSMS by right clicking the database in the Availability Databases folder and choose “Resume Data Movement”. Alternatively, you could execute the below T-SQL.

ALTER DATABASE [YourDatabase] SET HADR RESUME

Beyond that you can also use PowerShell to resume data movement for the database.

<#
gcm is the alias for Get-Command. Below I'm using wildcards to find any commands with the word "resume" in them,
and specifying that I want PowerShell to look in the DBATools module.
#>
gcm *resume* -Module DBATools

<#
Once I have found a command I think might help me, then I run the alias Help for Get-Help
to find out how to use the command

#>

help Resume-DbaAgDbDataMovement -Detailed

#Resumes data movement on db1 and db2 to ag1 on sql2017a. Prompts for confirmation.

Resume-DbaAgDbDataMovement -SqlInstance sql2017a -AvailabilityGroup ag1 -Database db1, db2
    

 

Second, check the SQL Server Error Log for errors related to the AG, and check the AlwaysOn_health Extended Events session that runs by default for AGs. To check this default health session, open SSMS and navigate to Management > Extended Events > Sessions > AlwaysOn_health. Then click the plus sign next to it to drill down to the file. Right click the file and choose “View Target Data”. After doing that, you can filter and search the file for relevant entries.

Extended Events AlwaysOn Health
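
If you prefer T-SQL over clicking through SSMS, you can read the session's file target directly with sys.fn_xe_file_target_read_file. A sketch, assuming the default session name and file location:

-- Returns one row per captured event; the XML can then be shredded for details.
SELECT CAST(event_data AS XML) AS EventDataXml
FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL);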

If none of these things gives you enough information to resolve the problem, you could also restart the SQL Server service on the secondary that is showing the database that isn’t synchronizing. Again, there are many ways to do this, and I’m going to encourage you to look at the DBATools module in PowerShell to find an automated way to handle this. As a last resort, you could remove the database from the AG and then re-add it to the AG.

None of the Availability Group Databases Are Synchronizing

What if all of the databases are in the “Not Synchronizing” state? What would you do then?

This is a little different because in this case every database on at least one secondary is in the “Not Synchronizing” state. The prior scenario was only about a single database with a synchronization issue.

The primary thing I check in this scenario is whether the cluster service is running on the secondary. If it isn’t, then none of the databases will synchronize. I have seen this happen several times, particularly after secondary servers have been rebooted. You could also have a network disconnect between this replica and the primary. Again though, the SQL Server Error log or the AlwaysOn_health extended events session will show you this. Additionally, you can look in the Cluster Events section of the Failover Cluster Manager for one of the nodes in the AG. In there you may find that you have lost quorum for some reason.
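
You can also check the cluster's view of its members from T-SQL. A quick sketch using the HADR DMVs:

-- member_state_desc shows UP or DOWN for each cluster node
SELECT member_name, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;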

If for some reason you need to get down into the fine details of what is happening with the failover cluster, then check out the following path in the Windows Event Viewer.

Event Viewer > Applications and Services Log > Microsoft > Windows > FailoverClustering. In that location are three logs called Diagnostic, DiagnosticVerbose and Operational.

Next Steps To Take

  1. Go to YouTube and find videos on Availability Groups.
  2. Consider taking a class from the fine folks at SQLSkills about Failover Clustering and Availability Groups.
  3. Reach out to somebody if you have questions. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

 

SQL Server Query Store Overview

Query Store came out in SQL Server 2016. It was the next evolution of Microsoft’s attempts to provide valuable performance data. In this post you’re going to learn a bit about the history of native monitoring in SQL Server, why you should be using the Query Store, as well as how to view the information made available in SQL Server Query Store. This post is just an overview meant to get you to jump into using the feature. This feature is rich enough that someone can present on it easily for an entire day.

A Very Brief History of Native Monitoring in SQL Server

With SQL Server 2005, the Dynamic Management Views and Dynamic Management Functions were a giant leap forward in terms of providing data about what was going on inside SQL Server. Performance tuning with those objects opened up a whole new world of insight, and the “Waits and Queues” methodology for performance tuning rapidly sprung up.

The Data Collection and Management Data Warehouse feature, new in SQL Server 2008, collects both server and database level performance metrics. Deployment of this feature is possible on both a single instance of SQL Server and across multiple instances. That data could then be centralized and viewed through a series of reports.

Extended Events also came out in SQL Server 2008. Despite being a lighter weight replacement for SQL Server Profiler, Extended Events was a bit unwieldy and there was no UI. Adoption was slow.

In SQL Server 2012, Microsoft built a UI for Extended Events, and that helped improve adoption.  Extended Events opened up a wealth of information that Profiler cannot offer.

Then in SQL Server 2016, the Query Store came on the scene. Now that we’ve arrived at this feature, let’s talk about it!

Why Use SQL Server Query Store?

 

  1. Query Store solves the challenge of persisting performance data over time.

The feature collects and persists information about query performance by gathering the queries run in a particular time interval, their query plans, and aggregated runtime performance data for those queries. To do this natively, you would need to run Extended Events continuously and capture a lot of data. That data would then need to be persisted in a way that allows for easy analysis later. Capturing query text, plans, and other data about query performance isn’t free, even with something like Extended Events. Also, query plans are pushed out of memory as time goes on and, of course, every time you restart the SQL Server, valuable performance data is lost. Query Store solves this problem.

2. Persisted data in Query Store allows for detecting issues related to query plan changes.

The feature allows the DBA to track queries across time and displays all the different query plans associated with the query, up to the default of 200 execution plans per query. When a plan changes, the Query Store reports bubble up this information. The information also includes aggregated runtime data associated with each plan. Detecting this sort of issue without Query Store is much more challenging.

3. With Query Store you can see query performance across time and from a variety of perspectives.

Microsoft added query wait statistics to the feature in SQL Server 2017. Bar graphs in the Query Wait Statistics report show categories of wait types, such as CPU, Memory, and Lock, and the queries that contribute to each wait type are visible in the report. The DBA then has valuable insight into waits experienced by queries thanks to Query Store. This report can be based on Total Wait Time, like below, or Avg, Min, Max, and Standard Deviation. Left clicking on a wait category shows the queries contributing to that wait type.

 

Example report for Query store Wait Statistics report

With the built-in Query Store reports, the DBA can see expensive, poorly performing queries with the highest CPU, duration, logical reads, and more. This data can also be based on totals, averages, min/max, and standard deviation. By doing so, Query Store offers insight into the most expensive, painful queries that are hurting performance during any particular time slice, across the days of history retained in the Query Store. (A T-SQL sketch of this kind of query follows the list.)

4. You can view missing index recommendations for expensive queries using Query Store. Now, you shouldn’t go out and blindly apply the recommendations, but this does offer you the chance to analyze existing queries and the missing index recommendation to see if index cleanup and consolidation needs to happen. This process can benefit not only the query with the missing index recommendation, but other queries in the workload as well.
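
As referenced in point 3 above, here is a sketch of finding the top CPU consumers straight from the Query Store catalog views rather than through the reports:

SELECT TOP 10
    qt.query_sql_text,
    SUM(rs.avg_cpu_time * rs.count_executions) AS TotalCpuTime
FROM sys.query_store_query_text AS qt
INNER JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
INNER JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
INNER JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
ORDER BY TotalCpuTime DESC;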

How To Enable the SQL Server Query Store

Enabling the Query Store is simple enough with two lines of T-SQL, but there are some defaults and options that you should understand before you enable the feature. Not all defaults are going to be desirable and you’ll learn which ones you should adjust as we progress. I will be showing you this in SQL Server 2017.

Below is what the Query Store UI looks like when initially opened and Query Store is not enabled. To get to this, open SSMS and connect to a SQL Server that is at least version 2016. Then drill down into the Databases folder, right click a database name, and left click the “Properties” option. Then left click Query Store in the “Select a Page” area on the left.

Initial Query Store SSMS
Initial Query Store Screen Before Enabling

Now, when you left click on the drop down menu for Operation Mode (Requested), you get options for Read Only and Read Write. To enable the Query Store and have it capture performance data, select the Read Write option. Now you have a screen that looks like the one below. Please note the two options I have bracketed in red. You will want to change these options, and we will discuss that as I explain what they do.

Query Store Defaults
Defaults for the Query Store

Query Store does not write collected data synchronously, but asynchronously. The “Data Flush Interval (Minutes)” value controls how long collected query data sits in memory before being written to disk. There is a trace flag that controls whether the data is written to disk prior to a shutdown as well as one that controls whether the data in Query Store is loaded first before a database comes online.

The “Statistics Collection Interval,” also known as INTERVAL_LENGTH_MINUTES, determines the time period over which aggregations of the data occur. Valid values are 1, 5, 10, 15, 30, 60, and 1440 minutes. The smaller the value, the more granular your data will be and the more space the feature will use.

The defaults for the following options should be changed. “Max Size (MB)” controls how large the Query Store can be and works in conjunction with a couple of other settings. The default size is 100 MB, and that is simply too small to store data on SQL Servers with much of a workload. Erin Stellato of SQLSkills.com recommends 2048 MB as a starting point, and size can be monitored from there.

The setting “Query Store Capture Mode” in SQL Server 2017 defaults to “All”. This captures even queries with virtually no impact on performance metrics like CPU, duration, logical reads, etc. It is unlikely that you need to capture every single query ran. It is probably better to set this to “Auto”.

The amount of space used by this feature is also influenced by the Size Based Cleanup Mode setting. The default is auto and it should be left that way. When the storage size for Query Store gets close to the maximum set in Max Storage Size (MB), then a clean up process kicks in, removing older data.

The space used is also influenced by the “Stale Query Threshold (Days)” setting. The default is 30 days, and this controls how much query performance history is persisted. The default is a good starting point; depending on your needs, you may want to keep more history so you can see changes over a longer period of time. You might choose to do this if you really have no Dev or QA environment to speak of at your employer, so most development is done in production. You know you shouldn’t be doing that, but I also know that many DBAs don’t work in the ideal, perfect environment.

As mentioned earlier, the “Wait Statistics Capture Mode” is available and on by default. If you leave it on, which is recommended, you get the benefits discussed earlier. You also will have a larger footprint for your Query Store so this is another option that influences size.

Once you have all of these options configured, simply press OK in the SSMS dialog.

If you prefer to use T-SQL, below is a sample script to enable this feature and change the two settings discussed above that are not great defaults.

USE [master]
GO
ALTER DATABASE [MyDB] SET QUERY_STORE = ON;
GO
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 2048,
QUERY_CAPTURE_MODE = AUTO);
GO
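
Once enabled, you can confirm the settings took effect by querying sys.database_query_store_options from the database in question. A quick check:

USE [MyDB];
GO
SELECT actual_state_desc,
       max_storage_size_mb,
       current_storage_size_mb,
       query_capture_mode_desc,
       stale_query_threshold_days
FROM sys.database_query_store_options;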

Next Steps to Take

  1. Consider your current monitoring solutions and consider making Query Store part of your approach to performance tuning.
  2. If you like reading books, Tracy Boggiano and Grant Fritchey have a great book on Query Store.
  3. If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.

Measuring Your DBA Skills

Over the last 9 weeks I took you on a journey of skills and career topics related to being a SQL Server DBA. We looked at the Production DBA. We saw skills and career topics from the beginning to mid-career to Senior DBA. Then we looked at the Development DBA and their skills and career development needs. Finally there was a wrap up post.

To make it easier for everyone to get to these posts, I decided to bring them all together on a single page.


 Production DBA Skills Years 0-2

Production DBA Skills Years 2-4

Production DBA Skills Years 4-7

Senior Production DBA Skills Years 7+

Development DBA Skills Years 0-2

Development DBA Skills Years 2-4

Development DBA Skills Years 4-7

Senior Development DBA Skills Years 7+

Series wrap-up

If you would like help with anything in this post, or with something else related to SQL Server, reach out to me here, or on Twitter, and I’ll be glad to offer assistance.