It’s been a while…

I know I have not blogged in a while, and that is bad on my part. I have been so busy with the new job that blogging just doesn't make the list. It has also been a struggle figuring out what I can share and what I should share. Well, I finally found something I can share that has been plaguing me for a while and that I think will be beneficial to others.

I run a Mac to do my daily work. I will run VMs or RDP sessions to Windows boxes so I can do SQL work. I have been struggling to be more efficient and avoid using Windows when I do not have to. Plus, if I can stay on the Mac side, it will aid my current goal of learning Linux, containers, and other database platforms. To do more SQL Server related work from my Mac, I need to be able to connect to various SQL Servers to run queries. In my environment, I connect to a development domain for most of my work and a production domain for things that are customer facing. I also want to be able to use Windows Authentication to connect to these various SQL Servers. This has proven difficult on my Mac, as it is not joined to a domain.

I have heard of other members of the community doing this, but I kept struggling. I had read this blog post several times, but nothing worked.

Then it dawned on me and I realized my error. To use Windows Authentication, you must have Kerberos working. This was the missing link. For Kerberos to work, you must have a Kerberos ticket, and the server you want to connect to must have a Service Principal Name (SPN) created for it.

First, I made sure to run the Microsoft Kerberos Configuration Manager for SQL Server. You can download that here. It should be run from a Windows box that is joined to the domain. When you run it against one of your SQL Servers, it will scan Active Directory to see if an SPN exists for that SQL Server. If there is an issue, you will see the below:


Under the Status column, you can see that it says Missing. That means an SPN has not been created for this SQL Server. The Kerberos Configuration Manager will provide you with a script to fix this. You will want to give this script to your domain admin so they can run it and correct the issue. Once it has been run and an SPN has been created, you can use Kerberos for Windows Authentication. If you don't have an SPN, Windows will fall back to NTLM, but in the Mac world you cannot fall back to that, and thus cannot use Windows Authentication.
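For reference, the fix script the tool generates is typically a setspn command. A hypothetical example for a default instance listening on port 1433 (the server, domain, and service account names below are made up) might look like:

```
setspn -S MSSQLSvc/sql2016n1.rangers.lab:1433 RANGERS\sqlsvc
```

The -S switch checks for a duplicate SPN before adding the new one, which is why it is generally preferred over -A.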


Second, after the SPN is in place, open a terminal and run:

kinit username@DOMAIN.COM

username is what you currently use to log in to your Active Directory domain, while DOMAIN.COM is the name of that domain; note that it must be in ALL CAPS.
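One gotcha: if you pass the realm in lowercase, kinit will typically fail to find it. Here is a tiny sketch that derives the ALL-CAPS realm from a lowercase domain name (the user and domain values below are placeholders for your own):

```shell
# kinit expects the realm portion of the principal in ALL CAPS;
# derive it from the lowercase domain name rather than typing it by hand
user="chris.lumnah"
domain="rangers.lab"
realm=$(printf '%s' "$domain" | tr '[:lower:]' '[:upper:]')
echo "kinit ${user}@${realm}"   # prints: kinit chris.lumnah@RANGERS.LAB
```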

You will be asked to authenticate by inputting your password. If everything is correct, you can run the next command:

klist username@DOMAIN.COM

You should get output similar to the below:
Credentials cache: API:BA74140D-7B6E-49B5-93E1-EA069AC87B20
Principal: chris.lumnah@RANGERS.LAB

Issued Expires Principal
Mar 20 20:05:58 2019 Mar 21 06:05:52 2019 krbtgt/RANGERS.LAB@RANGERS.LAB

Now you can use Azure Data Studio to connect to your SQL Server on Windows using Windows Authentication.

New job…new thinking?

So I recently started a new position as a SQL Server Solutions Architect at Rubrik Inc. We concentrate on protecting the data in your data center. My focus will be SQL Server specific protection. Hit up the website or search YouTube for some videos if you want to know more.

Backup and recovery has always been something I am interested in. It was the first skill I learned when becoming a DBA. Before PowerShell was a thing, I wrote my own database backup process using SSIS that was highly dynamic, configurable, and, most importantly, simple to support. With the introduction of the Data Domain, I learned how to optimize the performance of a backup and restore. Over my career, I have learned that data protection is probably the single biggest job the DBA has.

In my new role, I am expected to keep current and, more importantly, learn new things. SQL Server 2016 introduced a feature called Stretch Database. Essentially, this feature lets you take the data in a table or tables and put part of it in Azure while part stays local. Conceptually, this works similarly to table partitioning, but mechanically it is different. Table partitioning separates your table into partitions based on a partition function. All of your data is still in the table; it is just reorganized into those partitions so queries can run more efficiently against smaller subsets of data. Stretch is similar in that you have a filter function that says what data stays local and what goes to Azure. The big difference is that the data going to Azure actually leaves the local database.

Now, why would you do this? A good example might be a large sales order system. For the most part, your queries run against the most recent data. By keeping that data local, those queries return fast. Queries that need the old data will have to reach into Azure to get it. Stretching a database allows you to place that old data on cheaper storage. Yes, it will probably take longer to query, but that is by design: put the old, less used data on cheap storage to free up the local, faster storage for more often used data.
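To make the filter idea concrete, here is a sketch in T-SQL of what stretching a table looks like, assuming the SQL Server 2016 Stretch syntax; the table, column, and function names are hypothetical:

```sql
-- Inline function that marks rows older than 2017 as eligible to migrate
CREATE FUNCTION dbo.fn_StretchPredicate (@OrderDate datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < CONVERT(datetime2, '2017-01-01');
GO

-- Rows for which the function returns a row migrate to Azure; the rest stay local
ALTER TABLE dbo.SalesOrders
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_StretchPredicate(OrderDate),
        MIGRATION_STATE = OUTBOUND));
```

Without a filter predicate, the entire table migrates.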

I started to think about the implications for backup and recovery, and how this affects RPO and RTO.


Thankfully, existing native backup scripts do not need to change to back up a database that has been stretched to Azure. This sounds great: I can offload a bunch of data to the cloud and nothing has to change from a backup and recovery perspective.

Well, not so fast.

When you do a native SQL backup, SQL Server performs what is called a shallow backup. This means the backup contains only the local data plus the data eligible for migration at the time the backup runs. Eligible data is data that has not yet migrated, but will migrate to Azure based on the filter function.

So what happens to the data that has already moved to Azure? Azure will back that data up for you automatically, with storage snapshots taken every 8 hours.

See this link for more details.


So I have a backup, or really backups, of my data; how do I do a restore? Well, it depends…where is the failure?

If the failure you are recovering from is local:

  1. Restore your local database as you normally would from a native SQL backup.
  2. Run the stored procedure sys.sp_rda_reauthorize_db to reestablish the connection between the local database and the Azure stretch database.

If the failure is in Azure:

  1. Restore the Azure database in the Azure portal.
  2. Run the stored procedure sys.sp_rda_reauthorize_db to reestablish the connection between the local database and the Azure stretch database.
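In both cases, the reconnect step might look like the following; the database and credential names here are hypothetical, and the credential must be the database-scoped credential your stretched database already uses:

```sql
USE [SalesDB];  -- hypothetical stretched database
GO
-- Reconnect the restored local database to its remote Azure data
EXEC sys.sp_rda_reauthorize_db
     @credential = N'StretchCredential',  -- existing database-scoped credential
     @with_copy = 1;  -- 1 = connect to a copy of the remote database
GO
```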

While the above looks simple, it can be a bit more involved as you walk through the details here.

Ultimately, Stretch Database is a cool piece of technology that makes you think differently. It forces you to think in terms of a recovery strategy rather than a backup strategy. You have to test out how to restore this type of database to understand the steps involved. You want a run book that shows what you need to do to recover a stretched database, and you want to plan your backups to most effectively give you a successful recovery. Essentially, you need a recovery strategy, not a backup strategy.


Hmmm, sounds like something Paul Randal has been saying for a while. So basically the same thinking, just reinforced.

TSQL Tuesday #96: Thanking those that helped along the way

This month's TSQL Tuesday fits the theme of the month: Thanksgiving. The TSQL Tuesday invitation is being sent out by Ewald Cress (b|t).

There have been many people who have helped my career along the way. Most have indirectly helped via their blog posts, videos, tweets, training classes, user group topics, SQL Saturday Sessions, or PASS Summit sessions. Those people have spent countless hours to help the community learn and get better by the work they do and share. Some of these people I have had the pleasure of meeting and thanking in person, some I have not had that chance yet.

While I have crossed paths with some incredibly smart people in my career, there are those that have left a great impact.

I started my career as a developer and fell into the DBA role. My first role was a good starting role for a DBA: a small environment, a few databases, and exposure to both OLTP and data warehousing. I could learn on the job and quickly grasp the concepts of what needed to get done to keep things running relatively smoothly. However, I had no measuring stick. I had no way to determine if I was doing a good job other than that no one was at my desk complaining the data warehouse was down. I went out looking for a new position in the field to give me a greater challenge and the ability to learn the things I did not know. I went out for an interview, which I did not think went all that great. Questions were asked that I admittedly did not know the answers to. This brings me to the first person who had a great impact on my career.

Scott Hitchcock

He saw that I did not know certain things, but also saw that I was hungry to learn, and he recommended that his boss hire me. Shortly thereafter I started as a Jr. DBA. I went from an environment that was basically the size of a swimming pool to one the size of Lake Ontario. He taught me tons about handling a large environment: clustering, HA/DR, performance troubleshooting and tuning, how to handle changes in a good way, security, when to pull the trigger on an action and when not to, and much more. Some of these things I had done in my previous environment, but if I ran into a wall, that was it; I could go no further, as I did not know how to get past it. Under Scott, the wall got further away, and when I did reach it, I now had someone I could ask for help. Getting stuck was now a delay, not an all-out halt. I learned so much during my 4 years working with him. He let me ask questions when needed, and work through an issue on my own when needed. Most importantly, when I screwed up, he would course-correct.

Rick Heiges and Larry Chestnut

I entered the world of consulting, which is like stepping onto an alien world. No longer was I exposed to one environment with its unique set of challenges. I was now exposed to a different environment each week, with a different set of challenges and requirements. I had to adjust my thinking each week to solve a different set of problems under a new set of rules. I was thankfully paired up with two architects who would guide me through various challenges. They elevated my thinking from the super low-level details of a project to the higher-level thinking of an architect. They taught me to understand the bigger picture and apply that bigger-picture knowledge to the low-level details I was accustomed to. They also both taught me how to present the work I was doing at different client sites to different audiences. At every client, there are always two audiences. The first is the day-to-day worker: the in-house DBA or other IT staff. The second is management, whether C-level or other. While both audiences appreciate the level of detail I will go to in solving a problem, they are not necessarily both interested in hearing all those details. The IT worker may want to, so they can understand what is going on and resolve the issue at hand, but management does not. They want to know how fast the issue can be fixed and how much impact the solution will have. The message to management needs to be less technical and detailed than the one to the IT staff. I now feel comfortable talking through a solution with either audience thanks to these two.

Mike Hillwig and Andy Mallon

I have not had the pleasure of working with either of these two people, yet both have had an impact on my career. While both are highly skilled DBAs, their impact comes from my interactions with them at user group meetings. While working as a consultant, I had to learn to present, but secretly I did not like doing so. I love doing the work, but hate being the center of attention. I would much rather be behind the scenes making things awesome than in front of everyone. I am pretty sure Mike and Andy both understand this about me, yet don't care. Generally, when I go to a user group meeting, I will sit in the back and listen in, somewhat participate in the discussions, and work on something I was dealing with at the office. One of those times, Mike was looking for speakers for upcoming user group sessions. He was telling everyone that we all have a story to tell and can share something that will help members of the group. He then went on to say he was going to pick someone randomly to do a lightning talk at the next meeting. He stopped what he was doing, put his hands on the table, and just proceeded to stare at me until I looked up. I was his random pick. As he says, I was volun-told to present. He later volun-told me to speak at SQL Saturday Boston.

While Mike's involvement with the user group has lessened, Andy's has picked up. He has worked tirelessly to make Boston SQL into a strong user group, while also becoming the organizer of the SQL Saturdays in the Boston area. While those two things alone are enough to say thank you for, Andy is also always pushing me to present more or blog more. I have had many private conversations with Andy that have left me thinking and inspired to do more. He has also made me realize that while part of me still hates presenting, not out of fear but out of hatred of being the center of attention, another part of me likes it. The part that likes it is the part of me that likes to help people. I like to see others learn and get better. It is a great feeling when you can help someone and see that person grow.

Each person in this post has had a great impact on my career, and I cannot thank them enough. Each has helped me grow and get better over the years. Scott imparted great technical skills. Rick and Larry added to and sharpened those skills and helped me translate them into ways different audiences could understand. Mike and Andy got me out of my head and started me giving back.

Automatic Seeding Error

In my lab, I decided to play around with the automatic seeding functionality that is part of Availability Groups. This was sparked by my last post about putting SSISDB into an AG; I wanted to see how it would work for a regular database. When I attempted to do so, I received the following error:

Cannot alter the availability group ‘Group1’, because it does not exist or you do not have permission. (Microsoft SQL Server, Error: 15151)

This seemed odd to me. What would cause this error? What changed in my environment?

First, I remembered that I had broken down my AG and recreated it in Management Studio. When I created the group, I did not put any databases into it, so I could test automatic seeding.

Next, I decided to go through the process again, but instead of using the GUI, I scripted out the commands to run manually. This way I could pinpoint where the process was failing.

:Connect sql2016n1

USE [master]
GO
ALTER AVAILABILITY GROUP [Group1]
MODIFY REPLICA ON N'SQL2016N2' WITH (SEEDING_MODE = AUTOMATIC);
GO

USE [master]
GO
ALTER AVAILABILITY GROUP [Group1]
ADD DATABASE [WWI_Preparation];
GO

:Connect sql2016n2

ALTER AVAILABILITY GROUP [Group1] GRANT CREATE ANY DATABASE;
GO
When I ran the above code, the steps that were to run on SQL2016N1 succeeded, but those that were to run on SQL2016N2 failed. Additionally, I received the below message:

Msg 15151, Level 16, State 1, Line 24
Cannot alter the availability group ‘Group1’, because it does not exist or you do not have permission.

I went and looked at the AlwaysOn High Availability folder in SSMS on SQL2016n2 and found nothing.


So, the group could not be given permission to create any database, because the group did not exist on that replica. Then it dawned on me why.

Creating an availability group without databases using the GUI only creates the group on the primary. It does not reach out to the secondary replicas to join those nodes to the group. That must be done manually. So, to resolve the error, I had to run this code on SQL2016N2:

USE [master]
GO
ALTER AVAILABILITY GROUP [Group1] JOIN;
GO
This allowed the secondary replica to join the AG that was established on SQL2016N1. Then I was able to proceed with the next step of running:

USE [master]
GO
ALTER AVAILABILITY GROUP [Group1] GRANT CREATE ANY DATABASE;
GO
Once this was completed, the database started seeding from one node to the other.

I think the biggest takeaway is the importance of using a script for DBA actions instead of the GUI. It allows for a more focused and deliberate execution, where you can see things happen in order. This, in turn, teaches you and allows for greater flexibility when things go bump.

Enabling AlwaysOn for SSISDB

I am at a client where I get to do some fun work. I get to work on a brand new environment that is dedicated to SQL Server, with some pretty cool specs. The biggest thing is that they are going all 2016: both the OS and SQL Server will be 2016. This has led to a lot of fun and interesting discussions, one of those being how we place the SSIS catalog into an availability group.

This led me to ponder the possibilities. When AlwaysOn first came out, this was not supported. Sure, you could find clever workarounds, but at the end of the day they were not sanctioned by Microsoft, which would leave the client in a bit of a bind if they ran into issues. However, starting in SQL Server 2016, this is now supported. Microsoft has made it possible to officially place the SSISDB into an availability group.

Which group you put the SSISDB in is a design question that you will need to answer for yourself. In short, it should go into a group that makes sense. If all your databases are in one group, then placing this DB into the same group makes sense. If you have multiple groups, you may need to make a choice that best suits your application needs.

In this example, I will assume you already have an availability group set up with a listener. In my lab, I have a very simple setup: my availability group is called Group1 and my listener is called Listener1.

This video shows the steps I took to add the SSISDB into an availability group. I even show publishing a SSIS package to the listener name instead of a SQL Server Name.

In case you do not want to watch the video, below is the process I did to add the SSISDB into the availability group.

  1. On the Primary Replica, open SQL Server Management Studio
  2. Right click on Integration Services Catalogs, then click Create Catalog.
  3. On the Create Catalog dialog, check the box to enable automatic execution of Integration Services stored procedures at SQL Server startup, then enter a strong password.
  4. Open the Availability Groups folder and navigate to the group you want to add the SSISDB to. Right click on Availability Databases and click Add Database.
  5. Due to the encryption that the SSISDB uses, you will need to provide the password you used when creating the catalog. Put that password into the password field, click Refresh, and then click the check box next to SSISDB. Click Next.
  6. On the Add Database to Availability Group dialog, ensure that Automatic Seeding is selected. Click Next.
  7. You will be presented with a list of the secondary replicas; connect to each one listed. Then click Next.
  8. The information will be validated. As long as there are no failures, click Next.
  9. On the summary dialog, click Finish. SSMS will add the database to the availability group.
  10. Click Close.
  11. Once the database has been added to the availability group, go to the Integration Services Catalogs folder, right click, and click Enable AlwaysOn Support. (You may need to refresh this folder before the option is available.)
  12. Click Connect next to each secondary replica. Then click OK.

The SSISDB is now established in an availability group and will fail over based on the group rules.
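If you would rather verify the result with T-SQL than by eyeballing SSMS, a query along these lines (Group1 is my lab's group name) should show the SSISDB synchronization state on each replica:

```sql
-- Confirm SSISDB is in the group and synchronizing on each replica
SELECT ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
    ON ag.group_id = ar.group_id
JOIN sys.dm_hadr_database_replica_states AS drs
    ON ar.replica_id = drs.replica_id
JOIN sys.availability_databases_cluster AS adc
    ON drs.group_database_id = adc.group_database_id
WHERE ag.name = N'Group1'
  AND adc.database_name = N'SSISDB';
```

Note that this DMV only reports all replicas when run on the primary.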

Spring Cleaning

The birds are singing, the grass can be seen again, the air is warm, the sun is shining, and, more importantly, baseball has started. I finally get to go enjoy some downtime at America's most beloved ballpark: Fenway Park, for the non-New Englanders. Yes, spring is finally here, and with spring generally comes some cleaning up. I have a VM lab that I run on my Surface Pro 4. Nothing crazy, just a bunch of VMs to test out different versions of SQL in different scenarios. Recently I had to build up a 3-node cluster to test Windows and SQL Server 2016 Always On Availability Groups for a client. The tests required me to tear down and install SQL Server multiple times while I tested out a set of scripts to install and configure SQL, and later to test out failover scenarios. More on that later, though.

As I was going through my environment, I realized I had created a new domain controller for my tests. This DC has a new name and domain name, different from my other VMs. I quickly realized that this would cause me issues later with authentication. No worries. I will just boot up the VMs and join them to the new domain. Easy-peasy. Now let me go test out my SQL Servers.


I received a login failure with access denied. Using Windows Authentication with my new domain and recently joined server was not working. Why?…Oh right, my new user ID does not have access to SQL Server itself. As I sat there smacking myself in the head, I was also thinking about the amount of time it would take to rebuild those VMs. Then it hit me!!!

PowerShell to the rescue!!!

More specifically, dbatools to the rescue.

Now if you have not heard of dbatools, you have to check it out. Head over to the site and look around. This set of PowerShell cmdlets started out as a project by Chrissy LeMaire to make database migrations super simple: with a couple of cmdlets, she can migrate an entire server and sit back enjoying a beer. With the help of a few MVPs and input from the SQL community, it has evolved into a fairly comprehensive set of cmdlets to aid DBAs everywhere in their day-to-day tasks. I highly recommend checking it out.

One of the cmdlets in dbatools is Reset-SqlAdmin. This is a super useful command. You point it at a SQL Server, and it will either reset the sa account or add an account to your SQL Server with sysadmin access. It is meant as a tool for when you have lost access to your SQL Server. This fit my needs perfectly. It is as easy as running the below:

Reset-SqlAdmin -SqlServer sqlserver\sqlexpress -Login ad\administrator

That is it. It will go through its process, and after a few seconds you will have access to your SQL Server. Want to see it in action? Check out the video on the site. One thing to note: this process requires an outage, as it will stop and start your SQL services. So do not use this in production.

So back to my issue…I ran this against my 6 VMs and, voila, I was in. I had access to my old VMs with little to no effort whatsoever. This saved me hours of time and effort. Now I am able to reorganize the VMs in my lab and clean up old users and databases.

PowerShell and dbatools: making things easy-peasy!



My WTF….Really? Moment

So, to join in this month's #tsqltuesday88, we are to write about a "WTF" moment. Mine is not directly database related, but it is related.

I was presenting at the NESQL user group monthly meeting, giving a lightning talk on PowerShell for the DBA. (See…somewhat related.) It was not meant to be a long presentation, at most about 15 minutes. I was coasting along, talking about the enhancements made by Microsoft and the community, when all of a sudden my machine would not accept any input from my keyboard or mouse. I have a Surface Pro 4 that suddenly decided to stop talking to the keyboard. Now, I was not about to let this stop me. Yes, it slowed me down, but stop me? No…well, not yet. I tried to use the touch screen. Yay, it was working. So I tried to continue on using the touch screen, but then realized that I could not. There is no F8 key, nor a way to highlight text and then copy and paste it.

After a few minutes of trying to live troubleshoot, I had to give in to the demo gods. They had not smiled favorably upon me this day. I took my Surface, went to my seat, and started looking at what was going on. Windows Update had hit me earlier in the day and had not properly finished. It was acting up, and I had to hard stop it. I had purposely not wanted to do any updates, just to prevent this from happening. I had even made sure my demos were practiced and typing would not be needed. (I remembered the tip I have heard many times: never type in a demo.)

After several reboots, a firmware update, and more reboots, everything was working fine. All I could say on this day was: WTF…Really?

Now to put together the blog post on the talk I was going to give and figure out what my next talk will be on. Maybe give this talk at the RI SQL User Group.