Solving pressure with an NLB

It has been a long time since I posted anything; needless to say, life has been busy. I am currently creating a workshop/office in my basement and a woodshop in my garage, which consume most of my time. Not to mention being a parent to two kids and two pets. However, let's talk about Microsoft's built-in Network Load Balancer and how it can be combined with SSRS Scale-Out to effectively balance your reporting load.

Growing pressure from the reports

I had the pleasure of deploying an SSRS Scale-Out setup for work recently, only to find that Microsoft SQL Server Reporting Services does not, inherently, load balance. In Microsoft's defense, it technically "does." The kicker is that it only offloads "scheduled" report generation to the other servers added to the deployment, while the main Reporting Server takes on all of those wonderful user requests. Luckily, there is another option that will help us achieve actual load balancing.

Enter, Microsoft’s NLB

Microsoft provides the NLB free in Windows Server, and it's relatively easy to set up. For SSRS, there are a few things you have to do for it to behave as intended. First, you need to have a working SSRS Scale-Out deployment. For this example, let's call the servers Reporting 1 and Reporting 2.

On Reporting 1, go to the start menu and open Server Manager. From here, click “Add roles and features” to install the Network Load Balancer. Progress through the menus until you get to “Features” and click the check box for “Network Load Balancing.” Proceed to install this feature.

When the installation is complete, you should see the NLB app available in your start menu, or you can search for it. Open the application and we are one step closer to creating our NLB Cluster.

Here, you will see the main view for the NLB Manager. Click on Cluster in the menu bar and select "New." In this window, you will enter the first server in the "Host" section, so in this case it's "Reporting 1"; hit Connect. If successful, you will see the Ethernet interface the server uses. Click Next.

In Host Parameters, your first server should already be displayed. You can add the other server IPs that you wish to load balance. Ensure you have NLB Manager installed on each device before doing so. I am not 100 percent positive that you need it installed on all devices, but I figure why risk it for the biscuit. After you have all the server IPs added, click Next.

In Cluster IP Addresses, click Add. Enter the IP address for the Cluster IP. I would stick with the same subnet as your servers, but the last IP octet should be something like ".10" if that is possible for your network. Once added, proceed to the next step.

With the Cluster IP set, it is time to enter the "Full Internet Name." Typically this will already be determined by your SSRS site name. To clarify, if your reporting server is "reporting.lots-PROD/reports", then the Full Internet Name will be "reporting.lots-PROD." In Cluster Operation, select "Multicast." Proceed by clicking Next.

Finally, you are presented with the option to set up port rules. I suggest deleting the ports selected by default. You will want to add the following ports: 1433 (SQL), 80 (HTTP), and 443 (HTTPS). Hit OK and the cluster will begin doing its thing. You might become disconnected shortly, but you will reconnect.

If everything worked as it should, you will see all the servers in your cluster with a status of Converged and a green icon indicating connection status.

Now, for this NLB to work for SSRS, there are a few configurations on the server side that need to be done. I will leave that for another post, but you can find the instructions here.

Hiatus-I am back!

After some life adjustments, changing hosting platforms, and I am sure many other things, I am back!


The focus of the site is still the same: a random DBA blogging about related topics and hobbies. Speaking of, I will be building a workshop/office in my house. Therefore, I plan on creating some time lapses of the work and posting them here (and maybe YouTube?). So stay tuned for that.

So same blog, same face, same content.


Installing SQL Server Sustainably

I heard you were in need of a quick tip to help you install SQL Server on those shiny new VMs you just spun up. Oh? You have 10 to install? Well, thankfully there is an easy way for you to make that happen. It is relatively easy and does not require a third-party application. The only disclaimer is that your target VMs must all have the same resource setup, as this method only applies when the target VMs are identical in nature. With that out of the way….

Not Another Install Video

Fire up your install GUI on one of the VMs you plan to install SQL Server on. Go through the GUI completely, but STOP before you hit install. Yes, I get it, you do have to do some work upfront but it will save you plenty of time. I promise. There is a configuration file that is generated once you have made it to the final step before install. This configuration file will contain all the settings from your GUI walkthrough, and also includes some features that are not set by the GUI. Find that config file, copy it somewhere and skim through all the settings to make sure you have everything set.

*This may be a good time to look at some of the settings you do not recognize to become more familiar with them. The good news, however, is that most of these settings can be changed after install if needed.

So, now, with this config file, if you have not already guessed, you can use it to install with the same settings on all of your new servers with ease. The GUI will still load and make you walk through some of the steps, but it cuts down on the time to finish. If you have any AD service users set to run the SQL Server services (which you should anyways as it is BEST PRACTICE), you will need to re-enter the password, as the config file does not store passwords for security purposes.
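For reference, an unattended run looks something like the line below. The file path is wherever you copied the generated file (setup usually drops it under the Setup Bootstrap\Log folder), and the password switches cover the AD service accounts mentioned above; treat this as a sketch, since your edition and accounts will differ.

```
:: Hypothetical paths and passwords; run from the SQL Server media folder
setup.exe /ConfigurationFile="C:\Temp\ConfigurationFile.ini" ^
    /SQLSVCPASSWORD="YourEngineServicePassword" ^
    /AGTSVCPASSWORD="YourAgentServicePassword" ^
    /IAcceptSQLServerLicenseTerms
```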

Now just do that, about 20 times

And there you have it: a quick tip for installing SQL Server on multiple servers that does not require you to install a third-party app in your environment.

*Not Included

So you didn’t know, eh?

Something I was recently made aware of is that Reporting Services is no longer updated with SQL Server CUs as of SQL Server 2017. What this means is that you have to download the Reporting Services installation media SEPARATELY. That, my friends, is frustrating.

My colleague found this and shared it with me, which then prompted us to upgrade our Reporting servers to the most recent version we could. Our hope is that by updating the Reporting Service, we might be able to resolve an issue that has been plaguing us for a couple of weeks.

The Problem!

Currently, we have a production Reporting site that houses a decent amount of reports. Nothing too crazy, but enough. Soon after we updated to SQL Server 2017 CU23, we had users complaining about their reports not loading, and then not being able to open any report in general. Now, when I say "open a report", I mean clicking into the folder that houses the report, and then clicking the report itself. The user does not even get to the screen where they can input their parameters; they just receive a loading screen that sits there indefinitely. We threw more RAM/CPU at the server, because it should have been provisioned higher in the first place, but that did not completely solve the issue. The workaround required a full server reboot, not just a services reboot, which we found odd.

Finally, after going down the rabbit hole of why Reporting Services is now a standalone install, the report server was updated to SQL Server 2017 CU24 and to the latest Reporting Services version for 2017. We have not received any reports of issues, but I am not sold on us having resolved the issue.

Did you look at the ReportServer Log?

Yes, we looked into the logs, and yes, we looked into the ExecutionLog view (ExecutionLog3) for details of the reports. However, the reports never actually fail, since they never even GET to passing the parameters. Maybe you can sense some frustration in this post?
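For anyone wanting to poke at the same view, here is a minimal sketch of the kind of query we ran. The column names come from the ExecutionLog3 view in the ReportServer catalog database; your catalog database may be named differently.

```sql
USE ReportServer;  -- your SSRS catalog database name may differ

-- Most recent report executions, with timing and status
SELECT TOP (50)
       ItemPath,
       UserName,
       TimeStart,
       TimeEnd,
       TimeRendering,
       Status
FROM dbo.ExecutionLog3
ORDER BY TimeStart DESC;
```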

Anyway, I am hoping that resolved the issue, but since it has been a while since I posted something SQL related, I figured I would share this. Full disclosure, I did not find this info myself; my colleague did, and they will remain unnamed for privacy reasons. Thank you!

Checking Who Be Active

After talking about Ola Hallengren's Maintenance Solution for SQL Server, I wanted those new/accidental DBAs out there to make sure they add a very helpful (and widely used) stored procedure to their toolbox.


Created by Adam Machanic, sp_whoisactive provides the user with all the active connections running at the time of execution. This can be very useful when trying to identify what may be inflicting pain on your SQL Server or, in some cases, help you figure out who keeps executing that pesky "SELECT * FROM". It is easy to use and well documented. You can read the documentation here or on Adam's GitHub page. He has recently moved his updates to GitHub, so if you want the most up to date version of sp_whoisactive, go there.

As always, I do want to point out a useful parameter that pulls the sleeping spids out as well, so you can see the connections that are still open, but just waiting for the “next task.” This is an over generalization but the idea is that these could still impact your SQL Server’s performance, even if they are not active currently.

EXEC sp_WhoIsActive @show_sleeping_spids = 2;

We can see that session_id 54 is in a "sleeping" status. While it is not currently querying our database server, at any time this user could execute a new query. Where this is helpful is in identifying a user, or even an application, with an open connection to the server that "recently" executed a query (details in the "sql_text" column of the results). We can also see performance data on the impact the query had and, to some extent, what to expect next time they run it. I am overgeneralizing here, and I am sure some of you are screaming at the screen that I am forgetting about X detail, Y context, etc. But for those out there not "classically" trained in Database Administration, this is what they need.
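While we are at it, a few other documented parameters I find myself reaching for; combine them as needed:

```sql
-- Pull query plans, extra session details, and flag lead blockers
EXEC sp_WhoIsActive
    @get_plans = 1,            -- include the execution plan XML per session
    @get_additional_info = 1,  -- extra session-level settings and context
    @find_block_leaders = 1;   -- count sessions blocked behind each spid
```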

Last Bit

Finally, my last tip: just make sure to install the procedure in the same database (whether that is your master database or a user database created for your tools) as the rest of the stored procedures you use for the maintenance of your servers. Hopefully this quick post helped someone out there. Thanks again for reading.

So You Need A Maintenance Plan?

I mean it this time when I say this will be a quick one. For most DBAs, this information is already a part of your bread and butter. But for some accidental DBAs, this information may be useful.

One of the first things that was drilled into my brain was to ensure that we had a solid maintenance plan for our SQL Servers, complete with a consistent backup schedule and troubleshooting tools to assist when a problem arose. Luckily, the SQL Community has developed many free, open source tools that are widely used across many industries. Today, we are looking at one of the most important tools of them all.

Ola Hallengren’s Maintenance Solution

This is the first thing I deploy to my new servers after SQL Server has been installed. It sets up various jobs (backups, integrity checks, index maintenance, etc.) that you can easily schedule to your heart's content. The parameters are WELL documented, and the community can assist if you run into an issue installing the plan. Out of the box, this is solid, but the customization is endless. Below are the general parameters that you SHOULD look at when installing the plan, as your needs will be different than mine.

USE [master] -- Specify the database in which the objects will be created.

1) I use a separate database, like a “DBADMIN”, to store all my tools in rather than master. But if you use master, just keep it set to this.

DECLARE @CreateJobs nvarchar(max)          = 'Y'         -- Specify whether jobs should be created.

2) If this is your first install, leave this to ‘Y’. If you are updating your plan, due to an updated version pushed out by Ola, make sure to switch this to ‘N’.

DECLARE @BackupDirectory nvarchar(max)     = NULL        -- Specify the backup root directory. If no directory is specified, the default backup directory is used.

3) I strongly suggest backing up your databases to another server, and put the network path here. This will set the default backup path for your jobs. So change NULL to that path.

DECLARE @CleanupTime int                   = NULL        -- Time in hours, after which backup files are deleted. If no time is specified, then no backup files are deleted.

4) For me, 72 hours works here. But that does not mean the needs of your client/customer will be the same. Make sure to ask them! And if they don't tell you, then maybe it is worth the effort to sit down with them to discover your RTO/RPO. (More on that in another post?)

DECLARE @OutputFileDirectory nvarchar(max) = NULL        -- Specify the output file directory. If no directory is specified, then the SQL Server error log directory is used.

5) Find a location on your server to output a separate log of this plan. You want this, trust me. With the path, you can always take a look at the log to see why a job failed in an easier to read format 🙂

DECLARE @LogToTable nvarchar(max)          = 'Y'         -- Log commands to a table.

6) I always leave this as “Y” as there are times where I need to see what commands were passed (in case I screwed something up) in a previous job execution. The table will be created under whatever database you have installed the plan in. 
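Once installed, the jobs wrap calls to Ola's DatabaseBackup procedure, so it is worth knowing what a call looks like. A rough sketch of a full backup run is below; the network path and retention are illustrative, not recommendations.

```sql
-- Illustrative call to Ola Hallengren's DatabaseBackup procedure;
-- the path and retention values are examples only.
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'\\BACKUPSERVER\Backups',  -- hypothetical network path
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CheckSum = 'Y',
    @CleanupTime = 72,                       -- hours, matching my example above
    @LogToTable = 'Y';
```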

That Wasn’t So HARD!?

After that, go ahead and execute the plan and now you have a solid maintenance plan! Go ahead and adjust the schedules to your liking, and make sure to include your operator in those notifications if the job fails! 

Changing Backup Paths, Better

Starting in the last week of February, I think I will try to do some smaller posts every week with something new I learned or found useful while conducting my job. The goal will be to submit small tidbits of info that will provide some sort of value for those seeking it! This week, it is going to be about changing your default backup path for your database maintenance jobs!

There will come a time when you might need to change the default path for your backup jobs. For some of you, this might be a pretty straightforward task on a handful of servers. For the rest of us, it is not ideal to hop from one instance to the next, updating the default paths manually across several servers. Thankfully for everyone, our community has a solution (most of the time) for our daily struggles. And this situation is no different!

Identifying WHERE your backups are going


I know you are probably thinking, “How could ANYONE not know where your backups are going?” I will simply respond by saying please see any sysadmin that was dubbed an accidental DBA by their company and told to “just figure it out.” Plus, maybe you are someone new to the gig and really don’t have any guidance to where anything is! These things happen people! 

A quick solution would be to connect to your database servers and run the following command:

USE msdb;
SELECT  j.job_id,
        s.srvname,
        j.name AS job_name,
        js.step_id,
        js.command
FROM    dbo.sysjobs j
JOIN    dbo.sysjobsteps js
    ON  js.job_id = j.job_id
JOIN    master.dbo.sysservers s
    ON  s.srvid = j.originating_server_id
WHERE   js.command LIKE N'%BACKUP%';

This will look through the SQL Server Agent jobs in msdb, identify job step commands where a backup is mentioned, and output the details to you. From this info, you can identify the job id, server name, name of the job, which step the backup command is in, and then the command itself. Unfortunately, I do not have a test server to show you the output of the command, so you will need to just trust me (or run it on your server!).

Change the backup path!

Now, the next step, with the use of the following script, makes changing default backup paths an absolute BREEZE. 

USE msdb;

DECLARE @Find nvarchar(max);
DECLARE @Replace nvarchar(max);
DECLARE @DebugOnly bit;

SET @Find = N'\\SERVERA\Backups';
SET @Replace = N'\\SERVERB\Backups';
SET @DebugOnly = 1;

IF OBJECT_ID(N'tempdb..#excludeJobs', N'U') IS NOT NULL
    DROP TABLE #excludeJobs;
CREATE TABLE #excludeJobs
(
    JobName sysname NOT NULL PRIMARY KEY
);

INSERT INTO #excludeJobs (JobName)
VALUES ('The Name of a job you want to skip');

IF OBJECT_ID(N'tempdb..#deets', N'U') IS NOT NULL
    DROP TABLE #deets;
CREATE TABLE #deets
(
    JobName sysname NOT NULL
    , StepName sysname NOT NULL
    , OldCommand nvarchar(max) NOT NULL
    , NewCommand nvarchar(max) NOT NULL
    , PRIMARY KEY (JobName, StepName)
);

DECLARE @JobName sysname;
DECLARE @StepName sysname;
DECLARE @StepID int;
DECLARE @Command nvarchar(max);
DECLARE @NewCommand nvarchar(max);

BEGIN TRANSACTION;

DECLARE cur CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY
FOR
    SELECT sj.name
        , sjs.step_name
        , sjs.step_id
        , sjs.command
    FROM dbo.sysjobsteps sjs
        INNER JOIN dbo.sysjobs sj ON sjs.job_id = sj.job_id
    WHERE sjs.command LIKE N'%' + @Find + N'%' ESCAPE N'|' COLLATE SQL_Latin1_General_CP1_CI_AS
        AND sj.enabled = 1
        AND NOT EXISTS (
            SELECT 1
            FROM #excludeJobs ej
            WHERE ej.JobName = sj.name
        )
    ORDER BY sj.name
        , sjs.step_name;

OPEN cur;

FETCH NEXT FROM cur INTO @JobName
    , @StepName
    , @StepID
    , @Command;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @NewCommand = REPLACE(@Command, @Find, @Replace) COLLATE SQL_Latin1_General_CP1_CI_AS;

    INSERT INTO #deets (JobName, StepName, OldCommand, NewCommand)
    SELECT JobName = @JobName
        , StepName = @StepName
        , OldCommand = @Command
        , NewCommand = @NewCommand;

    IF @DebugOnly = 0
    BEGIN
        EXEC dbo.sp_update_jobstep @job_name = @JobName, @step_id = @StepID, @command = @NewCommand;
        PRINT N'Updated ' + @JobName;
    END

    FETCH NEXT FROM cur INTO @JobName
        , @StepName
        , @StepID
        , @Command;
END

CLOSE cur;
DEALLOCATE cur;

SELECT *
FROM #deets;

IF @DebugOnly = 1 AND @@TRANCOUNT > 0
BEGIN
    ROLLBACK TRANSACTION;
    PRINT N'Transaction rolled back';
END
ELSE IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;

Let's focus on the parameters you can modify, which make this script super helpful. In "SET @Find", insert the backup path you have already identified, either on your own or via the script I provided before. Next, in "SET @Replace", add the NEW default path you would like to apply to all of your backup jobs. Finally, and this is my favorite part, is the "SET @DebugOnly" parameter. By default, I leave this set to 1, as it will show you the output of running the script without actually changing anything.

Once you have set your parameters, run the script and take a look at the details provided. In the results, you can see the JobName, StepName, OldCommand, and NewCommand. By running the script in debug mode, you can verify that your new paths are correct. Once you are confident in the new path, set @DebugOnly to 0 and execute the script. I recommend running the script twice, as the second execution should come up blank, signifying that your change was made successfully.

Do the backup jobs run successfully with the new path?

Since I cannot account for every situation, I will speak plainly: I strongly encourage you to run a system database backup just to verify that backups can write to the new path. The last thing you want is to change the backup paths and then fail to verify that the database server can even write to the new location. My guess is you will receive an unwanted call when the database server starts sending out emails saying backups are failing….
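One low-stakes way to run that sanity check, assuming the hypothetical \\SERVERB\Backups path from the script above, is a COPY_ONLY backup of a small system database straight to the new share:

```sql
-- COPY_ONLY keeps this test out of your normal backup chain;
-- the UNC path is the example "new" default from the script above.
BACKUP DATABASE msdb
TO DISK = N'\\SERVERB\Backups\msdb_path_test.bak'
WITH COPY_ONLY, CHECKSUM, INIT;
```

If this succeeds, you know the service account can write to the share before the scheduled jobs ever run.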

To be fair, the script I provided above is not my own; you can find the author's write-up here:

Find-and-Replace for SQL Server Agent Jobs – SQL Server Science

Until next week! 

Non-Sysadmins can run SPs Too!

One of the things I have been focusing on lately is improving the processes that are currently in place and automating tasks to give myself (and my colleague) more time to focus on larger projects. Most recently, I began writing some knowledge base articles for our help desk that would walk agents through basic troubleshooting steps. The hope was to have them gather some information up front (active connections, blocking sessions, etc.) to help speed up the resolution time for us. Since the agents are not system admins, they lack the necessary rights to gather certain data…or do they?


The following articles (Brent Ozar and Adam Machanic) walk through how to create certificates that allow non-sysadmins to execute specific stored procedures. In my example, I will use sp_whoisactive (the activity monitoring procedure), but ANY stored procedure will work.

Creating A Certificate

USE master;

CREATE CERTIFICATE sp_whoisactive_cert
ENCRYPTION BY PASSWORD = 'Pick a strong password and keep it for the signing step'
WITH SUBJECT = 'Who is Active',
--START_DATE = 'If you want the cert to activate at a certain time, enter it here',
EXPIRY_DATE = '9999-12-31';

Great, now we have the certificate created, we can move on to creating the user from the certificate.

Creating The User

CREATE LOGIN sp_whoisactive_login
FROM CERTIFICATE sp_whoisactive_cert;

Simple enough, right? The next bit is to grant permissions, and since I want this to be as "tied down" as possible, we are going to grant the login VIEW SERVER STATE. This grants the ability to return server state information from Dynamic Management Views and Functions, which can be used to monitor the health of a server and diagnose issues. I do not want the agents to have the ability to do anything more than that. More on Dynamic Management Views and Functions can be found here.

Granting Permissions

GRANT VIEW SERVER STATE
TO sp_whoisactive_login;

With the necessary rights applied, we are ready to move to the final step. We will need to add the signature to the actual stored procedure (sp_whoisactive in this case).

Adding A John Hancock

ADD SIGNATURE TO dbo.sp_WhoIsActive
BY CERTIFICATE sp_whoisactive_cert
WITH PASSWORD = 'This is the same password you used for the certificate creation';

I should mention, it helps if you have sp_whoisactive installed before you execute that last step. But after that, you are done!
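To round the example out, the agents still need EXECUTE on the signed procedure. The login and user names below are hypothetical stand-ins for whatever your help desk actually uses; you can also smoke test the whole chain by impersonating the non-sysadmin login:

```sql
-- Hypothetical names: DOMAIN\HelpDeskLogin is your agents' AD login or group
USE master;
CREATE USER HelpDeskUser FOR LOGIN [DOMAIN\HelpDeskLogin];
GRANT EXECUTE ON dbo.sp_WhoIsActive TO HelpDeskUser;

-- Smoke test: impersonate the login and run the proc as a non-sysadmin
EXECUTE AS LOGIN = 'DOMAIN\HelpDeskLogin';
EXEC dbo.sp_WhoIsActive;
REVERT;
```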

Final Remarks

Huge shout out to Brent and Adam for providing the great details on how to utilize this "feature" in SQL Server and making it easy to implement. As I said earlier, you can really implement this with any stored procedure. In a future post, I will be showing off a simple PowerShell script that will help you keep track of updating any of your "third-party" stored procedures across multiple servers. Until next time, thanks for reading, leave a comment below, or reach out to me if you have any questions!

Learning How to Log Ship

For my first real post, I wanted to share my experience with Microsoft SQL Server’s feature “Log Shipping.” This was one of my first major implementations and I am pretty proud of what it has become. I will not explain log shipping in this post, as that may be for another time. If you would like to learn more about log shipping, and how it works, check out the link below!

Books online


My task was to provide a solution that would give users the ability to generate reports off a database that was refreshed with data more frequently than every 24 hours. And that is all I can really tell you to avoid losing my job. So, here are the details (from a high level, removing sensitive information) on my setup.

Primary Server (also acting as the monitoring server)
Database: theRetroDBAdb
LS_Alert job set to 120 min
LS_Backup job set to every 30 min

Secondary Server
Database: theRetroDBAdbLS
Standby Mode Enabled with “Disconnect Users When Restoring” active
LS_Copy job runs every 30 min
LS_Restore job runs every hour, on the hour

Backup Server (accessed through the network)
Holds all backups created through maintenance plan
Under the “theRetroDBAdb” folder, two folders were created (Copy, Backup) and the files would go into their respective folders. 
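With a setup like this running, one quick way to check health from the monitor server (the primary, in my case) is the built-in monitor procedure, which summarizes the last backup, copy, and restore times per log shipped database:

```sql
-- Run on the monitor server; reports backup/copy/restore status
-- and latency for every log shipped database it tracks.
EXEC master.dbo.sp_help_log_shipping_monitor;
```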

Lessons Learned

  1. Remove the LS_database from any backup maintenance plan; this includes logs, diffs, and fulls
    • This was something that I did not anticipate being a problem, but after a colleague mentioned removing the log shipped db from the existing maintenance plan, it resolved one of our issues with the log shipping chain breaking. I had been told that the log shipped dbs were ignored as long as the primary db was backed up. Well, by removing the log shipped db from the maintenance plan, we no longer ran into issues…until VEEAM backups were executed
    • I decided to reach out to the community to shed more light on the question regarding excluding the LS db from maintenance plans. You can follow the responses in this Stack Exchange question.
  2. If you have VEEAM backups, make sure application-aware processing handles logs (transaction logs) with COPY ONLY.
    • Another obscure issue that we ran into was due to VEEAM backups taking place and breaking the log chain. Luckily, my Senior DBA had read something prior to my pilot of the “Application-Aware Processing” feature in VEEAM. Under this feature was a setting that handled the processing of the transaction logs on db servers. The setting was set to “Process transaction logs with this job”.  
    • What we needed was "Perform Copy Only", which allows SQL Server to manage the transaction logs. Now, if we did not have log shipping on this server, I am not sure it would have been an issue. However, once we changed this setting, we no longer saw log chain breakage over the weekend when the VEEAM backups would take place.
  3. Set the alert to twice the restore timetable 
    • This is optional, but I have noticed that some servers would kick off the alert because there was not a valid log backup to apply, even though the chain was not broken. If I let the process continue, the alerts would eventually stop once a valid log backup was found. This caused a few panics, and then the quick realization that nothing was technically wrong with log shipping; it was just doing its job, informing me that no restore had taken place. Since our restore jobs were set to 1 hour, I adjusted the alert to 2 hours. This way, if something was actually wrong and the alert was triggered, I probably needed to look at it!
  4. Separate your data and log files (optional)
    • I chose to have the LS copy go to separate drives from the default data file path, and did the same with the log.
    • Again, this was more specific to our infrastructure, but I can see the benefit of having the log shipped copy and log file on separate drives from the secondary's default location. For my company, it made sense due to space allocation restraints and the ability to delete/remove the disk drive log shipping was shipping to in order to save space for the host in an emergency. Since our log shipped dbs are used for reporting, they can be removed in the case of a storage emergency. However, I think it is overall a great practice to separate, if possible, your log shipping copies from the other databases housed on your secondary server.
  5. Entertain various restore timelines (30 min, 1 hour, 12 hours, etc.)
    • We were not given a specific increment for these restore jobs. The current solution we had in place, which utilized an SSIS package, created full backups and restored the database on the secondary server every 24 hours. Anything better than 24 hours was a win, so we started with 30 min. The 30 min restore ran into some contention, as other backup jobs were writing to the backup server that the log shipping copy and restore jobs were pulling from. So, we moved it up to 45 min and then an hour, making it easier for our reporting team to relay the downtime (for restoring) of the datasource to our users. Even though the restore process takes literal seconds, as it's just restoring a log backup, it was necessary to inform the users that a report run at the top of the hour would more than likely fail.

And that does it for this post. Have suggestions of your own? Or, do you have any good log shipping stories you would like to share? Post a comment below!