Changing Backup Paths, Better

In the last week of February, I decided to try doing some smaller posts every week about something new I learned or found useful on the job. The goal is to share small tidbits of info that provide some sort of value for those seeking it! This week, it is about changing the default backup path for your database maintenance jobs!

There will come a time when you might need to change the default path for your backup jobs. For some of you, this might be a pretty straightforward task on a handful of servers. For the rest of us, it is not ideal to hop from one instance to the next, updating the default paths manually across several servers. Thankfully, our community has a solution (most of the time) for our daily struggles, and this situation is no different!

Identifying WHERE your backups are going

I know you are probably thinking, “How could ANYONE not know where their backups are going?” I will simply respond: please see any sysadmin who was dubbed an accidental DBA by their company and told to “just figure it out.” Plus, maybe you are new to the gig and really don’t have any guidance on where anything is! These things happen, people!

A quick solution would be to connect to your database servers and run the following command:

USE msdb;
GO
-- Find job steps whose command text mentions BACKUP,
-- along with the job and originating server they belong to
SELECT  j.job_id,
    s.srvname,
    j.name,
    js.step_id,
    js.command,
    j.enabled
FROM    dbo.sysjobs j
JOIN    dbo.sysjobsteps js
    ON  js.job_id = j.job_id
JOIN    master.dbo.sysservers s
    ON  s.srvid = j.originating_server_id
WHERE   js.command LIKE N'%BACKUP%';

This queries the job metadata in msdb, identifies job-step commands that mention a backup, and returns them to you. From this output you can see the job id, server name, job name, which step contains the backup command, and the command text itself. Unfortunately, I do not have a test server to show you the output of the command, so you will need to just trust me (or run it on your own server!).
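As a side note, your jobs are not the only place a default path lives; the instance itself has a default backup directory. On recent versions of SQL Server (an assumption on my part: I believe this property only exists on newer releases) you can read it like so:

-- Instance-level default backup directory
-- NOTE: assumes a SQL Server version recent enough to expose this property
SELECT SERVERPROPERTY('InstanceDefaultBackupPath') AS default_backup_path;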

Change the backup path!

Now, the next step: the following script makes changing default backup paths an absolute BREEZE.

USE msdb;
DECLARE @Find nvarchar(max);
DECLARE @Replace nvarchar(max);
DECLARE @DebugOnly bit;
SET @Find = N'\\SERVERA\Backups';
SET @Replace = N'\\SERVERB\Backups';
SET @DebugOnly = 1;
IF OBJECT_ID(N'tempdb..#excludeJobs', N'U') IS NOT NULL
BEGIN
    DROP TABLE #excludeJobs;
END
CREATE TABLE #excludeJobs
(
    JobName sysname NOT NULL
        PRIMARY KEY CLUSTERED
);

INSERT INTO #excludeJobs (JobName)
VALUES ('The Name of a job you want to skip');

IF OBJECT_ID(N'tempdb..#deets', N'U') IS NOT NULL
DROP TABLE #deets;
CREATE TABLE #deets 
(
    JobName sysname NOT NULL
    , StepName sysname NOT NULL
    , OldCommand nvarchar(max) NOT NULL
    , NewCommand nvarchar(max) NOT NULL
    , PRIMARY KEY (JobName, StepName)
);
DECLARE @JobName sysname;
DECLARE @StepName sysname;
DECLARE @StepID int;
DECLARE @Command nvarchar(max);
DECLARE @NewCommand nvarchar(max);

BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE cur CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY 
    FOR
    SELECT sj.name
        , sjs.step_name
        , sjs.step_id
        , sjs.command
    FROM dbo.sysjobsteps sjs
        INNER JOIN dbo.sysjobs sj ON sjs.job_id = sj.job_id
    WHERE sjs.command LIKE N'%' + @Find + N'%' ESCAPE N'|' COLLATE SQL_Latin1_General_CP1_CI_AS
        AND sj.enabled = 1
        AND NOT EXISTS (
            SELECT 1
            FROM #excludeJobs ej
            WHERE ej.JobName = sj.name
            )
    ORDER BY sj.name
        , sjs.step_name;

    OPEN cur;
    FETCH NEXT FROM cur INTO @JobName
        , @StepName
        , @StepID
        , @Command;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @NewCommand = REPLACE(@Command, @Find, @Replace) COLLATE SQL_Latin1_General_CP1_CI_AS;
        INSERT INTO #deets (JobName, StepName, OldCommand, NewCommand)
        SELECT JobName = @JobName
            , StepName = @StepName
            , OldCommand = @Command
            , NewCommand = @NewCommand;

        IF @DebugOnly = 0
        BEGIN
            EXEC dbo.sp_update_jobstep @job_name = @JobName, @step_id = @StepID, @command = @NewCommand;
            PRINT N'Updated ' + @JobName;
        END
    
        FETCH NEXT FROM cur INTO @JobName
            , @StepName
            , @StepID
            , @Command;
    END
    CLOSE cur;
    DEALLOCATE cur;

    SELECT *
    FROM #deets;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
    BEGIN
        ROLLBACK TRANSACTION;
        PRINT N'Transaction rolled back';
    END
    PRINT ERROR_MESSAGE();
    PRINT ERROR_LINE();
END CATCH

Let’s focus on the variables you can modify, which make this script super helpful. In @Find, insert the backup path you have already identified, either on your own or via the script I provided before. Next, set @Replace to the NEW default path you would like all of your backup jobs to use. Finally, and this is my favorite part, there is @DebugOnly. By default, I leave this set to 1, as it will show you the output of running the script without actually changing the default path.

Once you have set your variables, run the script and take a look at the details provided. In the results, you can see the JobName, StepName, OldCommand, and NewCommand. By running the script in debug mode, you can verify that your new paths are correct. Once you are confident in the new path, set @DebugOnly to 0 and execute the script. I recommend running the script twice, as the second execution should come up blank, signifying that your change was made successfully.
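Alternatively (or additionally), you can search the job steps directly after the live run and confirm nothing still points at the old share. A sketch, reusing the \\SERVERA\Backups placeholder from the script as the old path:

USE msdb;
GO
-- Post-change check: any enabled job steps still referencing the OLD path?
-- \\SERVERA\Backups is the placeholder old path; substitute your own
SELECT  sj.name,
    sjs.step_id,
    sjs.command
FROM dbo.sysjobs sj
JOIN dbo.sysjobsteps sjs
    ON sjs.job_id = sj.job_id
WHERE sjs.command LIKE N'%\\SERVERA\Backups%'
    AND sj.enabled = 1;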

Do the backup jobs run successfully with the new path?

Since I cannot account for every situation, I will speak plainly: I strongly encourage you to run a system database backup just to verify that backups can be written to the new path. The last thing you want is to change the backup paths and then fail to verify that the database server can even write to the new location. My guess is you will receive an unwanted call when the database server starts sending out emails saying backups are failing…
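One low-impact way to test this (a minimal sketch, reusing the \\SERVERB\Backups placeholder as the new share) is a copy-only backup of a small system database straight to the new path:

-- COPY_ONLY keeps this test out of the normal backup chain;
-- the path below is a placeholder; substitute your new share
BACKUP DATABASE model
TO DISK = N'\\SERVERB\Backups\model_path_test.bak'
WITH COPY_ONLY, INIT;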

To be fair, the script I provided above is not my own; you can find the author’s write-up here:

Find-and-Replace for SQL Server Agent Jobs – SQL Server Science

Until next week! 

TempDB; Shrinking Responsibly

I just wanted to post this as a “lessons learned” for me, a very green DBA, as I wish I had known this before I ran into an issue in production. Yes, production. I hear all the people screaming, “You monster! How could you!” Believe me, I was in the same boat. Thankfully, many other people have done the same!

So, let us get to the point. There will come a time when someone runs a query that bloats your tempdb (and any of its separate data files) and you begin to receive alerts. If you have the option, the first response should ALWAYS be to just extend the drive, then look at why your tempdb expanded to fill it.
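On the “why” front, a quick way to see what is actually consuming tempdb (a sketch using the sys.dm_db_file_space_usage DMV):

-- Break tempdb consumption down into user objects, internal objects
-- (work tables, sorts, spools), and the version store; pages converted to MB
SELECT  SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;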

However, some of us may not be as lucky and have to find a way to make space. In this case, we shrink-responsibly!

My first move was to make sure I could even shrink the files. 
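A shrink can only reclaim space that is not in use, so a reasonable first check (a sketch using FILEPROPERTY) is to compare each file’s allocated size against what is actually used:

USE tempdb;
GO
-- Allocated vs. used space per tempdb file; size is stored in 8 KB pages
SELECT  name,
    size * 8 / 1024 AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;

With some headroom confirmed, on to the shrink itself: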

USE [tempdb];
GO
DBCC SHRINKFILE (Tempdev, 20480); -- target size is in megabytes (20480 MB = 20 GB)
GO
DBCC SHRINKFILE (Temp02, 20480);  -- Tempdev and Temp02 are the logical file names
GO

If all goes well, you should be good, right? Some people suggest that you do not need to restart the SQL Server service. However, in my case I needed to, as I had another piece to the problem I have not touched on yet. Regardless, I would strongly suggest that if you have already scheduled the downtime to shrink the tempdb files, it would not hurt to do the restart.

BUT! My problem was not a rogue transaction that expanded the tempdb files to max out the drive. No, the problem was my own doing: during a previous scheduled downtime, I had added more tempdb files to match the rest of our database servers, and I added TOO MANY. Yes…TOO many. So I was going back to fix my mistake. In this case, the SQL gods were making it a point to teach me a lesson, and here is why.

I had to shrink all of the files down and delete the extra one. So I shrank the files and went to delete the extra tempdb file. A file needs to be “empty” in order to be removed. So…

DBCC SHRINKFILE (LogicalFilename, EMPTYFILE); -- EMPTYFILE moves this file's contents to the other files so it can be removed
GO

And then the problem hit me:

DBCC SHRINKFILE: Page 3:44101 could not be moved because it is a work table page.
Msg 2555, Level 16, State 1, Line 1

There are work tables present. Of course!

Now, you could try to restart the instance to see if the work tables magically go away, but I wanted to reduce the number of times we restart the service. Therefore, I took to the community and came across some suggestions about clearing various caches. Let’s give it a shot!

DBCC DROPCLEANBUFFERS;        -- flush clean pages from the buffer pool
GO
DBCC FREEPROCCACHE;           -- clear the plan cache
GO
DBCC FREESESSIONCACHE;        -- clear the distributed query connection cache
GO
DBCC FREESYSTEMCACHE ('ALL'); -- release all unused cache entries
GO

But life isn’t easy, right? Why would it be! I tried again to empty the file and it did not work. Thankfully, I have a FANTASTIC mentor, who suggested, “Why don’t you just try to shrink it down to 1 MB, do a restart, and delete it.” At this point, I was happy for any suggestion. So…

DBCC SHRINKFILE (Temp05, 1); -- shrink the extra file down to a 1 MB target
GO

Then the restart of the service. So far so good, as the file DID shrink to close to 1 MB. Let’s try to delete it via SSMS…and SUCCESS! I could not believe it. I spent two nights, during off hours, trying to fix this, and the answer was as simple as shrinking the file to 1 MB, restarting, and deleting it.
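For anyone who prefers a script over clicking through SSMS, the delete step is equivalent to an ALTER DATABASE (Temp05 being the example logical file name from above; the file must be empty for this to succeed):

USE master;
GO
-- Remove the now-empty extra tempdb data file
ALTER DATABASE tempdb REMOVE FILE Temp05;
GO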

I would like to link two wonderful articles that I found in the process of battling tempdb, and you should definitely follow both Erin and Andy. They are fantastic people, and smart. Very smart.

Moral of the story: don’t shrink, add space. Because should you ever really shrink your SQL Server database? Andy has the answer for you.