Learning How to Log Ship

For my first real post, I wanted to share my experience with Microsoft SQL Server’s feature “Log Shipping.” This was one of my first major implementations and I am pretty proud of what it has become. I will not explain log shipping in this post, as that may be for another time. If you would like to learn more about log shipping, and how it works, check out the link below!

SQL Server Books Online


My implementation was to provide a solution that would give users the ability to generate reports off a database that was refreshed more frequently than every 24 hours. And that is all I can really tell you without risking my job. So, here are the details (from a high level, with sensitive information removed) of my setup.

Primary Server (also acting as the monitoring server)
Database: theRetroDBAdb
LS_Alert job set to 120 min
LS_Backup job set to every 30 min

Secondary Server
Database: theRetroDBAdbLS
Standby Mode Enabled with “Disconnect Users When Restoring” active
LS_Copy job runs every 30 min
LS_Restore job runs every hour, on the hour

Backup Server (accessed through the network)
Holds all backups created through maintenance plan
Under the “theRetroDBAdb” folder, two folders were created (Copy, Backup) and the files would go into their respective folders. 
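With the jobs above in place, the monitoring server (the primary, in my case) can summarize the whole chain in one call. A minimal sketch using the built-in monitor procedure:

```sql
-- Run on the monitoring server. Returns one row per log shipped
-- database with the time since the last backup, copy, and restore,
-- and whether an alert threshold has been crossed.
EXEC master.dbo.sp_help_log_shipping_monitor;
```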

Lessons Learned

  1. Remove the LS database from any backup maintenance plan; this includes logs, diffs, and fulls
    • This was something that I did not anticipate being a problem, but after a colleague suggested removing the log shipped db from the existing maintenance plan, it resolved one of our issues with the log shipping chain breaking. I had been told that the log shipped dbs were ignored as long as the primary db was backed up. Well, by removing the log shipped db from the maintenance plan, we no longer ran into issues…until VEEAM backups were executed.
    • I decided to reach out to the community to shed more light on the question of excluding the LS db from maintenance plans. You can follow the responses in this Stack Exchange question.
  2. If you have VEEAM backups, make sure application-aware processing handles logs (transaction logs) with COPY ONLY.
    • Another obscure issue we ran into was VEEAM backups taking place and breaking the log chain. Luckily, prior to my pilot, my Senior DBA had read up on the “Application-Aware Processing” feature in VEEAM. Under this feature is a setting that controls how transaction logs are processed on db servers. The setting was set to “Process transaction logs with this job”.
    • What we needed was “Perform copy only,” which lets SQL Server manage the transaction logs. Now, if we did not have log shipping on this server, I am not sure it would have been an issue. However, once we changed this setting we no longer saw log chain breakage over the weekend when the VEEAM backups would take place.
  3. Set the alert threshold to twice the restore interval
    • This is optional, but I noticed that some servers would kick off the alert because there was not a valid log backup to apply, even though the chain was not broken. If I let the process continue, the alerts would eventually stop once a valid log backup was found. This caused a few panics, followed by the quick realization that nothing was technically wrong with log shipping; it was just doing its job of informing me that no restore had taken place. Since our restore jobs were set to 1 hour, I adjusted the alert to 2 hours. This way, if the alert was triggered, something was probably actually wrong and I needed to look at it!
  4. Separate your data and log files (optional)
    • I chose to have the LS copy’s data file go to a separate drive from the default data file path, and did the same with the log file.
    • Again, this was specific to our infrastructure, but I can see the benefit of having the log shipped copy and its log file on separate drives from the secondary’s default location. For my company, it made sense due to space allocation constraints and the ability to delete/remove the disk drive log shipping was shipping to in order to save space for the host in an emergency. Since our log shipped dbs are used for reporting, they can be removed in the case of a storage emergency. However, I think it is overall a great practice to separate, if possible, your log shipping copies from the other databases housed on your secondary server.
  5. Entertain various restore timelines (30 min, 1 hour, 12 hours, etc.)
    • We were not given a specific interval at which these restore jobs needed to run. The existing solution, which utilized an SSIS package, created full backups and restored the database on the secondary server every 24 hours. Anything better than 24 hours was a win, so we started with 30 min. The 30 min restore ran into some contention, as other backup jobs were writing to the backup server that the log shipping copy and restore jobs were pulling from. So, we moved it to 45 min and then to an hour, making it easier for our reporting team to relay to our users the downtime (for restoring) of the data source. Even though the restore process takes literal seconds, as it’s just restoring a log backup, it was necessary to inform the users that if they tried to run a report at the top of the hour, it would more than likely fail.
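On lessons 1 and 2: when the chain did break, the quickest way to find the culprit was msdb’s backup history on the primary. A sketch along these lines (the database name comes from my setup above) lists recent backups, who took them, and whether they were copy-only:

```sql
-- Recent backups for the primary database. A type 'L' (log) backup
-- taken by something other than the LS_Backup job, with
-- is_copy_only = 0, is the usual log shipping chain breaker.
SELECT TOP (20)
       bs.backup_start_date,
       bs.type,            -- 'D' = full, 'I' = diff, 'L' = log
       bs.is_copy_only,
       bs.user_name,       -- who/what took the backup
       bmf.physical_device_name
FROM   msdb.dbo.backupset AS bs
JOIN   msdb.dbo.backupmediafamily AS bmf
       ON bmf.media_set_id = bs.media_set_id
WHERE  bs.database_name = N'theRetroDBAdb'
ORDER BY bs.backup_start_date DESC;
```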
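The fix from lesson 2, in SQL Server terms: if anything outside the LS_Backup job needs a log backup, COPY_ONLY is what keeps it from affecting the log backup sequence. The destination path here is hypothetical:

```sql
-- Ad-hoc log backup that does not interfere with the
-- log shipping backup chain (hypothetical path).
BACKUP LOG theRetroDBAdb
TO DISK = N'\\BackupServer\theRetroDBAdb\AdHoc\theRetroDBAdb_log.trn'
WITH COPY_ONLY, INIT;
```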
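The alert change from lesson 3 can be scripted rather than clicked through. A sketch, assuming the secondary setup above; the primary server name is a placeholder:

```sql
-- Run on the secondary server: raise the restore alert threshold to
-- twice the hourly restore schedule. 'PRIMARY01' is a placeholder.
EXEC master.dbo.sp_change_log_shipping_secondary_primary
     @primary_server    = N'PRIMARY01',
     @primary_database  = N'theRetroDBAdb',
     @restore_threshold = 120;   -- minutes
```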
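For lesson 4, the drive separation happens when the secondary is initialized. A minimal sketch; the logical file names and all paths here are hypothetical and would need to match your database:

```sql
-- Initialize the secondary with data and log on drives separate from
-- the instance defaults, in standby (read-only) mode.
-- Logical file names and paths are hypothetical.
RESTORE DATABASE theRetroDBAdbLS
FROM DISK = N'\\BackupServer\theRetroDBAdb\Backup\theRetroDBAdb_full.bak'
WITH MOVE N'theRetroDBAdb'     TO N'F:\LSData\theRetroDBAdbLS.mdf',
     MOVE N'theRetroDBAdb_log' TO N'G:\LSLog\theRetroDBAdbLS.ldf',
     STANDBY = N'G:\LSLog\theRetroDBAdbLS_standby.tuf';
```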

And that does it for this post. Have suggestions of your own? Or, do you have any good log shipping stories you would like to share? Post a comment below!

EXEC sp_firstpost;

Well, we are here! The first official post of the theRetroDBA site, and I haven’t given up on it yet. How about that for the “cancel culture” boomer?

All jokes aside, welcome to the site! However you got here, I am overly thrilled to have you. And welcome, trolls: please feel free to comment on my blog posts. I am more than happy to pour a nice glass of bourbon and laugh at the time you have wasted trying to seem cool.

Anywho, my name is Justin and I have too many hobbies. Funny enough, they all seem to connect in some way. Thus, here we are! You can find more about me on my “About Me” page. To fund these hobbies, I work during the day as a Database Administrator, using the power of Google to search the community for ways to save myself from failure and learning from a talented Senior DBA who likes to throw me into the deep end for fun.

While this post will not specifically have a “theme”, many of the future posts will. It is in my DNA to try and organize everything, so you will get used to my antics pretty quickly. Some posts will provide the community with tips/tricks/guides on how to do something in the database realm that I found helpful. Or I will post something regarding video games or the collection.

So join me in this adventure, which I am sure will be a fun ride. Hopefully you can take something away after reading some of my work, and subscribe through your favorite RSS reader! Yes, they still exist, and I am still holding on to RSS as the best way to consume text media on the internet.

Live Long and Prosper