
How to archive tables that contain a keyword from one database to another using T-Sql

In the previous article, How to identify and remove unused tables in SQL Server with T-Sql (included below on this page), I wrote about how to identify and remove unwanted tables.

In that tutorial I suggested prefixing the tables you want to remove with _DELETE_.

I also suggested you may want to archive the tables in some manner before you delete them. This could be because there is a chance someone will come looking for the data that was in a deleted table, or because something might break when the table is removed and you want to be able to put it back quickly.

If you have only a few tables you could script them out, but if you have a lot of tables that becomes a little unmanageable. Also, if the tables total 10 GB in size, the script to recreate them will be a great deal larger again.

An alternative method is to create an Archive database, copy the tables across to this target database and then delete the tables in the source database. You can then back up and drop the Archive database, saving the .bak file somewhere cheaper.

The script below will allow you to do just that. It prints the T-Sql to do the job rather than carrying it out, so it's completely safe to execute and review.

To use the script below, first create a target database.

Use the target database name for the variable value @TargetDb

Use the source database name for the variable value @SourceDb

The @KeyWord variable is used to gather all the tables that contain the string of choice, in the example below _DELETE_.

The @RemoveKeyWord variable is a flag that will remove the keyword string from the target database table name, e.g. _DELETE_Sales will become Sales.
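For instance, with @RemoveKeyWord set to 1 the script prints one statement like the following for each matching table (the database, schema and table names here are only illustrative):

SELECT *
INTO [Archive].[dbo].[Sales]
FROM [Source].[dbo].[_DELETE_Sales]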

IF OBJECT_ID('tempdb..#Table') IS NOT NULL
	DROP TABLE #Table
GO

DECLARE @RemoveKeyWord BIT
DECLARE @Id AS INT
DECLARE @KeyWord AS VARCHAR(256)
DECLARE @SourceDb AS SYSNAME
DECLARE @TargetDb AS SYSNAME
DECLARE @TableName AS SYSNAME
DECLARE @SchemaName AS SYSNAME
DECLARE @Sql AS VARCHAR(MAX)

SET @RemoveKeyWord = 1
SET @KeyWord = '_DELETE_'
SET @TargetDb = 'Archive'
SET @SourceDb = 'Source'
SET NOCOUNT ON

CREATE TABLE #Table (
	Id_Table INT IDENTITY(1, 1)
	,SchemaName SYSNAME
	,TableName SYSNAME
	);

SET @Sql = '
INSERT INTO #Table (
	SchemaName
	,TableName
	)
SELECT s.NAME
	,so.NAME
FROM ' + QUOTENAME(@SourceDb) + '.sys.tables AS so
LEFT JOIN ' + QUOTENAME(@SourceDb) + '.sys.schemas AS s ON so.schema_id = s.schema_id
WHERE so.NAME LIKE ' + '''' + '%' + @KeyWord + '%' + '''' + '
ORDER BY s.NAME ASC'

EXEC (@Sql)

SET @SchemaName = ''

WHILE @SchemaName IS NOT NULL
BEGIN
	SET @SchemaName = (
			SELECT MIN(SchemaName)
			FROM #Table
			WHERE SchemaName > @SchemaName
				AND SchemaName <> 'dbo'
			)

	PRINT 'USE ' + QUOTENAME(@TargetDb) + ';
GO
	
IF NOT EXISTS (SELECT * FROM ' + QUOTENAME(@TargetDb) + '.sys.schemas WHERE name = ' + '''' + @SchemaName + '''' + ')
  BEGIN
    EXEC (' + '''' + 'CREATE SCHEMA ' + QUOTENAME(@SchemaName) + ';' + '''' + ');
  END;
  
'
END

SET @Id = 1

WHILE @Id IS NOT NULL
BEGIN
	SELECT @TableName = TableName
		,@SchemaName = SchemaName
	FROM #Table
	WHERE Id_Table = @Id

	IF @RemoveKeyWord = 1
	BEGIN
		PRINT '
SELECT *
INTO ' + QUOTENAME(@TargetDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(REPLACE(@TableName, @KeyWord, '')) + ' 
FROM ' + QUOTENAME(@SourceDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
	END
	ELSE
	BEGIN
		PRINT '
SELECT *
INTO ' + QUOTENAME(@TargetDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ' 
FROM ' + QUOTENAME(@SourceDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
	END

	SET @Id = (
			SELECT MIN(Id_Table)
			FROM #Table
			WHERE Id_Table > @Id
			)
END

How to identify and remove unused tables in SQL Server with T-Sql

In a perfect world an organisation should never get itself into the situation where tables need to be identified as functionally obsolete and removed in bulk. Schema changes should be stepped through the cycle of development, test, staging and live, with developers cleaning up as they go, but we don't live in a perfect world, as you may have noticed.

What do I mean by functionally obsolete? The tables are no longer being interacted with by user-generated objects such as stored procedures, functions or views. There may also be tables that could be classified as business redundant. That is, they are still referenced frequently by user-generated objects but they no longer have a use to the business, e.g. one job may have been replaced by another without the former being disabled or dropped. Business redundant objects are more difficult to determine and finding them may require input from multiple stakeholders.

A good approach to removing objects is to rename them first. This makes it easier to put the environment back the way it was if any problems are encountered. After a set period of time, if there has been no impact on the overall environment, script out the object and then drop it. (Obviously do this in a test environment first if possible.)
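For example, the rename and restore statements take the following form (the database, schema and table names are illustrative; the script below generates these statements for you):

-- Rename in preparation for deletion
USE [MyDb]; EXEC sp_rename 'dbo.Sales', '_DELETE_Sales';

-- Restore the original name if something breaks
USE [MyDb]; EXEC sp_rename 'dbo._DELETE_Sales', 'Sales';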

The script below is mostly a lightweight SELECT statement that can be run on any environment. It does not execute any of the code it generates. It uses the sys.dm_db_index_usage_stats dynamic management view to determine when the tables were last interacted with, interaction being any of the following actions applied against the object: update, seek, scan or lookup.

Caveat: entries in this view are reset after a server restart. The DMV has also been known to be a little unreliable in earlier versions of SQL Server, with cases of the view being reset when a full index rebuild is carried out against a table. So don't execute any code without first reviewing it.

Tables with NULL values for the fields below should be tables that have not been referenced at all, or at least not since the last time the server was restarted.

LastUserUpdate
LastUserSeek
LastUserScan
LastUserLookup
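
As a quick standalone check, a minimal sketch along the lines of the following lists the tables in the current database with no usage recorded at all; the full script further down gathers much more detail across every database on the instance:

SELECT s.name AS SchemaName
	,t.name AS TableName
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON t.schema_id = s.schema_id
LEFT JOIN sys.dm_db_index_usage_stats AS ius ON ius.object_id = t.object_id
	AND ius.database_id = DB_ID()
WHERE ius.object_id IS NULL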

Removing these objects is the low-hanging fruit of cleaning up an environment. The script also provides stats on how many times these interactions happened and the size of each object. All of these stats together should help you determine whether an object is functionally redundant or business redundant and can be removed.

To aid further in the cleanup the script also creates the fields Action, Comments, Renamed, RenamedDate, RenameForDeletion, RestoreOriginalName, DropTable, DroppedDate.

The script output can then be copied and pasted into an Excel spreadsheet and used to coordinate and track the cleanup progress.

SET NOCOUNT ON

DECLARE @Database TABLE ([DbName] [sysname])
DECLARE @DbName AS [sysname]
DECLARE @Sql AS [varchar] (max)

IF OBJECT_ID('tempdb..#TableStats', 'U') IS NOT NULL
	DROP TABLE #TableStats

IF OBJECT_ID('tempdb..#IndexStats', 'U') IS NOT NULL
	DROP TABLE #IndexStats

IF OBJECT_ID('tempdb..#TableUsageStats', 'U') IS NOT NULL
	DROP TABLE #TableUsageStats

IF OBJECT_ID('tempdb..#TableSizeStats', 'U') IS NOT NULL
	DROP TABLE #TableSizeStats

CREATE TABLE #TableStats (
	[DbName] [sysname]
	,[SchemaName] [sysname]
	,[ObjectId] [bigint]
	,[TableName] [sysname]
	,[ModifiedDate] [datetime]
	);

CREATE TABLE #IndexStats (
	[DbName] [sysname]
	,[ObjectId] [bigint]
	,[HasIndex] [bit]
	);

CREATE TABLE #TableSizeStats (
	[DbName] [varchar](255) NULL
	,[SchemaName] [varchar](255) NULL
	,[ObjectId] [bigint]
	,[TableName] [varchar](255) NULL
	,[RowCount] [bigint] NULL
	,[AvailableSpacePercentage] [numeric](6, 2) NULL
	,[UnusedSpaceGb] [numeric](10, 3) NULL
	,[UsedSpaceGb] [numeric](10, 3) NULL
	,[TotalSpaceGb] [numeric](10, 3) NULL
	,[UnusedSpaceMb] [numeric](13, 3) NULL
	,[UsedSpaceMb] [numeric](13, 3) NULL
	,[TotalSpaceMb] [numeric](13, 3) NULL
	,[UnusedSpaceKb] [bigint] NULL
	,[UsedSpaceKb] [bigint] NULL
	,[TotalSpaceKb] [bigint] NULL
	)

CREATE TABLE #TableUsageStats (
	[DbName] [sysname]
	,[ObjectId] [bigint]
	,[TableName] [nvarchar](128) NULL
	,[LastUserUpdate] [datetime] NULL
	,[LastUserSeek] [datetime] NULL
	,[LastUserScan] [datetime] NULL
	,[LastUserLookup] [datetime] NULL
	,[UserUpdateCount] [bigint] NOT NULL
	,[UserSeekCount] [bigint] NOT NULL
	,[UserScanCount] [bigint] NOT NULL
	,[UserLookupCount] [bigint] NOT NULL
	);

SET @DbName = '';

INSERT INTO @Database (DbName)
SELECT NAME
FROM master.dbo.sysdatabases
WHERE NAME NOT IN (
		'tempdb'
		,'master'
		,'msdb'
		,'model'
		)
ORDER BY NAME ASC;

WHILE @DbName IS NOT NULL
BEGIN
	SET @DbName = (
			SELECT MIN(DbName)
			FROM @Database
			WHERE DbName > @DbName
			);
	SET @Sql = '
INSERT INTO #TableStats (
	DbName
	,schemaName
	,ObjectId
	,TableName
	,ModifiedDate
	)
SELECT DbName
	,SchemaName
	,ObjectId
	,TableName
	,ModifiedDate
FROM (
SELECT DISTINCT ' + '''' + @DbName + '''' + ' AS DbName
	,s.NAME AS SchemaName
	,t.object_id AS ObjectId
	,t.NAME AS TableName
	,t.modify_date AS ModifiedDate
FROM ' + QUOTENAME(@DbName) + '.sys.tables AS t
INNER JOIN ' + QUOTENAME(@DbName) + '.sys.schemas AS s ON t.schema_id = s.schema_id
LEFT JOIN ' + QUOTENAME(@DbName) + '.sys.extended_properties AS ep ON ep.major_id = t.[object_id] /*Exclude System Tables*/
WHERE t.NAME IS NOT NULL
	AND s.NAME IS NOT NULL
	AND (ep.[name] IS NULL OR ep.[name] <> ''microsoft_database_tools_support'')
	) AS rd
WHERE rd.SchemaName IS NOT NULL
ORDER BY DbName ASC
	,TableName ASC;
'

	EXEC (@Sql)

	SELECT @Sql = '
INSERT INTO #IndexStats (
	DbName
	,ObjectId
	,HasIndex
	)
SELECT ' + '''' + @DbName + '''' + ' AS DbName
	,OBJECT_ID AS ObjectId
	,IndexCheck AS HasIndex
FROM (
	SELECT DISTINCT OBJECT_ID
		,CASE 
			WHEN (
					[TYPE] > 0
					AND is_disabled = 0
					)
				THEN 1
			ELSE 0
			END AS IndexCheck
	FROM ' + QUOTENAME(@DbName) + '.sys.indexes
	) AS rd
WHERE rd.IndexCheck = 1
'

	EXEC (@Sql)

	SET @Sql = 
		'
INSERT INTO #TableSizeStats (
	[DbName]
	,[SchemaName]
	,[ObjectId]
	,[TableName]
	,[RowCount]
	,[AvailableSpacePercentage]
	,[UnusedSpaceGb]
	,[UsedSpaceGb]
	,[TotalSpaceGb]
	,[UnusedSpaceMb]
	,[UsedSpaceMb]
	,[TotalSpaceMb]
	,[UnusedSpaceKb]
	,[UsedSpaceKb]
	,[TotalSpaceKb]
	)
SELECT DISTINCT rd.[DbName]
	,rd.[SchemaName]
	,rd.[ObjectId]
	,rd.[TableName]
	,rd.[RowCount]
	,CASE 
		WHEN TotalSpaceKb > 0
			THEN ((UnusedSpaceKb * 100.0) / TotalSpaceKb)
		ELSE 0
		END AS AvailableSpacePercentage
	,CONVERT(NUMERIC(10, 3), (rd.[UnusedSpaceKb] / 1024.) / 1024.) AS UnusedSpaceGb
	,CONVERT(NUMERIC(10, 3), (rd.[UsedSpaceKb] / 1024.) / 1024.) AS UsedSpaceGb
	,CONVERT(NUMERIC(10, 3), (rd.[TotalSpaceKb] / 1024.) / 1024.) AS TotalSpaceGb
	,CONVERT(NUMERIC(13, 3), (rd.[UnusedSpaceKb] / 1024.)) AS UnusedSpaceMb
	,CONVERT(NUMERIC(13, 3), (rd.[UsedSpaceKb] / 1024.)) AS UsedSpaceMb
	,CONVERT(NUMERIC(13, 3), (rd.[TotalSpaceKb] / 1024.)) AS TotalSpaceMb
	,rd.[UnusedSpaceKb]
	,rd.[UsedSpaceKb]
	,rd.[TotalSpaceKb]
FROM (
	SELECT ' 
		+ '''' + @DbName + '''' + ' AS DbName
		,t.Object_id AS ObjectId
		,s.[name] AS [SchemaName]
		,t.[name] AS [TableName]
		,p.[rows] AS [RowCount]
		,SUM(a.[used_pages]) * 8 AS [UsedSpaceKb]
		,(SUM(a.[total_pages]) - SUM(a.[used_pages])) * 8 AS [UnusedSpaceKb]
		,SUM(a.[total_pages]) * 8 AS [TotalSpaceKb]
	FROM ' + QUOTENAME(@DbName) + '.sys.tables AS t
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.schemas AS s ON t.schema_id = s.schema_id
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.indexes AS i ON t.OBJECT_ID = i.object_id
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.partitions AS p ON i.object_id = p.OBJECT_ID
		AND i.[index_id] = p.[index_id]
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.allocation_units a ON p.[partition_id] = a.[container_id]
	WHERE t.[is_ms_shipped] = 0
		AND i.OBJECT_ID > 255
	GROUP BY s.[name]
		,t.[name]
		,t.[object_id]
		,p.[rows]
	) AS rd
ORDER BY DbName ASC
	,SchemaName ASC
	,TableName ASC;
'

	EXEC (@Sql)

	SET @Sql = '
INSERT INTO #TableUsageStats (
	[DbName]
	,[ObjectId]
	,[TableName]
	,[LastUserUpdate]
	,[LastUserSeek]
	,[LastUserScan]
	,[LastUserLookup]
	,[UserUpdateCount]
	,[UserSeekCount]
	,[UserScanCount]
	,[UserLookupCount]
	)
SELECT DbName
	,ObjectId
	,TableName
	,LastUserUpdate
	,LastUserSeek
	,LastUserScan
	,LastUserLookup
	,UserUpdateCount
	,UserSeekCount
	,UserScanCount
	,UserLookupCount
FROM (
	SELECT DISTINCT ROW_NUMBER() OVER (
			PARTITION BY ius.Object_Id ORDER BY last_user_update DESC
			) AS RN
		,' + '''' + @DbName + '''' + ' AS DbName
		,ius.OBJECT_ID AS ObjectId
		,o.NAME AS TableName
		,ius.last_user_update AS LastUserUpdate
		,ius.last_user_seek AS LastUserSeek
		,ius.last_user_scan AS LastUserScan
		,ius.last_user_lookup AS LastUserLookup
		,ius.user_updates AS UserUpdateCount
		,ius.user_seeks AS UserSeekCount
		,ius.user_scans AS UserScanCount
		,ius.user_lookups AS UserLookupCount
	FROM ' + QUOTENAME(@DbName) + 
		'.sys.dm_db_index_usage_stats AS ius
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.objects AS o ON ius.OBJECT_ID = o.OBJECT_ID
		AND o.NAME IS NOT NULL
	) AS rd
WHERE rd.RN = 1
ORDER BY rd.DbName ASC
	,rd.TableName ASC
	,rd.LastUserUpdate DESC
	,rd.LastUserSeek DESC
	,rd.LastUserScan DESC
	,rd.LastUserLookup DESC;
'

	EXEC (@Sql)
END;
GO

SELECT DISTINCT ROW_NUMBER() OVER (
		ORDER BY ts.[DbName] ASC
			,ts.[SchemaName] ASC
			,ts.[TableName] ASC
		) AS Row
	,ts.[DbName]
	,ts.[SchemaName]
	,ts.[TableName]
	,'' AS Action --Rename, keep etc.
	,'' AS Comments
	,'' AS Renamed --boolean flag
	,'' AS RenamedDate 
	,i.[HasIndex] --Tables without an index are heaps
	,tss.[TotalSpaceMb] AS TableSizeInMb
	,ts.[ModifiedDate]
	,tus.[LastUserUpdate]
	,tus.[LastUserSeek]
	,tus.[LastUserScan]
	,tus.[LastUserLookup]
	,tus.[UserUpdateCount]
	,tus.[UserSeekCount]
	,tus.[UserScanCount]
	,tus.[UserLookupCount]
	,tss.[AvailableSpacePercentage]
	,tss.[UnusedSpaceGb]
	,tss.[UsedSpaceGb]
	,tss.[TotalSpaceGb]
	,tss.[UnusedSpaceMb]
	,tss.[UsedSpaceMb]
	,tss.[TotalSpaceMb]
	,tss.[UnusedSpaceKb]
	,tss.[UsedSpaceKb]
	,tss.[TotalSpaceKb]
	,'USE ' + QUOTENAME(ts.[DbName]) + '; EXEC sp_rename ' + '''' + ts.[SchemaName] + '.' + ts.[TableName] + '''' + ', ' + '''' + '_DELETE_' + ts.[TableName] + '''' + ';' AS RenameForDeletion
	,'USE ' + QUOTENAME(ts.[DbName]) + '; EXEC sp_rename ' + '''' + ts.[SchemaName] + '.' + '_DELETE_' + ts.[TableName] + '''' + ', ' + '''' + ts.[TableName] + '''' + ';' AS RestoreOriginalName
	,'USE ' + QUOTENAME(ts.[DbName]) + '; DROP TABLE ' + QUOTENAME(ts.[SchemaName]) + '.' + '[' + '_DELETE_' + ts.[TableName] + ']' + ';' AS 'DropTable'
	,'' AS DroppedDate
FROM #TableStats AS ts
LEFT JOIN #TableSizeStats AS tss ON ts.ObjectId = tss.ObjectId
	AND tss.DbName = ts.DbName
LEFT JOIN #IndexStats AS i ON ts.ObjectId = i.ObjectId
	AND i.DbName = ts.DbName
LEFT JOIN #TableUsageStats AS tus ON ts.ObjectId = tus.ObjectId
	AND tus.DbName = ts.DbName
ORDER BY ts.[DbName] ASC
	,ts.[SchemaName] ASC
	,ts.[TableName] ASC

	-- REF: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql

How to disable all enabled user job schedules in SQL Server with T-Sql

I’d really recommend not running the output of this script on a live environment! This is just a little something I whipped up to test on a development environment.

Running the below script will output the T-Sql required to disable all enabled job schedules on the SQL Server instance. Just copy the output text, paste it into a new SSMS window and execute it to disable the schedules.
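The generated output takes the following form, one statement per enabled schedule (the schedule id shown is just an example); the full script follows:

EXEC msdb.dbo.sp_update_schedule @schedule_id = '12'
	,@enabled = 0;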

SET NOCOUNT ON

DECLARE @Schedule TABLE (ScheduleId INT)
DECLARE @ScheduleId AS INT
DECLARE @Sql AS VARCHAR(MAX)

SET @ScheduleId = 0

INSERT INTO @Schedule (ScheduleId)
SELECT s.schedule_id
FROM msdb.dbo.sysschedules AS s
WHERE s.[enabled] = 1
	AND s.owner_sid <> 0x01
ORDER BY s.NAME ASC

WHILE @ScheduleId IS NOT NULL
BEGIN
	SET @ScheduleId = (
			SELECT MIN(ScheduleId)
			FROM @Schedule
			WHERE ScheduleId > @ScheduleId
			)
	SET @Sql = '
EXEC msdb.dbo.sp_update_schedule @schedule_id = ' + '''' + CONVERT(VARCHAR(5), @ScheduleId) + '''' + '
	,@enabled = 0; 
'

	PRINT @Sql
END

How to disable all enabled jobs in SQL Server with T-Sql where the job name contains specific keyword text

Running the below script will output the T-Sql required to disable all enabled jobs on the SQL Server instance where the job name contains a specific keyword. Just copy the output text, paste it into a new SSMS window and execute it to disable the jobs.

For example, if you change the text value ‘PLACEHOLDER’, for the variable @KeyWord, to ‘Backup’ then any job which has ‘Backup’ in its name will have T-Sql generated to disable it. You could change the PRINT to EXEC but I wouldn’t recommend it.
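With @KeyWord set to 'Backup', the printed output would include a statement like the following for each matching job (the job name is illustrative):

EXEC msdb.dbo.sp_update_job @job_name = 'Nightly Backup - UserDb'
	,@enabled = 0;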

DECLARE @KeyWord AS VARCHAR(256)

SET @KeyWord = 'PLACEHOLDER'
SET NOCOUNT ON

DECLARE @Job TABLE (JobName SYSNAME)
DECLARE @JobName AS SYSNAME
DECLARE @Sql AS VARCHAR(MAX)

SET @JobName = ''

INSERT INTO @Job (JobName)
SELECT j.NAME
FROM msdb.dbo.sysjobs AS j
WHERE j.[enabled] = 1
	AND j.NAME LIKE '%' + @KeyWord + '%' 
ORDER BY j.NAME ASC

WHILE @JobName IS NOT NULL
BEGIN
	SET @JobName = (
			SELECT MIN(JobName)
			FROM @Job
			WHERE JobName > @JobName
			)
	SET @Sql = '
EXEC msdb.dbo.sp_update_job @job_name = ' + '''' + @JobName + '''' + '
	,@enabled = 0; 
'

	PRINT @Sql
END

How to run all enabled SQL Server Jobs with T-Sql

I’d really recommend not running the output of this script on a live environment! This is just a little something I whipped up to test changes on a development environment.

Running the script will output the T-Sql required to run all enabled jobs on the SQL Server instance. Just copy the text, paste it into a new SSMS window and execute it. You could change the PRINT to EXEC but I wouldn't recommend it.
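The printed output is simply one sp_start_job call per enabled job (the job name is illustrative):

EXEC msdb.dbo.sp_start_job @job_name = 'Nightly Backup - UserDb';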

SET NOCOUNT ON

DECLARE @Job TABLE (JobName SYSNAME)
DECLARE @JobName AS SYSNAME
DECLARE @Sql AS VARCHAR(MAX)

SET @JobName = ''

INSERT INTO @Job (JobName)
SELECT j.NAME
FROM msdb.dbo.sysjobs AS j
WHERE j.[enabled] = 1
ORDER BY j.NAME ASC

WHILE @JobName IS NOT NULL
BEGIN
	SET @JobName = (
			SELECT MIN(JobName)
			FROM @Job
			WHERE JobName > @JobName
			)
	SET @Sql = '
EXEC msdb.dbo.sp_start_job @job_name = ' + '''' + @JobName + '''' + '; 
'
	PRINT @Sql
END

How to delete all SQL Server Jobs that have no maintenance plan with T-Sql

I’d really recommend not running the output of this script on a live environment! This is just a little something I whipped up to clear out a development environment.

Running the script will output the T-Sql required to delete all jobs on the SQL Server instance. Jobs belonging to maintenance plans are not included, as maintenance plans need to be deleted first. Just copy the text, paste it into a new SSMS window and execute it. You could change the PRINT to EXEC but I wouldn't recommend it.
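Each job without a maintenance plan gets a statement like the following in the printed output (the job name is illustrative):

EXEC msdb.dbo.sp_delete_job @job_name = 'Ad hoc data load';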

SET NOCOUNT ON

DECLARE @Job TABLE (JobName SYSNAME)
DECLARE @JobName AS SYSNAME
DECLARE @Sql AS VARCHAR(MAX)

SET @JobName = ''

INSERT INTO @Job (JobName)
SELECT j.Name
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysmaintplan_subplans AS p ON j.job_id = p.job_id
WHERE p.subplan_id IS NULL
ORDER BY j.Name ASC

WHILE @JobName IS NOT NULL
BEGIN
	SET @JobName = (
			SELECT MIN(JobName)
			FROM @Job
			WHERE JobName > @JobName
			)
	SET @Sql = '
EXEC msdb.dbo.sp_delete_job @job_name = ' + '''' + @JobName + '''' + '; 
'
	PRINT @Sql
END

How to get MS SQL Server job history using a stored procedure

In a previous article I provided simple queries to return the server's job history. It's only natural that when you find yourself running the same script over and over again you'll eventually get around to turning it into a stored procedure.

Well today was the day.

You can use the script below to create the Stored Procedure usp_GetJobHistory.

To deploy the SP to your instance you’ll need to replace the text PLACEHOLDER with the name of the database you’ll be deploying to. Ideally you should have a utility or administration database rather than deploying to Master.

The SP has 5 parameters.

@DaysToPull (INT): any number. How many days of history you want, working back from today.
@Status (VARCHAR): one of 'All', 'Succeeded', 'Failed', 'Retried', 'Cancelled' or NULL. Limits the rows returned based on the status of the entry.
@SortOrder (VARCHAR): 'ASC' or 'DESC'. The occurrence order, i.e. oldest to newest or newest to oldest.
@JobName (VARCHAR): any text. Used in a LIKE operator to find jobs whose name contains that text.
@ReturnSql (BIT): 0, 1 or NULL. When set to 1 the procedure prints the SQL, which you can modify or run yourself, instead of executing it.

Example command:

Create the SQL code to return the job history for jobs containing the word 'Backup' that failed in the past day, sorted from newest to oldest.

EXEC dbo.usp_GetJobHistory @DaysToPull = 1
	,@SortOrder = 'DESC'
	,@Status = 'Failed'
	,@JobName = 'Backup'
	,@ReturnSql = 1

Tip: you could use this to create an SSRS report that could be published daily to notify stakeholders of any job failures.

-- REPLACE PLACEHOLDER
USE [PLACEHOLDER];
GO

IF OBJECT_ID('[usp_GetJobHistory]') IS NULL
	EXEC ('CREATE PROCEDURE dbo.[usp_GetJobHistory] AS SELECT 1')
GO

ALTER PROCEDURE [dbo].[usp_GetJobHistory] @DaysToPull INT
	,@Status VARCHAR(9) = NULL
	,@SortOrder VARCHAR(4) = NULL
	,@JobName VARCHAR(256) = NULL
	,@ReturnSql BIT = NULL
AS
BEGIN
	-- =============================================
	-- Author:      Bloggins
	-- Create date: 20170420
	-- Description: <Query to retrieve job history bypassing SSMS inbuilt viewer>
	-- Website: https://techtidbytes.wordpress.com/
	-- =============================================
	SET NOCOUNT ON

	BEGIN TRY
		-- Variable Declarations 
		DECLARE @PreviousDate DATETIME
		DECLARE @Year VARCHAR(4)
		DECLARE @Month VARCHAR(2)
		DECLARE @MonthPre VARCHAR(2)
		DECLARE @Day VARCHAR(2)
		DECLARE @DayPre VARCHAR(2)
		DECLARE @FinalDate INT
		DECLARE @StatusClause AS VARCHAR(255)
		DECLARE @Sql AS VARCHAR(MAX)

		IF @SortOrder IS NULL
			OR (
				@SortOrder <> 'ASC'
				AND @SortOrder <> 'DESC'
				)
		BEGIN
			SET @SortOrder = 'ASC'
		END

		IF @Status = 'All'
			OR @Status IS NULL
			OR (
				@Status <> 'All'
				AND @Status <> 'Failed'
				AND @Status <> 'Succeeded'
				AND @Status <> 'Retried'
				AND @Status <> 'Cancelled'
				)
		BEGIN
			SET @StatusClause = '0, 1, 2, 3'
		END
		ELSE IF @Status = 'Failed'
		BEGIN
			SET @StatusClause = '0'
		END
		ELSE IF @Status = 'Succeeded'
		BEGIN
			SET @StatusClause = '1'
		END
		ELSE IF @Status = 'Retried'
		BEGIN
			SET @StatusClause = '2'
		END
		ELSE IF @Status = 'Cancelled'
		BEGIN
			SET @StatusClause = '3'
		END

		SET @PreviousDate = DATEADD(dd, - @DaysToPull, GETDATE())
		SET @Year = DATEPART(yyyy, @PreviousDate)

		SELECT @MonthPre = CONVERT(VARCHAR(2), DATEPART(mm, @PreviousDate))

		SELECT @Month = RIGHT(CONVERT(VARCHAR, (@MonthPre + 1000000000)), 2)

		SELECT @DayPre = CONVERT(VARCHAR(2), DATEPART(dd, @PreviousDate))

		SELECT @Day = RIGHT(CONVERT(VARCHAR, (@DayPre + 1000000000)), 2)

		SET @FinalDate = CAST(@Year + @Month + @Day AS INT)
		/*RunDuration FORMAT (DD:HH:MM:SS)*/
		SET @Sql = '
SELECT ROW_NUMBER() OVER(ORDER BY h.instance_id ' + @SortOrder + ' ) AS Row
	,h.Server AS ServerName
	,s.database_name AS DbName
	,j.name AS JobName
	,s.step_name AS StepName
	,h.step_id AS StepId
	,CASE 
		WHEN h.run_status = 0
			THEN ''Failed''
		WHEN h.run_status = 1
			THEN ''Succeeded''
		WHEN h.run_status = 2
			THEN ''Retried''
		WHEN h.run_status = 3
			THEN ''Cancelled''
		END AS RunStatus
	,MSDB.DBO.AGENT_DATETIME(h.run_date, h.run_time) AS RunTime
	,STUFF(STUFF(STUFF(RIGHT(REPLICATE(''0'', 8) + CAST(h.run_duration AS VARCHAR(8)), 8), 3, 0, '':''), 6, 0, '':''), 9, 0, '':'') AS RunDuration
	,h.sql_severity AS SqlSeverity
	,h.message AS MessageReturned
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j ON h.job_id = j.job_id
INNER JOIN msdb.dbo.sysjobsteps s ON j.job_id = s.job_id
	AND h.step_id = s.step_id
WHERE h.run_status IN (' + @StatusClause + ')
	AND h.run_date > ' + CONVERT(VARCHAR(255), @FinalDate)

		IF @JobName IS NOT NULL
		BEGIN
			SET @Sql = @Sql + '
AND j.name LIKE ' + '''' + '%' + @JobName + '%' + '''' + ''
		END

		SET @Sql = @Sql + '
ORDER BY h.instance_id ' + @SortOrder

		IF @ReturnSql = 1
		BEGIN
			PRINT (@Sql)
		END
		ELSE
		BEGIN
			EXEC (@Sql)
		END
	END TRY

	BEGIN CATCH
		PRINT 'error!'

		DECLARE @error_number AS INTEGER
		DECLARE @error_message AS VARCHAR(400)

		SET @error_number = error_number()
		SET @error_message = left(error_message(), 400)

		PRINT 'error_message: ' + @error_message
	END CATCH
END

How to use a while Loop to Iterate through Databases

Say you have code you want to execute against every database on a SQL Server instance. You could use SQL Server's inbuilt sp_MSForEachDB, but I'm not a big fan of it because it is undocumented, so I'd always be concerned Microsoft might decide to kill it with any given patch or service pack update. (I know the likelihood of that is extremely low but I'm a risk averse kinda guy.)

I prefer to use the example below. It may not be the most efficient snippet of code available on the net but it’s good and simple and it’s not going anywhere unless I drop it.

SET NOCOUNT ON

DECLARE @Database TABLE (DbName SYSNAME)
DECLARE @DbName AS SYSNAME

SET @DbName = ''

INSERT INTO @Database (DbName)
SELECT NAME
FROM master.dbo.sysdatabases
WHERE NAME <> 'tempdb'
ORDER BY NAME ASC

WHILE @DbName IS NOT NULL
BEGIN
	SET @DbName = (
			SELECT MIN(DbName)
			FROM @Database
			WHERE DbName > @DbName
			)

	/*
	PUT CODE HERE
	EXAMPLE PRINT Database Name
	*/
	PRINT @DbName
END
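
As an illustration of the "PUT CODE HERE" section, you could declare an additional @Sql VARCHAR(MAX) variable at the top of the script and replace the PRINT with dynamic SQL that runs in the context of each database. A rough sketch, which counts the user tables in each database, would be:

	IF @DbName IS NOT NULL
	BEGIN
		SET @Sql = 'USE ' + QUOTENAME(@DbName) + ';
SELECT DB_NAME() AS DbName
	,COUNT(*) AS UserTableCount
FROM sys.tables;'

		EXEC (@Sql)
	END

The IS NOT NULL check covers the final pass of the loop, where @DbName ends up NULL.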

How to clear SQL Server backup and job history

SQL Server makes a record of each backup and restore it performs. This information is recorded across the following tables:

msdb.dbo.backupfile
msdb.dbo.backupfilegroup
msdb.dbo.backupmediafamily
msdb.dbo.backupmediaset
msdb.dbo.backupset
msdb.dbo.restorefile
msdb.dbo.restorefilegroup
msdb.dbo.restorehistory

There is no cleanup process that automatically clears these records down so, as you can imagine, over time the history can grow to be substantial.

Before you upgrade a SQL Server instance, if any of these tables exceeds 10,000 rows the upgrade can hang, so it is recommended that the older records be deleted first.

To reduce rows in all of these tables, you can run the sp_delete_backuphistory stored procedure. This procedure deletes the entries in all of the backup and restore history tables for backup sets older than a specified date. An example is shown below:

EXEC msdb..sp_delete_backuphistory '20161201 12:00:00.000'

Similarly, to remove records from msdb.dbo.sysjobhistory you can run the following:

EXEC msdb..sp_purge_jobhistory @oldest_date = '20161201 12:00:00.000'
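
Both procedures accept a cutoff date, so a retention-based cleanup is easy to script. A small sketch using a 90-day retention window (the retention period is illustrative) would be:

DECLARE @CutOffDate DATETIME

SET @CutOffDate = DATEADD(DAY, -90, GETDATE())

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @CutOffDate;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @CutOffDate;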

It is recommended you set up a job to clear these tables; however, how often you clear them requires some thought. For instance, you may want to keep, or archive, the job history for performance comparisons over time or as a record of job failures.

Another example: I use a stored procedure that automatically generates the TSQL to perform restores, and it relies on the records in the backup history tables. Therefore I need to be careful not to delete records that are within our backup file retention period, otherwise I would not be able to generate a restore script for those backup files using the stored procedure.

You might be reliant on these tables as well in some way so keep this in mind.

How to size transaction log files appropriately and reduce the number of virtual log files

Sizing transaction log files and resolving the issue of having too many virtual log files may or may not be caused by the same problem, but both have similar solutions revolving around the same steps. As such, the script provided below can resolve both, although how the script is applied differs slightly.

Appropriately sizing the Transaction Log (TLog) File

Ideally, TLog files should be created at their desired size at installation rather than being allowed to grow slowly over time.

Typically though the DBA will not know how large or frequent the transactions will be prior to the database becoming active in the production environment.

Or the DBA might be new to the organisation and may not be able to tell if the TLog file size is appropriate for the typical transactions the database handles or is the result of some abnormally large transactions or failed TLog backups.

All this can of course be investigated somewhat but if the database transactions are in the kilobytes and transaction frequency is low then you could also take the pragmatic approach outlined below:

  1. Backup the log until it gets truncated
  2. Shrink the log as much as possible
  3. Allow the file to grow
  4. Record the maximum size of the log
  5. Backup the log until it gets truncated
  6. Shrink the log as much as possible
  7. Manually expand the log to the maximum size recorded in step 4

Carrying out the steps above will also resolve any issues with excessive Virtual Log Files. This is dealt with in more detail below.

The script provided at the end of this article can be utilized to achieve the steps above, see the instructions “How to use the script” below.

Before proceeding with this approach, however, ensure the following points have been considered, as these will minimize the impact of growing the files on the databases:

  • That the autogrowth increment for the transaction log is suitably large; @DefaultFileGrowth is set at 128 MB and this value or higher is recommended.
  • Appropriate restriction settings for growth, preferably unrestricted
  • Instant file initialization is on.

Review the following article to set up instant file initialization.

https://www.brentozar.com/blitz/instant-file-initialization/

You may already know that the TLog file has been appropriately sized but that there are too many VLFs, in which case the following section outlines how to resolve the VLF issue.

Dealing with Virtual Log Files (VLFs)

Each Transaction Log (TLog) file is divided logically into smaller segments called VLFs. VLFs are the unit of truncation for the TLog. When a VLF no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.

SQL Server should not have an excessive number of VLFs inside the TLog. Having a large number of small VLFs can cause the following problems:

  • Slow down the recovery process on startup or after restoring a backup
  • Slow down TLog backups
  • Affect insert/update/delete performance

What constitutes too many VLFs is subjective; however, Microsoft flags 1,000+ VLFs as an issue.

The number of VLFs can grow based on the autogrowth settings for the TLog file and how often active transactions are written to disk. The problem of too many VLFs is often the result of accepting the default settings for initial log file size and autogrowth increment when the database is created.

To check how many VLFs a database has, run the below command against the database:

DBCC LOGINFO;

Each row returned represents a VLF.
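
If you want a count rather than the raw rows, the DBCC output can be captured into a temp table. A rough sketch, assuming the SQL Server 2012+ column layout for DBCC LOGINFO, would be:

IF OBJECT_ID('tempdb..#VlfInfo') IS NOT NULL
	DROP TABLE #VlfInfo

CREATE TABLE #VlfInfo (
	RecoveryUnitId INT
	,FileId INT
	,FileSize BIGINT
	,StartOffset BIGINT
	,FSeqNo BIGINT
	,[Status] INT
	,Parity INT
	,CreateLSN NUMERIC(38, 0)
	);

-- Capture the DBCC output then count the rows, i.e. the VLFs
INSERT INTO #VlfInfo
EXEC ('DBCC LOGINFO');

SELECT COUNT(*) AS VlfCount
FROM #VlfInfo;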

Resolving too many VLFs unfortunately requires some downtime. The fix is to shrink and regrow the log file. This is a blocking operation, and I would suggest that once it is complete you run a full backup of the database(s), as the shrink will break the continuity of the Full and TLog backup schedule, which will cause the TLog backups to fail.

Taking the following steps will reduce the number of VLFs in the TLog of a database:

  1. Record the maximum size of the log
  2. Backup the log until it gets truncated
  3. Shrink the log as much as possible
  4. Manually expand the log to the maximum size recorded in the first step

The script provided at the end of this article can be utilized to achieve the steps above, see the instructions “How to use the script” below.

How to use the script:

The script below will output the TSQL to run against all the databases in your instance. If you want to address every database on the instance run the entire script during scheduled downtime. To run against a specific database simply run the snippet that relates to that database.

The role of Backups

The TLog needs to be backed up and truncated prior to running the shrink to get the file as small as possible.

Backup Option 1:

The script will back up the TLog(s) if you provide a value for the @BackupLocation variable. This defaults to 'C:\SQL_Server\Backup\'.

Backup Option 2: 

Alternatively you can set @BackupLocation = NULL and run your own TLog backup maintenance plan or maintenance script prior to running this one.

Future Proofing by adding a margin for growth:

The script variable @Margin allows you to set a percentage for how much larger you want the new TLog file size to be, based on the current size. So for a database with a current TLog file size of 100 MB, running the script with @Margin = 10 will output TSQL to change the TLog file size to 110 MB. This increases the TLog file size by 10%, providing 10% of additional space before the TLog would need to grow again.

Appropriate TLog File Size Instructions:

Once you have backups arranged using one of the options above, set the variable @LogSizeMb = 128 and run the script. This will start the TLog file size at 128 MB.

Take the TSQL generated and run the snippets needed for particular databases, or run the entire script.

Let the file grow over a period that is representative of the database's typical transaction activity.

When the TLog file is no longer growing you have found the natural size of the TLog for that database.

Run the script again, this time setting @LogSizeMb = 0, and if appropriate provide a value for @Margin. The TSQL generated will then use the current TLog file size as is, plus the optional margin.

Run the snippets for the required database(s).

Dealing with Virtual Log Files (VLFs) Instructions:

Once you have backups arranged using one of the options above, ensure that the variable @LogSizeMb is set to zero and, if appropriate, provide a value for @Margin. The script can then be run to generate the TSQL needed to shrink and resize the TLog file, reducing the number of VLFs.

Run the snippets for the required database(s).

USE master;
GO

SET QUOTED_IDENTIFIER ON
GO

SET ARITHABORT ON
GO

DECLARE @DbName NVARCHAR(255)
	,@LogicalFileName NVARCHAR(255)
	,@DBRecoveryDesc VARCHAR(200)
	,@LogSizeMb INT
	,@DefaultFileGrowth VARCHAR(10)
	,@BackupLocation VARCHAR(255)
	,@GetDate VARCHAR(23)
	,@Margin AS DECIMAL(5, 2)

SELECT @Margin = 0
	,@LogSizeMb = 0
	,@BackupLocation = 'C:\SQL_Server\Backup\'
	,@DefaultFileGrowth = '128'
	,@GetDate = CONVERT(VARCHAR(23), GETDATE(), 126)
	,@GetDate = REPLACE(REPLACE(REPLACE(REPLACE(@GetDate, '-', ''), ':', ''), 'T', '_'), '.', '')

DECLARE DatabaseList CURSOR
FOR
SELECT d.NAME
	,d.recovery_model_desc AS DBRecoveryDesc
	,CASE 
		WHEN @LogSizeMb > 0
			THEN @LogSizeMb
		ELSE CEILING((mf.size * 8.0) / 1024) + (CEILING(((mf.size * 8.0) / 1024) * (@Margin / 100)))
		END AS LogSizeMb
	,@BackupLocation AS BackupLocation
	,@DefaultFileGrowth AS DefaultFileGrowth
FROM sys.databases AS d
LEFT JOIN sys.master_files AS mf ON d.database_Id = mf.database_Id
WHERE d.state_desc = 'ONLINE'
	AND d.is_read_only = 0
	AND d.database_id > 4
	AND type_desc = 'LOG'
ORDER BY NAME

OPEN DatabaseList

FETCH NEXT
FROM DatabaseList
INTO @DbName
	,@DBRecoveryDesc
	,@LogSizeMb
	,@BackupLocation
	,@DefaultFileGrowth

WHILE @@FETCH_STATUS = 0
BEGIN
	SET @LogicalFileName = (
			SELECT TOP 1 NAME
			FROM sys.master_files AS mf
			WHERE DB_NAME(database_id) = @DbName
				AND type_desc = 'LOG'
			)

	IF @DBRecoveryDesc = 'Full'
		AND @BackupLocation IS NOT NULL
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
			
			BACKUP LOG [' + @DbName + '] TO DISK=''' + @BackupLocation + @DbName + '_' + @GetDate + '_Log.trn' + ''';
			GO
			
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Full'
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
					
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Simple'
	BEGIN
		PRINT (
				'Use [' + @DbName + ']  
            GO  

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)    
            GO
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO
            '
				)
		PRINT '----------------------------------------------------------- '
	END

	FETCH NEXT
	FROM DatabaseList
	INTO @DbName
		,@DBRecoveryDesc
		,@LogSizeMb
		,@BackupLocation
		,@DefaultFileGrowth
END

CLOSE DatabaseList

DEALLOCATE DatabaseList