SQL Server Trace Flags


Trace flags are used to modify session and server state, and they are very helpful when dealing with system diagnostics and maintenance. Trace flags can also be dangerous, so always think about why you need to enable or disable a specific trace flag.

Trace flags can be enabled and disabled in several ways: through SQL Server Configuration Manager (as startup parameters), through DBCC commands, or from the command line.

To enable a trace flag using DBCC (my favourite), use the following command:

DBCC TRACEON (3604)

To disable a trace flag:

DBCC TRACEOFF (3604)

To check a trace flag's status:

DBCC TRACESTATUS(3604)
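
By default, DBCC TRACEON enables a flag only for the current session; passing -1 as an extra argument applies it globally, and DBCC TRACESTATUS with -1 lists every flag that is currently active. A quick sketch using the documented syntax (3604 is just the example flag from above):

-- Enable trace flag 3604 globally rather than for this session only
DBCC TRACEON (3604, -1);

-- List all trace flags that are currently enabled, with their scope
DBCC TRACESTATUS (-1);

-- Disable it globally again
DBCC TRACEOFF (3604, -1);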

There are two different types of trace flags: documented and undocumented.

Below is a list of the most important ones.

    • Trace Flag 610

      •Trace flag 610 controls minimally logged inserts into indexed tables

      •Allows for high volume data loading

      •Less information is written to the transaction log

      •Transaction log file size can be greatly reduced

      •Introduced in SQL Server 2008

      •“Very fussy”

      •Documented:

      •Data Loading Performance Guide white paper

      http://msdn.microsoft.com/en-us/library/dd425070(v=sql.100).aspx

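As a rough illustration of how trace flag 610 is typically used during a bulk load, here is a minimal sketch. The table names (dbo.Sales, dbo.SalesStaging) are hypothetical, and whether the insert actually ends up minimally logged depends on the recovery model, the table's indexes, and the other conditions described in the white paper above:

-- Enable minimal logging for inserts into indexed tables (session scope)
DBCC TRACEON (610);

-- Bulk-move rows; TABLOCK helps the insert qualify for minimal logging
INSERT INTO dbo.Sales WITH (TABLOCK)
SELECT SaleId, SaleDate, Amount
FROM dbo.SalesStaging;

DBCC TRACEOFF (610);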

    • Trace Flag 834

      •Trace flag 834 allows SQL Server 2005 to use large-page allocations for the memory that is allocated for the buffer pool.

      •May prevent the server from starting if memory is fragmented and if large pages cannot be allocated

      •Best suited for servers that are dedicated to SQL Server 2005

      •Page size varies depending on the hardware platform

      •Page size varies from 2 MB to 16 MB.

      •Improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU

      •Only applies to 64-bit architecture

      •Startup

    • Trace Flag 835

      •Trace flag 835 enables “Lock Pages in Memory” support for SQL Server Standard Edition

      •Enables SQL Server to use AWE APIs for buffer pool allocation

      •Avoids potential performance issues due to trimming working set

      http://msdn.microsoft.com/en-us/library/ms187499.aspx

      •Introduced in:

      •SQL Server 2005 Service pack 3 Cumulative Update 4

      •SQL Server 2008 Service Pack 1 Cumulative Update 2

      •Only applies to 64-bit architecture

      •Startup
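
Trace flags 834 and 835 are startup-only flags: they cannot be turned on at runtime with DBCC TRACEON and must be supplied when the service starts, either as a -T startup parameter in SQL Server Configuration Manager or from the command line. A minimal sketch for the default instance (shown for 834; 835 works the same way):

REM Start the default instance with trace flag 834 enabled
net start MSSQLSERVER /T834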

    • Trace Flag 1211

      •Trace flag 1211 disables lock escalation based on memory pressure or number of locks

      •Database engine will not escalate row or page locks to table locks

      •Scope: Global | Session

      •Documented: BOL

      •Trace flag 1211 takes precedence over 1224

      •Microsoft recommends using 1224

      •Trace flag 1211 prevents escalation in every case, even under memory pressure

      •Helps avoid "out-of-locks" errors when many locks are being used.

      •Can generate an excessive number of locks

      •Can slow performance

      •Can cause 1204 errors (the engine cannot obtain a lock resource)

    • Trace Flag 3226

      •Trace flag 3226 prevents successful backup operations from being logged

      •By default SQL Server logs every successful backup operation to the ERRORLOG and the System event log

      •Frequent backup operations can cause log files to grow and make finding other messages harder

      •Documented: BOL

    • Trace Flag 4199 /* IMPORTANT */

      •Trace flag 4199 enables all the fixes that were previously made for the query processor under many trace flags

      •Policy:

      •Any hotfix that could potentially affect the execution plan of a query must be controlled by a trace flag

      •Except for fixes to bugs that can cause incorrect results or corruption

      •Helps avoid unexpected changes to the execution plan

      •Which means that virtually no one is running SQL Server with all the latest query processor fixes enabled

      •Scope: Session | Global

      •Documented: KB974006

      •Consider enabling it for fresh (“virgin”) SQL Server deployments

      •Microsoft strongly advises not to enable this trace flag unless you are affected by a specific issue it addresses
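
Trace flag 4199 can also be enabled for a single statement via the documented QUERYTRACEON query hint, which limits the newer optimizer fixes to queries you have actually tested. A small sketch (the table dbo.Orders is hypothetical):

-- Apply the query-processor fixes gated behind 4199 to this statement only
SELECT OrderId, CustomerId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20120101'
OPTION (QUERYTRACEON 4199);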

There is more you can find on the MSDN site, but for me these are the most important DOCUMENTED trace flags.

As for undocumented trace flags, I haven't tried any myself, but below you can find some of what you are looking for.

Trace Flag 3004

Most Database Administrators are aware of instant file initialization. In a nutshell, when instant file initialization is enabled the data files do not need to be zeroed out during creation. This can save an incredible amount of time during the restoration of VLDBs. As you can imagine, the zeroing out of a 1 TB data file can take a very long time.

Trace flag 3004 turns on information regarding instant file initialization. Enabling this trace flag alone, however, will not make that information visible: you also need to turn on trace flag 3605 to send the output to the error log.
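
To see this in practice, a minimal sketch might look like the following. The database name SandboxDb is hypothetical, and sp_readerrorlog is itself an undocumented (but widely used) procedure:

-- Log file-initialization details (3004) and route DBCC output to the error log (3605)
DBCC TRACEON (3004, 3605, -1);

-- Creating a database forces file creation, so zeroing/initialization messages appear
CREATE DATABASE SandboxDb;

-- Inspect the current error log for the initialization messages
EXEC sp_readerrorlog;

DBCC TRACEOFF (3004, 3605, -1);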

Trace Flag 3014

Trace flag 3014 provides detailed information regarding the steps performed during the backup and restore process. Normally, SQL Server only provides a limited amount of information in the error log regarding these processes. By enabling this trace flag you’ll be able to see some very detailed and interesting information.

Trace Flag 3604

Trace flag 3604 can be used under a variety of circumstances. If you’ve ever used DBCC IND or DBCC PAGE then you’ve probably already used trace flag 3604. It simply informs SQL Server to send some DBCC output information to the screen instead of the error log. In many cases, you have to use this trace flag to see any output at all.
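
For example, a typical (undocumented) DBCC PAGE session looks something like the sketch below; the database name AdventureWorks matches the samples later in this post, while the file ID 1, page ID 1, and print option 3 are just illustrative values:

-- Route DBCC output to the client instead of the error log
DBCC TRACEON (3604);

-- Dump page 1 of file 1 with full row detail (print option 3)
DBCC PAGE ('AdventureWorks', 1, 1, 3);

DBCC TRACEOFF (3604);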

Trace Flag 3605

Trace flag 3605 will send some DBCC output to the error log. This trace flag needs to be enabled to see the instant file initialization information made available by trace flag 3004.


What’s New in T-SQL SQL Server 2012 – Result Sets


Let's build a scenario.

Imagine you need to standardize your stored procedure output for dynamic batching; as a matter of fact, if the output of your stored procedure is different from what you expect, your query will fail.

So you need to standardize the output of your stored procedure. To do that, let's welcome the new SQL Server 2012 T-SQL feature: RESULT SETS.

RESULT SETS lets you format the stored procedure output to match your requirements, no matter how the data is shaped internally. Let's take a look, using the following syntax:

EXECUTE <batch_or_proc> WITH <result_sets_option>;

Sample code for that is:

EXECUTE GetEmployees WITH RESULT SETS
(
    (
        EmployeeId INT,
        EmployeeName VARCHAR(150)
    )
);

Some notes to consider when using RESULT SETS (a fuller sketch follows the list):

  • If you want to prevent a stored procedure from returning a result set, you can use the RESULT SETS NONE clause.
  • The WITH RESULT SETS option cannot be specified in an INSERT…EXEC statement.
  • The number of columns returned as part of the result set cannot be changed
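
Putting it together, here is a minimal end-to-end sketch; the procedure GetEmployees and the dbo.Employees table are hypothetical stand-ins for your own objects:

-- A procedure whose internal output shape we do not control
CREATE PROCEDURE GetEmployees
AS
BEGIN
    SELECT BusinessEntityID, FirstName + ' ' + LastName AS FullName
    FROM dbo.Employees;
END;
GO

-- Remap the output to the column names and types the caller expects
EXECUTE GetEmployees WITH RESULT SETS
(
    (
        EmployeeId   INT,
        EmployeeName VARCHAR(150)
    )
);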

What’s New in T-SQL SQL Server 2012 – Series


With the release of SQL Server 2012, a lot of modifications have been added to T-SQL to help developers and DBAs improve the performance of their queries. But before we start, we need to state that the new SSMS (SQL Server Management Studio) is now based on the Visual Studio shell; the same look and feel and the powerful features of Visual Studio will now help you write and manage your queries.

IntelliSense has also improved a lot in this version of SQL Server, which will save you a lot of time.

Offset and Fetch

In earlier versions of SQL Server, in order to implement paging in your application you needed to handle it through your data grids or through complex, performance-impacting stored procedures.

In 2012, new T-SQL clauses have been introduced to improve your paging methodology.

For example, to skip the first 10 rows of the Product table and fetch the next 5, you can use the following command.

-- Skip the first 10 rows and fetch the next 5 rows
SELECT ProductID, Name
FROM AdventureWorks.Production.Product
ORDER BY Name
OFFSET 10 ROWS
FETCH NEXT 5 ROWS ONLY;
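
OFFSET and FETCH pair naturally with page-number variables, since both clauses accept expressions. A minimal sketch, assuming the caller supplies a 1-based @PageNumber and a @PageSize:

DECLARE @PageNumber INT = 3, @PageSize INT = 10;

-- Return page @PageNumber of the products, @PageSize rows per page
SELECT ProductID, Name
FROM AdventureWorks.Production.Product
ORDER BY Name
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;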

Retrieve the list of Top executed SQL Queries


You will come to a point where you need to know more about your currently running queries that consume a huge amount of memory and physical I/O. Below is a list of queries that I use to retrieve the executed queries from different perspectives.

Top Executed Queries

-- Get Top 100 executed SP's ordered by execution count
SELECT TOP 100 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',
    -- NULLIF guards against a divide-by-zero for plans cached less than a second ago
    qs.execution_count / NULLIF(DATEDIFF(Second, qs.creation_time, GetDate()), 0) AS 'Calls/Second',
    qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.execution_count DESC

Top (CPU Usage) Queries

-- Get Top 20 executed SP's ordered by total worker time (CPU pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
    qs.execution_count AS 'Execution Count',
    -- NULLIF guards against a divide-by-zero for plans cached less than a second ago
    ISNULL(qs.execution_count / NULLIF(DATEDIFF(Second, qs.creation_time, GetDate()), 0), 0) AS 'Calls/Second',
    ISNULL(qs.total_elapsed_time / qs.execution_count, 0) AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_worker_time DESC

Top (Logical Reads) Queries

-- Get Top 20 executed SP's ordered by logical reads (memory pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_logical_reads,
    qs.execution_count AS 'Execution Count', qs.total_logical_reads / qs.execution_count AS 'AvgLogicalReads',
    -- NULLIF guards against a divide-by-zero for plans cached less than a second ago
    qs.execution_count / NULLIF(DATEDIFF(Second, qs.creation_time, GetDate()), 0) AS 'Calls/Second',
    qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
    qs.total_logical_writes,
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_logical_reads DESC

Connecting to Power Pivot Excel Sheets from Performance Point Services 2010


I always thought that the new BI Semantic Model was disconnected from PerformancePoint; then I came across an article on the MSDN PerformancePoint blog.

http://blogs.msdn.com/b/performancepoint/archive/2011/12/21/will-sql-server-quot-denali-quot-models-work-as-performancepoint-data-sources.aspx

This allows you to use the models that end users build and publish as PerformancePoint data sources for your dashboards.

Read the article below:

Since SQL Server Denali hit the public download space, several people have tried using it to create PowerPivot models as data sources for PerformancePoint 2010. And why shouldn’t they? PerformancePoint works just fine in the scenario where you specify the reference to a PowerPivot model in the data source connection string for an Analysis Services data source.

The Problem

But just about every one of those cutting-edge users has wisely pointed out that something fundamentally breaks when trying this same scenario with a Denali RC0-based PowerPivot workbook. There are a couple of reasons for this, which I address below. The error that manifests in the event log indicates that the existing, installed provider is too old. And that’s true if you’re running a standard SharePoint 2010 SP1 installation. The error most people have been seeing is this:

Exception details: Microsoft.AnalysisServices.AdomdClient.AdomdUnknownResponseException: Unsupported data format : application/vnd.ms-excel.12

The good news is that a workaround exists.

The Solution

The easiest way to make this scenario work is to install the ADOMD.Net (version 10.5) data provider on your SharePoint farm. You can get this by downloading it from the SQL Server 2008 R2 SP1 Feature Pack (make sure that you download the version appropriate for your server architecture). Version 10.5 of this provider is backward compatible with the new version 11.0. And because it’s a minor version, it will overwrite the version 10.0 provider, which is the major version that PerformancePoint looks for at runtime. This latter notion explains the alert that the installer throws when you begin the installation. Go ahead and click, "Yes".

The Detail

I will use the example of a 1M farm in this case. SharePoint’s pre-requisite installer will install both the 2005 and 2008 versions of ADOMD.Net. This ensures that Analysis Services data connections work smoothly out of the box. If you look in add/remove programs on your SharePoint server, you can see the provider versions. Version 10.1 is the one that PerformancePoint will use when connecting to the PowerPivot workbook when you use it as a data source.

When you install SQL Server Denali, it will install the version 11.0 ADOMD data provider. You can see this as well by looking at the list of assemblies on the server.

The problem is that there is no policy telling the request to use the latest version of the provider, and PerformancePoint is looking specifically for major version 10. So, in effect, version 11.0 is sitting there all alone and unused. However, when you install the ADOMD client version 10.5, things work because that version is compatible with the latest PowerPivot models, and PerformancePoint is still happy because it sees version 10, just like it expects to. Unlike major versions of the provider, which exist side-by-side, minor versions overwrite each other, which is why PerformancePoint grabs the right one even after you overwrite version 10.1 with version 10.5. So, after installing the SQL Server 2008 R2 SP1 update, when you look at the list in add/remove programs, you will see this:

Once you have updated your provider, PerformancePoint should be able to connect to the PowerPivot workbook as you would expect.

There are a couple of parting notes here. First, notice that the name of the cube, as interpreted by PerformancePoint, is now called "Model". You may recall that it was called "Sandbox" for PowerPivot 2010 models. Second, most of the published documentation indicates that you need to add the PROVIDER property to the PerformancePoint connection string, like this: PROVIDER=MSOLAP;DATA SOURCE=http://contoso/Documents/PowerPivot_Sample.xlsx. This string is a little redundant; there is no need to set the provider property because the provider is adomd.net. And adomd.net will create the right data access object without the user having to specify MSOLAP to connect to the data source.

Performance Monitoring – Important Counters You Need to Keep an Eye On


I believe this will be a long thread of posts discussing each of the counters below and why you need to know more about them. (A quick way to sample the SQL Server counters from T-SQL follows the list.)

Memory – Available MBytes
Paging File – % Usage
Physical Disk – Avg. Disk sec/Read
Physical Disk – Avg. Disk sec/Write
Physical Disk – Disk Reads/sec
Physical Disk – Disk Writes/sec
Processor – % Processor Time
SQLServer: Buffer Manager – Buffer cache hit ratio
SQLServer: Buffer Manager – Page life expectancy
SQLServer: General Statistics – User Connections
SQLServer: Memory Manager – Memory Grants Pending
SQLServer: SQL Statistics – Batch Requests/sec
SQLServer: SQL Statistics – Compilations/sec
SQLServer: SQL Statistics – Recompilations/sec
System – Processor Queue Length
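
Most of the SQL Server-specific counters in the list are also exposed through the sys.dm_os_performance_counters DMV, so you can sample them without opening Performance Monitor. A minimal sketch (object names differ slightly for named instances, which use the MSSQL$InstanceName prefix):

-- Sample two buffer manager counters from the list above
-- (ratio counters such as 'Buffer cache hit ratio' must be divided
--  by their matching '... base' counter to get a percentage)
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio', 'Page life expectancy');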

2. Collecting Data Files Info


The second step you need to handle when running an analysis on a SQL Server is your storage. 90% of your problems come from I/O. Why? Storage is still the only mechanical part of your server: you can read pages at roughly 100 MB/s from the fastest HDD available. You need to know the load on your data files to see how you can partition them across different storage drives.

 

SELECT DB_NAME(stats.database_id) AS DatabaseName,
    stats.file_id,
    files.physical_name,
    num_of_reads,
    num_of_bytes_read,
    (io_stall_read_ms / 1000) AS io_stall_read,
    num_of_writes,
    num_of_bytes_written,
    io_stall / 1000 AS io_stall,
    io_stall_write_ms / 1000 AS io_stall_write,
    (size_on_disk_bytes / 1024) / 1024 AS SizeInMBs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS stats
INNER JOIN sys.master_files AS files
    ON (stats.file_id = files.file_id AND stats.database_id = files.database_id)
ORDER BY DatabaseName

The above query returns the database files, the number of reads and writes against each file, and the number of bytes read from and written to each file.

 

Some things you need to keep in mind (see the latency sketch below):

1. The stall is the amount of time requests spent waiting in the queue to read from or write to the data file. The higher the stall, the more problems you have.

2. The DMV starts collecting data from the moment the SQL Server service starts, and the counters reset when you restart it.
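
To turn those stall numbers into something easier to compare across files, you can derive an average latency per operation from the same DMV. A minimal sketch (NULLIF avoids dividing by zero for files that have not been read from or written to yet):

-- Average milliseconds per read and per write for every database file
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
    mf.physical_name,
    vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC;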