What’s New in T-SQL – SQL Server 2012 Series


With the release of SQL Server 2012, a lot of modifications have been added to T-SQL to help developers and DBAs improve the performance of their queries. Before we start, it is worth noting that the new SSMS (SQL Server Management Studio) is now based on the Visual Studio shell: the same look and feel, plus the powerful features of Visual Studio, will now help you write and manage your queries.

IntelliSense has also improved a lot in this version of SQL Server, which will save you plenty of time.

Offset and Fetch

In earlier versions of SQL Server, in order to implement paging in your application you had to handle it through your data grids or through complex, performance-impacting stored procedures.

In SQL Server 2012, new T-SQL clauses (OFFSET and FETCH) have been introduced to improve your paging methodology.

For example, to skip the first 10 rows of the Product table and fetch the next 5, you can use the following command.

-- Skip the first 10 rows and fetch the next 5 rows
SELECT ProductID, Name
FROM AdventureWorks.Production.Product
ORDER BY Name
OFFSET 10 ROWS
FETCH NEXT 5 ROWS ONLY
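
Since OFFSET and FETCH accept expressions as well as literals, a minimal page-based sketch (the @PageNumber and @PageSize variables are just illustrative, not part of the original example) looks like this:

-- Paging sketch: @PageNumber and @PageSize are illustrative variables
DECLARE @PageNumber INT = 3, @PageSize INT = 10;

SELECT ProductID, Name
FROM AdventureWorks.Production.Product
ORDER BY Name
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;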

Retrieve the list of Top executed SQL Queries


You will come to a point where you need to know more about your currently running queries that consume a huge amount of memory and physical I/O. Below is a list of queries that I use to look at the executed queries from different perspectives.

Top Executed Queries

-- Get Top 100 executed SP's ordered by execution count
SELECT TOP 100 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',
qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
qs.total_worker_time AS 'TotalWorkerTime',
qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.execution_count DESC

Top (CPU Usage) Queries

-- Get Top 20 executed SP's ordered by total worker time (CPU pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_worker_time AS 'TotalWorkerTime',
qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
qs.execution_count AS 'Execution Count',
ISNULL(qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()), 0) AS 'Calls/Second',
ISNULL(qs.total_elapsed_time/qs.execution_count, 0) AS 'AvgElapsedTime',
qs.max_logical_reads, qs.max_logical_writes,
DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_worker_time DESC

Top (Logical Reads) Queries

-- Get Top 20 executed SP's ordered by logical reads (memory pressure)
SELECT TOP 20 qt.text AS 'SP Name', total_logical_reads,
qs.execution_count AS 'Execution Count', total_logical_reads/qs.execution_count AS 'AvgLogicalReads',
qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
qs.total_worker_time AS 'TotalWorkerTime',
qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
qs.total_logical_writes,
qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY total_logical_reads DESC
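
If you also want to inspect the cached execution plan behind any of these statements, one simple variation (a sketch of mine, not part of the original set of queries) is to add a second CROSS APPLY against sys.dm_exec_query_plan:

-- Sketch: same pattern as above, but also returning the cached plan XML
SELECT TOP 20 qt.text AS 'SP Name',
       qs.execution_count AS 'Execution Count',
       qs.total_worker_time AS 'TotalWorkerTime',
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_worker_time DESC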

Connecting to PowerPivot Excel Sheets from PerformancePoint Services 2010


I always thought that the new BI Semantic Model was disconnected from PerformancePoint, until I came across an article on the MSDN PerformancePoint blog.

http://blogs.msdn.com/b/performancepoint/archive/2011/12/21/will-sql-server-quot-denali-quot-models-work-as-performancepoint-data-sources.aspx

This allows you to take the models that end users build and publish, and use them in PerformancePoint as data sources for your dashboards.

Read the article below:

Since SQL Server Denali hit the public download space, several people have tried using it to create PowerPivot models as data sources for PerformancePoint 2010. And why shouldn’t they? PerformancePoint works just fine in the scenario where you specify the reference to a PowerPivot model in the data source connection string for an Analysis Services data source.

The Problem

But just about every one of those cutting-edge users has wisely pointed out that something fundamentally breaks when trying this same scenario with a Denali RC0-based PowerPivot workbook. There are a couple of reasons for this, which I address below. The error that manifests in the event log indicates that the existing, installed provider is too old. And that’s true if you’re running a standard SharePoint 2010 SP1 installation. The error most people have been seeing is this:

Exception details: Microsoft.AnalysisServices.AdomdClient.AdomdUnknownResponseException: Unsupported data format : application/vnd.ms-excel.12

The good news is that a workaround exists.

The Solution

The easiest way to make this scenario work is to install the ADOMD.Net (version 10.5) data provider on your SharePoint farm. You can get this by downloading it from the SQL Server 2008 R2 SP1 Feature Pack (make sure that you download the version appropriate for your server architecture). Version 10.5 of this provider is backward compatible with the new version 11.0. And because it’s a minor version, it will overwrite the version 10.0 provider, which is the major version that PerformancePoint looks for at runtime. This latter notion explains the alert that the installer throws when you begin the installation. Go ahead and click, "Yes".

The Detail

I will use the example of a 1M farm in this case. SharePoint’s pre-requisite installer will install both the 2005 and 2008 versions of ADOMD.Net. This ensures that Analysis Services data connections work smoothly out of the box. If you look in add/remove programs on your SharePoint server, you can see the provider versions. Version 10.1 is the one that PerformancePoint will use when connecting to the PowerPivot workbook when you use it as a data source.

When you install SQL Server Denali, it will install the version 11.0 ADOMD data provider. You can see this as well by looking at the list of assemblies on the server.

The problem is that there is no policy telling the request to use the latest version of the provider, and PerformancePoint is looking specifically for major version 10. So, in effect, version 11.0 is sitting there all alone and unused. However, when you install the ADOMD client version 10.5, things work because that version is compatible with the latest PowerPivot models, and PerformancePoint is still happy because it sees version 10, just like it expects to. Unlike major versions of the provider, which exist side-by-side, minor versions overwrite each other, which is why PerformancePoint grabs the right one even after you overwrite version 10.1 with version 10.5. So, after installing the SQL Server 2008 R2 SP1 update, when you look at the list in add/remove programs, you will see this:

Once you have updated your provider, PerformancePoint should be able to connect to the PowerPivot workbook as you would expect.

There are a couple of parting notes here. First, notice that the name of the cube, as interpreted by PerformancePoint, is now called "Model". You may recall that it was called "Sandbox" for PowerPivot 2010 models. Second, most of the published documentation indicates that you need to add the PROVIDER property to the PerformancePoint connection string, like this: PROVIDER=MSOLAP;DATA SOURCE=http://contoso/Documents/PowerPivot_Sample.xlsx. This string is a little redundant; there is no need to set the provider property because the provider is adomd.net. And adomd.net will create the right data access object without the user having to specify MSOLAP to connect to the data source.
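
In other words, pointing the PerformancePoint data source at the workbook URL alone should be enough, along these lines (the document path is simply the example used above):

DATA SOURCE=http://contoso/Documents/PowerPivot_Sample.xlsx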

Performance Monitoring – Important Counters You Need to Keep an Eye On


I believe this will be a long thread of posts discussing each one of the counters below and why you need to know more about them. (A small T-SQL sketch for reading some of these counters follows the list.)

Memory – Available MBytes
Paging File – % Usage
Physical Disk – Avg. Disk sec/Read
Physical Disk – Avg. Disk sec/Write
Physical Disk – Disk Reads/sec
Physical Disk – Disk Writes/sec
Processor – % Processor Time
SQLServer: Buffer Manager – Buffer cache hit ratio
SQLServer: Buffer Manager – Page life expectancy
SQLServer: General Statistics – User Connections
SQLServer: Memory Manager – Memory Grants Pending
SQLServer: SQL Statistics – Batch Requests/sec
SQLServer: SQL Statistics – SQL Compilations/sec
SQLServer: SQL Statistics – SQL Re-Compilations/sec
System – Processor Queue Length
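
If you prefer to sample the SQL Server-specific counters from T-SQL rather than Performance Monitor, a minimal sketch against sys.dm_os_performance_counters is shown below. The counter names are assumed to match the list above; the OS-level counters (Memory, Paging File, Physical Disk, Processor, System) are not exposed through this DMV.

-- Sample a few of the SQL Server counters listed above directly from T-SQL
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'User Connections',
                       'Memory Grants Pending',
                       'Batch Requests/sec')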

Collecting Data Files Info


The second step you need to handle when running an analysis on a SQL Server is your storage. 90% of your problems come from I/O. Why? Storage is still the only mechanical part of your server: even the fastest HDD available reads pages at roughly 100 MB/s. You need to know the load on your data files so you can decide how to partition them across different storage drives.

 

SELECT DB_NAME(states.database_id) AS DatabaseName,
states.file_id,
Files.physical_name,
num_of_reads,
num_of_bytes_read,
(io_stall_read_ms/1000) AS io_stall_read,
num_of_writes,
num_of_bytes_written,
io_stall/1000 AS io_stall,
io_stall_write_ms/1000 AS io_stall_write,
(size_on_disk_bytes/1024)/1024 AS SizeInMBs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS states
INNER JOIN sys.master_files AS Files
    ON (states.file_id = Files.file_id AND states.database_id = Files.database_id)
ORDER BY DatabaseName

The above query returns the database files along with the number of reads and writes against each file and the number of bytes read from and written to it.

 

Some things you need to keep in mind

1. The stall is the amount of time requests spend waiting in the queue to read from or write to the data file; the higher the stall, the more problems you have.

2. The DMV starts collecting data from the moment the SQL Server service starts, and the counters reset once you restart it.
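
To make the stall figures easier to compare between busy and idle files, one possible extension (a sketch; the average-per-operation columns are my addition, with NULLIF guarding against division by zero) is to compute the average stall per read and per write:

-- Sketch: average stall (ms) per read and per write for each database file
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgStallPerReadMs,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgStallPerWriteMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.file_id = mf.file_id AND vfs.database_id = mf.database_id
ORDER BY AvgStallPerReadMs DESC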

Know your SQL Hardware Info


When it comes to the database engine, it is not recommended at all to let admins and users connect to the database server over Remote Desktop to do anything. For this reason, SQL Server includes by default a large set of DMVs (Dynamic Management Views) to help you with the required administrative tasks.

Below is, for me, one of the most important queries: it gives you an overview of your server's hardware status.

SELECT
(cpu_ticks/1024)/1024 AS CPU_Ticks_MB,
(ms_ticks/1000) AS Ticks_in_Sec,
cpu_count,
hyperthread_ratio,
(physical_memory_in_bytes/1024)/1024 AS RAM,
(virtual_memory_in_bytes/1024)/1024 AS Virtual_RAM,
max_workers_count,
os_priority_class,
bpool_committed AS MemoryCommitInBufferPool
FROM sys.dm_os_sys_info
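
Note that the query above uses the sys.dm_os_sys_info column names from SQL Server 2008/2008 R2. In SQL Server 2012 several of the memory columns were renamed, so an equivalent sketch (assuming the 2012 column names physical_memory_kb, virtual_memory_kb, and committed_kb) would be:

-- SQL Server 2012 variant: memory columns are reported in KB instead of bytes
SELECT
(cpu_ticks/1024)/1024 AS CPU_Ticks_MB,
(ms_ticks/1000) AS Ticks_in_Sec,
cpu_count,
hyperthread_ratio,
physical_memory_kb/1024 AS RAM,
virtual_memory_kb/1024 AS Virtual_RAM,
max_workers_count,
os_priority_class,
committed_kb/1024 AS Committed_Memory_MB
FROM sys.dm_os_sys_info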