PDW–The Architecture


In order to talk about PDW, you first need to know what MPP (Massively Parallel Processing) is.

MPP

(Massively Parallel Processing or Massively Parallel Processor) A multiprocessing architecture that uses many processors and a different programming paradigm than the common symmetric multiprocessing (SMP) found in today’s computer systems.
Self-Contained MPP Subsystems
Each CPU is a subsystem with its own memory and copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect. In order to use MPP effectively, an information processing problem must be breakable into pieces that can all be solved simultaneously. In scientific environments, certain simulations and mathematical problems can be split apart and each part processed at the same time. In the business world, a parallel data query (PDQ) divides a large database into pieces. For example, 26 CPUs could be used to perform a sequential search, each one searching one letter of the alphabet. To take advantage of more CPUs, the data have to be broken further into more parallel groups.
In contrast, adding CPUs in an SMP system increases performance in a more general manner. Applications that support parallel operations (multithreading) immediately take advantage of SMP, but performance gains are available to all applications simply because there are more processors. For example, four CPUs can be running four different applications.

Why?

    Customers need to execute more queries against a large database and retrieve more data faster, and a single point of access is no longer enough.

    The system must scale linearly as more data is added.


The SQL Server PDW Architecture

A SQL Server PDW appliance consists of at least two racks: a Control Rack and a Data Rack.


The Control Rack consists of six nodes: two Management Nodes, two Control Nodes, the Landing Zone, and the Backup Node.  Storage Area Network (SAN) storage is also included for the Control Node, Landing Zone, and Backup Node.  Additionally, the Control Rack ships with the dual InfiniBand, Ethernet, and Fibre Channel switches needed for the rack.

•The Management Node:

  • Responsible for managing the data nodes, handling failover instances and the addition of new data nodes, and monitoring rack status.

•The Control Node:

  • It is the brain of the PDW appliance: it distributes queries to the Compute Nodes inside the Data Rack, consolidates the results, and returns them to the application.
  • It also determines which Compute Node will host the insert operations.
  • So it really is the brain of the appliance.

•The Landing Zone

  • It is a staging area with its own SAN storage (around 1.8 TB) that holds incoming data.
  • It is the ETL layer used to load data into the appliance when required.

•The Backup Node

  • This node is responsible for managing the backup and restore operations of the appliance.
  • Sending data to a disaster recovery site is also part of its responsibilities.

Because the appliance is designed to work out of the box, it includes its own Active Directory that is housed within the Management Node.  There are several reasons why PDW needs Active Directory, one of which is that we use Microsoft Clustering Services (MCS) within the appliance, and MCS requires domain accounts for certain services to run.  Additionally, the Management Node includes High Performance Computing (HPC) that is used during the initial install and for ease in management of the nodes within the appliance.

The Control Node is where user requests for data will enter and exit the appliance.  On the control nodes, queries will be parsed and then sent to compute nodes for processing.  Additionally, the metadata of the appliance and the distributed databases is located here.  Essentially, the control node is the brains of the operation.  No persisted user data is located here; that all exists on the compute nodes within the data racks.  User data can be temporarily aggregated on the control node during query processing and is then dropped after it is sent back to the client.

The Landing Zone is essentially a large file server with plenty of SAN storage to provide a staging ground for loading data into the appliance.  You will be able to load data either through the command line with DWLoader or through SSIS, which now has a connector for PDW.  The Backup Zone is another large file server that is designed to hold backups of the distributed databases on the appliance.  Compute nodes will be able to back up to the Backup Node in parallel via the high-speed InfiniBand connections that connect the nodes.  From the backup node, organizations will be able to offload their backups through their normal procedures.  Backups of a PDW database can only be restored to another PDW appliance with at least as many compute nodes as the database had when it was backed up.

If the Control Nodes in the Control Rack are considered the brains of the operation, the Compute Nodes in the Data Rack are certainly the brawn.  It is here within the Data Rack that all user data is stored and processed during query execution.  Each Data Rack has between 8 and 10 compute nodes.  Additionally, the Data Rack uses Microsoft Failover Clustering to gain high availability.  This is accomplished by having a spare node within the rack that acts as a passive node within the cluster.  Essentially, each compute node has its affinity set to fail over to the spare node in the event of a failure on the active Compute Node.

Each compute node runs an instance of SQL Server and owns its own dedicated storage array.  User data is stored on the dedicated Storage Area Network, while the local disks on the Compute Node are used for TempDB.  The user data is stored in one of two configurations: replicated tables or distributed tables.  A replicated table is duplicated in whole on each Compute Node in the appliance.  When you think replicated tables in PDW, think small tables, usually dimension tables.  Distributed tables, on the other hand, are hash distributed across multiple nodes.  This horizontal partitioning breaks the table up into 8 partitions per compute node.  Thus, on a PDW appliance with eight compute nodes, a distributed table will have 64 physical distributions.  Each of these distributions (essentially a table in and of itself) has dedicated CPU and disk, and that is the essence of Massively Parallel Processing in PDW.  To swag some numbers, if you have a 1.6 TB fact table that you distribute across an eight-node data rack, you would have 64 individual 25 GB distributions, each with dedicated CPU and disk space.  This is how the appliance can break a large table down into manageable sizes to find the data needed to respond to queries.  I'll speak to this in more detail in the future.
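To make the two configurations concrete, here is a minimal sketch of what the DDL can look like, using the DISTRIBUTION table option from PDW's CREATE TABLE syntax; the table and column names are hypothetical.

-- Hypothetical small dimension table: a full copy is replicated to every Compute Node
CREATE TABLE DimProduct
(
    ProductKey  INT          NOT NULL,
    ProductName VARCHAR(100) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE);

-- Hypothetical large fact table: hash-distributed on a high-cardinality column,
-- so each Compute Node ends up owning 8 physical distributions of it
CREATE TABLE FactSales
(
    SalesKey    BIGINT NOT NULL,
    ProductKey  INT    NOT NULL,
    SalesAmount MONEY  NOT NULL
)
WITH (DISTRIBUTION = HASH(SalesKey));

Picking the hash column matters: a column with few distinct values will skew the data onto a handful of distributions and defeat the parallelism described above.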

If your data set is too large to store on a single data rack, you can add another.  By adding an additional data rack, you not only expand your storage but also significantly increase your processing power, and the data will be spread across additional distributions.  The current target size of an appliance is up to forty nodes, which would be 4-5 data racks, depending on the manufacturer.  Larger appliance sizes are expected in the future.

Retrieve the list of Top executed SQL Queries


You will come to a point where you need to know more about your currently running queries that consume a huge amount of memory and physical I/O. Below is a list of queries that I use to get the list of executed queries from different perspectives.

Top Executed Queries

-- Get Top 100 executed SP's ordered by execution count
-- NULLIF guards against a divide-by-zero error for plans cached less than a second ago
SELECT TOP 100 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',
       qs.execution_count / NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0) AS 'Calls/Second',
       qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
       qs.total_worker_time AS 'TotalWorkerTime',
       qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
       qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
       DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = DB_ID() -- Filter by current database
ORDER BY qs.execution_count DESC;

Top (CPU Usage) Queries

-- Get Top 20 executed SP's ordered by total worker time (CPU pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_worker_time AS 'TotalWorkerTime',
       qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
       qs.execution_count AS 'Execution Count',
       ISNULL(qs.execution_count / NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0), 0) AS 'Calls/Second',
       ISNULL(qs.total_elapsed_time / qs.execution_count, 0) AS 'AvgElapsedTime',
       qs.max_logical_reads, qs.max_logical_writes,
       DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = DB_ID() -- Filter by current database
ORDER BY qs.total_worker_time DESC;

Top (Logical Reads) Queries

-- Get Top 20 executed SP's ordered by logical reads (memory pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_logical_reads,
       qs.execution_count AS 'Execution Count', qs.total_logical_reads / qs.execution_count AS 'AvgLogicalReads',
       qs.execution_count / NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0) AS 'Calls/Second',
       qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
       qs.total_worker_time AS 'TotalWorkerTime',
       qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
       qs.total_logical_writes,
       qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
       DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = DB_ID() -- Filter by current database
ORDER BY qs.total_logical_reads DESC;

Connecting to PowerPivot Excel Sheets from PerformancePoint Services 2010


I always thought that the new BI Semantic Model was disconnected from PerformancePoint, until I came across an article on the MSDN PerformancePoint blog.

http://blogs.msdn.com/b/performancepoint/archive/2011/12/21/will-sql-server-quot-denali-quot-models-work-as-performancepoint-data-sources.aspx

This will allow you to connect the models that end users build and publish to PerformancePoint as data sources for your dashboards.

Read the article below:

Since SQL Server Denali hit the public download space, several people have tried using it to create PowerPivot models as data sources for PerformancePoint 2010. And why shouldn’t they? PerformancePoint works just fine in the scenario where you specify the reference to a PowerPivot model in the data source connection string for an Analysis Services data source.

The Problem

But just about every one of those cutting-edge users has wisely pointed out that something fundamentally breaks when trying this same scenario with a Denali RC0-based PowerPivot workbook. There are a couple of reasons for this, which I address below. The error that manifests in the event log indicates that the existing, installed provider is too old. And that’s true if you’re running a standard SharePoint 2010 SP1 installation. The error most people have been seeing is this:

Exception details: Microsoft.AnalysisServices.AdomdClient.AdomdUnknownResponseException: Unsupported data format : application/vnd.ms-excel.12

The good news is that a workaround exists.

The Solution

The easiest way to make this scenario work is to install the ADOMD.Net (version 10.5) data provider on your SharePoint farm. You can get this by downloading it from the SQL Server 2008 R2 SP1 Feature Pack (make sure that you download the version appropriate for your server architecture). Version 10.5 of this provider is backward compatible with the new version 11.0. And because it’s a minor version, it will overwrite the version 10.0 provider, which is the major version that PerformancePoint looks for at runtime. This latter notion explains the alert that the installer throws when you begin the installation. Go ahead and click "Yes".

The Detail

I will use the example of a 1M farm in this case. SharePoint’s pre-requisite installer will install both the 2005 and 2008 versions of ADOMD.Net. This ensures that Analysis Services data connections work smoothly out of the box. If you look in add/remove programs on your SharePoint server, you can see the provider versions. Version 10.1 is the one that PerformancePoint will use when connecting to the PowerPivot workbook when you use it as a data source.

When you install SQL Server Denali, it will install the version 11.0 ADOMD data provider. You can see this as well by looking at the list of assemblies on the server.

The problem is that there is no policy telling the request to use the latest version of the provider, and PerformancePoint is looking specifically for major version 10. So, in effect, version 11.0 is sitting there all alone and unused. However, when you install the ADOMD client version 10.5, things work because that version is compatible with the latest PowerPivot models, and PerformancePoint is still happy because it sees version 10, just like it expects to. Unlike major versions of the provider, which exist side-by-side, minor versions overwrite each other, which is why PerformancePoint grabs the right one even after you overwrite version 10.1 with version 10.5. So, after installing the SQL Server 2008 R2 SP1 update, when you look at the list in add/remove programs, you will see this:

Once you have updated your provider, PerformancePoint should be able to connect to the PowerPivot workbook as you would expect.

There are a couple of parting notes here. First, notice that the name of the cube, as interpreted by PerformancePoint, is now called "Model". You may recall that it was called "Sandbox" for PowerPivot 2010 models. Second, most of the published documentation indicates that you need to add the PROVIDER property to the PerformancePoint connection string, like this: PROVIDER=MSOLAP;DATA SOURCE=http://contoso/Documents/PowerPivot_Sample.xlsx. This string is a little redundant; there is no need to set the provider property because the provider is adomd.net. And adomd.net will create the right data access object without the user having to specify MSOLAP to connect to the data source.

Performance Monitoring: Important Counters You Need to Keep an Eye On


I believe this will be a long thread of posts discussing each one of the counters below and why you need to know more about them. A quick way to pull the SQL Server specific counters from T-SQL is shown after the list.

Memory – Available MBytes
Paging File – % Usage
Physical Disk – Avg. Disk sec/Read
Physical Disk – Avg. Disk sec/Write
Physical Disk – Disk Reads/sec
Physical Disk – Disk Writes/sec
Processor – % Processor Time
SQLServer: Buffer Manager – Buffer cache hit ratio
SQLServer: Buffer Manager – Page life expectancy
SQLServer: General Statistics – User Connections
SQLServer: Memory Manager – Memory Grants Pending
SQLServer: SQL Statistics – Batch Requests/sec
SQLServer: SQL Statistics – SQL Compilations/sec
SQLServer: SQL Statistics – SQL Re-Compilations/sec
System – Processor Queue Length
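
The operating system counters (Memory, Paging File, Physical Disk, Processor, System) have to be captured with Performance Monitor, but the SQLServer counters in the list can also be read straight from the engine. Below is a minimal sketch, assuming a default instance, that pulls them from sys.dm_os_performance_counters. Keep in mind that ratio counters such as Buffer cache hit ratio are only meaningful when divided by their matching 'base' counter, and the '/sec' counters are cumulative, so you need two samples to compute a per-second rate.

-- Pull the SQL Server counters from the list above
-- (default instance shown; named instances expose objects as 'MSSQL$<instance>:...')
SELECT RTRIM(object_name)   AS object_name,
       RTRIM(counter_name)  AS counter_name,
       RTRIM(instance_name) AS instance_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base',
                       'Page life expectancy', 'User Connections',
                       'Memory Grants Pending', 'Batch Requests/sec',
                       'SQL Compilations/sec', 'SQL Re-Compilations/sec')
ORDER BY object_name, counter_name;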

2. Collecting Data Files Info


The second step you need to handle when running an analysis on a SQL Server is your storage. 90% of your problems come from your I/O. Why? Storage is still the only mechanical part of your server: you can read pages at around 100 MB/s from the fastest HDD available. You need to know the load that your data files carry to see how you can partition them across different storage drives.

 

SELECT DB_NAME(states.database_id) AS DatabaseName,
       states.file_id,
       Files.physical_name,
       num_of_reads,
       num_of_bytes_read,
       (io_stall_read_ms / 1000) AS io_stall_read,
       num_of_writes,
       num_of_bytes_written,
       io_stall / 1000 AS io_stall,
       io_stall_write_ms / 1000 AS io_stall_write,
       (size_on_disk_bytes / 1024) / 1024 AS SizeInMBs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS states
INNER JOIN sys.master_files AS Files
        ON states.file_id = Files.file_id
       AND states.database_id = Files.database_id
ORDER BY DatabaseName;

The above query returns the database files, the number of reads and writes against each file, and the amount of bytes read from and written to each file.

 

Some things you need to keep in mind

1. The stall is the amount of time spent waiting in the queue to write to or read from the data file; the higher the stall, the more problems you have. See the sketch after this list for turning the stalls into average latencies.

2. The DMVs starts collecting the data from the moment you start the SQL Service and counters will reset once you restart