How to Install Reporting Services in Integrated Mode after Installing SharePoint

  1. Click the Start button

  2. Click the Microsoft SharePoint 2010 Products group.

  3. Right-click SharePoint 2010 Management Shell click Run as administrator.

  4. Run the following PowerShell command to install the SharePoint service. No message is returned to the management shell when the command completes successfully; the shell simply displays a new prompt line:

    Install-SPRSService

  5. Run the following PowerShell command to install the service proxy:

    Install-SPRSServiceProxy

  6. Run the following PowerShell command to start the service, or see the following notes for instructions on starting the service from SharePoint Central Administration:


    Get-SPServiceInstance -All | Where-Object {$_.TypeName -like "SQL Server Reporting*"} | Start-SPServiceInstance

You can also start the service from SharePoint Central Administration rather than running the third PowerShell command. The following steps are also useful to verify that the service is running.

  1. In SharePoint Central Administration, click Manage Services on Server in the System Settings group.

  2. Find SQL Server Reporting Services Service and click Start in the Action column.

  3. The status of the Reporting Services service will change from Stopped to Started. If the Reporting Services service is not in the list, use PowerShell to install the service.


Stacked Bar Chart Component for Windows Phone 7

I was in the process of building a Windows Phone 7 dashboard for my team when I came across a requirement for a stacked bar chart. I started looking for ready-made components, free or paid, to do this for me. Most of the components I found were very expensive and slowed down my application when added, because they contain so many features that I didn't want. So I thought: why don't I design my own?

I've attached my project here; it contains a single stacked bar component. I hope it will be helpful for others.

The Charting Components


PDW–The Architecture

In order to talk about PDW, you first need to know what MPP (Massively Parallel Processing) is.


(Massively Parallel Processing or Massively Parallel Processor) A multiprocessing architecture that uses many processors and a different programming paradigm than the common symmetric multiprocessing (SMP) found in today’s computer systems.
Self-Contained MPP Subsystems
Each CPU is a subsystem with its own memory and copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect. In order to use MPP effectively, an information processing problem must be breakable into pieces that can all be solved simultaneously. In scientific environments, certain simulations and mathematical problems can be split apart and each part processed at the same time. In the business world, a parallel data query (PDQ) divides a large database into pieces. For example, 26 CPUs could be used to perform a sequential search, each one searching one letter of the alphabet. To take advantage of more CPUs, the data have to be broken further into more parallel groups.
In contrast, adding CPUs in an SMP system increases performance in a more general manner. Applications that support parallel operations (multithreading) immediately take advantage of SMP, but performance gains are available to all applications simply because there are more processors. For example, four CPUs can be running four different applications.

Why?

    Customers need to execute more queries against a large database, with the ability to retrieve more data faster; a single point of access is not enough.

    The system must scale linearly as more data is added.


The SQL Server PDW Architecture

A SQL Server PDW appliance consists of at least two racks: a Control Rack and a Data Rack.


The Control Rack consists of six nodes: two Management Nodes, two Control Nodes, the Landing Zone, and the Backup Node.  Storage Area Networks (SANs) are also included for the Control Node, Landing Zone, and Backup Node.  Additionally, the Control Rack ships with the dual InfiniBand, Ethernet, and Fiber switches needed for the rack.

•The Management Node:

–Is responsible for managing the data nodes, failover instances, and new data nodes, and for monitoring rack status.

•The Control Node:

  • It is the brain of the PDW appliance: it distributes queries to the compute nodes inside the Data Rack, consolidates the results, and returns them to the application.
  • It also manages which compute node will host the insert operations.
  • So it really is the brain.

•The Landing Zone

  • It is a staging area with its own SAN storage (around 1.8 TB) that holds incoming data.
  • It is the ETL layer used to load data into the appliance when required.

•The Backup and Restore Zone

  • This node is responsible for managing the backup and restore operations of the appliance.
  • Sending data to a disaster recovery site is also part of its responsibilities.

Because the appliance is designed to work out of the box, it includes its own Active Directory that is housed within the Management Node.  There are several reasons why PDW needs Active Directory, one of which is that we use Microsoft Clustering Services (MCS) within the appliance and MCS requires domain accounts for certain services to run.  Additionally the Management Node includes High Performance Computing (HPC) that is used during the initial install and for ease in management of the nodes within the appliance.

The Control Node is where user requests for data will enter and exit the appliance.  On the control nodes, queries will be parsed and then sent to compute nodes for processing.  Additionally, the metadata of the appliance and distributed databases is located here.  Essentially, the control node is the brains of the operation.  No persisted user data is located here; that all exists on the compute nodes within the data racks.  User data can be temporarily aggregated on the control node during query processing and then dropped after being sent back to a client.

The Landing Zone is essentially a large file server with plenty of SAN storage to provide a staging ground for loading data into the appliance.  You will be able to load data either through the command line with DWLoader or through SSIS, which now has a connector for PDW.  The Backup Zone is another large file server that is designed to hold backups of the distributed databases on the appliance.  Compute nodes will be able to back up to the Backup Node in parallel via the high-speed InfiniBand connections that connect the nodes.  From the Backup Node, organizations will be able to offload their backups through their normal procedures.  Backups of a PDW database can only be restored to another PDW appliance with at least as many compute nodes as the database had when backed up.

If the Control Nodes in the Control Rack are considered the brains of the operation, the Compute Nodes in the Data Rack are certainly the brawn.  It is here within the Data Rack that all user data is stored and processed during query execution.  Each Data Rack has between 8 and 10 compute nodes.  Additionally, the Data Rack uses Microsoft Failover Clustering to gain high availability.  This is accomplished by having a spare node within the rack that acts as a passive node within the cluster.  Essentially, each compute node has its affinity set to fail over to the spare node in the event of a failure on the active Compute Node.

Each compute node runs an instance of SQL Server and owns its own dedicated storage array.  User data is stored on the dedicated Storage Area Network.  The local disks on the Compute Node are used for TempDB.  The user data will be stored in one of two configurations: replicated tables or distributed tables.  A replicated table is duplicated in whole on each Compute Node in the appliance.  When you think replicated tables in PDW, think small tables, usually dimension tables.  Distributed tables, on the other hand, are hash-distributed across multiple nodes.  This horizontal partitioning breaks the table up into 8 partitions per compute node.  Thus, on a PDW appliance with eight compute nodes, a distributed table will have 64 physical distributions.  Each of these distributions (essentially a table in and of itself) has dedicated CPU and disk, and that is the essence of Massively Parallel Processing in PDW.  To swag some numbers, if you have a 1.6 TB fact table that you distribute across an eight-node data rack, you would have 64 individual 25 GB distributions with dedicated CPU and disk space.  This is how the appliance can break down a large table into manageable sizes to find the data needed to respond to queries.  I'll speak to this in more detail in the future.
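As a sketch of the two table layouts, here is roughly what the PDW table DDL looks like (the table and column names are illustrative, not from an actual appliance):

```sql
-- Small dimension table: copied in full to every compute node
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT          NOT NULL,
    ProductName VARCHAR(100) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE);

-- Large fact table: hash-distributed across the compute nodes
CREATE TABLE dbo.FactSales
(
    SalesKey   BIGINT NOT NULL,
    ProductKey INT    NOT NULL,
    Amount     MONEY  NOT NULL
)
WITH (DISTRIBUTION = HASH(SalesKey));
```

Joins between the replicated dimension and the distributed fact can then run locally on each compute node, which is why small dimension tables are typically replicated.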

If your data set is too large to store on a single data rack, you can add another.  By adding an additional data rack, you not only expand your storage but also significantly increase your processing power, and the data will be distributed across additional distributions.  The current target size of an appliance is up to forty nodes, which would be 4-5 data racks, depending on the manufacturer.  Larger appliance sizes are expected in the future.

SQL Server Trace Flags

Trace flags are used to modify session and server state, and they are very helpful when dealing with system diagnostics and maintenance. Trace flags are also dangerous, so always think carefully about why you need to enable or disable a specific trace flag.

Enabling and disabling trace flags can be done in multiple ways: with DBCC, as a startup parameter on the command line (-T), or through SQL Server Configuration Manager.

To enable a trace flag using DBCC (my favourite), use the following command:

    DBCC TRACEON (610, -1);    -- -1 applies the flag globally; omit it for session scope

To disable a trace flag:

    DBCC TRACEOFF (610, -1);

To check trace flag status:

    DBCC TRACESTATUS (-1);     -- lists all trace flags currently enabled

There are two different types of trace flags: documented and undocumented.

Below is a list of the most important ones.

    • Trace Flag 610

      •Trace flag 610 controls minimally logged inserts into indexed tables

      •Allows for high volume data loading

      •Less information is written to the transaction log

      •Transaction log file size can be greatly reduced

      •Introduced in SQL Server 2008

      •“Very fussy”


      •For details, see the Data Loading Performance Guide white paper
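As a hedged sketch of how trace flag 610 is typically used during a bulk load (the table names are illustrative, and, as the "very fussy" bullet warns, minimal logging only kicks in under specific conditions):

```sql
DBCC TRACEON (610, -1);    -- allow minimally logged inserts into indexed tables

-- Bulk insert into a table that already has a clustered index;
-- only rows landing on newly allocated pages are minimally logged
INSERT INTO dbo.FactSales WITH (TABLOCK)
SELECT SalesKey, ProductKey, Amount
FROM staging.FactSales;

DBCC TRACEOFF (610, -1);
```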


    • Trace Flag 834

      •Trace flag 834 allows SQL Server 2005 to use large-page allocations for the memory that is allocated for the buffer pool.

      •May prevent the server from starting if memory is fragmented and if large pages cannot be allocated

      •Best suited for servers that are dedicated to SQL Server 2005

      •Page size varies depending on the hardware platform, from 2 MB to 16 MB

      •Improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU

      •Only applies to 64-bit architecture
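A quick way to check whether large-page allocations are actually in use is the memory DMV below. Note this is an illustrative check: the `sys.dm_os_process_memory` DMV was introduced in SQL Server 2008, so it is not available on the 2005 builds the flag was first described for.

```sql
-- Shows how much process memory was allocated with large pages (in KB)
SELECT large_page_allocations_kb
FROM sys.dm_os_process_memory;
```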


    • Trace Flag 835

      •Trace flag 835 enables “Lock Pages in Memory” support for SQL Server Standard Edition

      •Enables SQL Server to use AWE APIs for buffer pool allocation

      •Avoids potential performance issues due to trimming working set

      •Introduced in:

      •SQL Server 2005 Service pack 3 Cumulative Update 4

      •SQL Server 2008 Service Pack 1 Cumulative Update 2

      •Only applies to 64-bit architecture


    • Trace Flag 1211

      •Trace flag 1211 disables lock escalation based on memory pressure or number of locks

      •Database engine will not escalate row or page locks to table locks

      •Scope: Global | Session

      •Documented: BOL

      •Trace flag 1211 takes precedence over 1224

      •Microsoft recommends using 1224

      •Trace flag 1211 prevents escalation in every case, even under memory pressure

      •Helps avoid "out-of-locks" errors when many locks are being used.

      •Can generate an excessive number of locks

      •Can slow performance

      •Can cause 1204 errors
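Since Microsoft recommends 1224 over 1211, a sketch of the two options side by side:

```sql
-- Trace flag 1211: disables lock escalation unconditionally,
-- even under memory pressure (can exhaust lock memory)
DBCC TRACEON (1211, -1);
DBCC TRACEOFF (1211, -1);

-- Preferred alternative: trace flag 1224 disables escalation based on
-- lock count, but still allows escalation under memory pressure
DBCC TRACEON (1224, -1);
```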

    • Trace Flag 3226

      •Trace flag 3226 prevents successful backup operations from being logged

      •By default SQL Server logs every successful backup operation to the ERRORLOG and the System event log

      •Frequent backup operations can cause log files to grow and make finding other messages harder

      •Documented: BOL

    • Trace Flag 4199 /* IMPORTANT */

      •Trace flag 4199 enables all the fixes that were previously made for the query processor under many trace flags


      •Any hotfix that could potentially affect the execution plan of a query must be controlled by a trace flag

      •Except for fixes to bugs that can cause incorrect results or corruption

      •Helps avoid unexpected changes to the execution plan

      •Which means that virtually no installation is necessarily running SQL Server with all the latest query processor fixes enabled

      •Scope: Session | Global

      •Documented: KB974006

      •Consider enabling for “virgin” SQL Server deployments?

      •Microsoft strongly advises against enabling this trace flag unless you are affected by one of the issues it addresses
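Rather than enabling 4199 globally, the fixes can be scoped to a single statement with the QUERYTRACEON query hint (the query itself is illustrative):

```sql
-- Enable the query-processor fixes only for this statement
SELECT OrderId, Total
FROM dbo.Orders
WHERE CustomerId = 42
OPTION (QUERYTRACEON 4199);
```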

There is more you can find on the MSDN site, but for me these are the most important DOCUMENTED trace flags.

As for undocumented trace flags, I haven't tried any myself, but below you can find some of what you may be looking for.

Trace Flag 3004

Most Database Administrators are aware of instant file initialization. In a nutshell, when instant file initialization is enabled the data files do not need to be zeroed out during creation. This can save an incredible amount of time during the restoration of VLDBs. As you can imagine, the zeroing out of a 1 TB data file can take a very long time.

Trace flag 3004 turns on information regarding instant file initialization. Enabling this trace flag will not make this information available to view. You will still need to turn on trace flag 3605 to send this information to the error log.
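A sketch of using 3004 together with 3605 to surface the file initialization messages in the error log (the database name is illustrative):

```sql
-- Route the extra diagnostic output to the ERRORLOG
DBCC TRACEON (3004, 3605, -1);

-- Creating (or restoring) a database now writes zeroing/initialization
-- messages to the error log
CREATE DATABASE DemoDb;

DBCC TRACEOFF (3004, 3605, -1);
```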

Trace Flag 3014

Trace flag 3014 provides detailed information regarding the steps performed during the backup and restore process. Normally, SQL Server only provides a limited amount of information in the error log regarding these processes. By enabling this trace flag you’ll be able to see some very detailed and interesting information.
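Similarly, a hedged sketch for watching the detailed backup steps (the database name and path are illustrative):

```sql
DBCC TRACEON (3014, 3605, -1);   -- detailed backup/restore steps go to the error log

BACKUP DATABASE DemoDb
TO DISK = N'C:\Backups\DemoDb.bak';

DBCC TRACEOFF (3014, 3605, -1);
```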

Trace Flag 3604

Trace flag 3604 can be used under a variety of circumstances. If you’ve ever used DBCC IND or DBCC PAGE then you’ve probably already used trace flag 3604. It simply informs SQL Server to send some DBCC output information to the screen instead of the error log. In many cases, you have to use this trace flag to see any output at all.
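For example, DBCC PAGE produces no visible output unless 3604 is on (the database name and page numbers are illustrative):

```sql
DBCC TRACEON (3604);                      -- send DBCC output to the client, not the error log
DBCC PAGE ('AdventureWorks', 1, 1, 3);    -- database, file id, page id, print option
DBCC TRACEOFF (3604);
```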

Trace Flag 3605

Trace flag 3605 will send some DBCC output to the error log. This trace flag needs to be enabled to see the instant file initialization information made available by trace flag 3004.

What’s New in T-SQL in SQL Server 2012–Result Sets

Let's build a scenario.

Imagine you need to standardize your stored procedure output for dynamic batching. As a matter of fact, if the output of your stored procedure is different from what you expect, your query will fail.

So you need to standardize the output of your stored procedure. To do that, let's welcome the new SQL Server 2012 T-SQL feature: RESULT SETS.

RESULT SETS lets you format the stored procedure output to match your requirements, no matter how the data is shaped internally. Let's take a look at the following syntax:

EXECUTE <batch_or_proc> WITH <result_sets_option>;

Sample code for that (assuming a stored procedure named dbo.GetEmployees that returns employee data) is:

    EXECUTE dbo.GetEmployees
    WITH RESULT SETS
    (
        (
            EmployeeId INT,
            EmployeeName VARCHAR(150)
        )
    );


Some notes to consider when using Result Set

  • If you want to prevent a stored procedure from returning a result set, you can use the RESULT SETS NONE clause.
  • The WITH RESULT SETS option cannot be specified in an INSERT…EXEC statement.
  • The number of columns being returned as part of a result set cannot be changed.
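As a sketch of the first note, suppressing any result set from a call (the procedure name is hypothetical):

```sql
-- The caller gets no result set back, regardless of what the procedure selects
EXECUTE dbo.PurgeOldLogs WITH RESULT SETS NONE;
```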