Category Archives: MS Windows

Performance and Tuning: Page Life Expectancy

SQL Server 2012 – Performance and Tuning: Page Life Expectancy

The Buffer Manager object provides counters to monitor:
* How SQL Server uses memory to store data pages.
* Physical I/O as SQL Server reads and writes database pages.
* The buffer pool extension, which extends the buffer cache using fast non-volatile storage such as solid-state drives (SSDs).

Performance profile: how one approaches performance tuning depends on the workload. The performance profile of an OLTP application differs significantly from that of a relational data warehouse or reporting application, and it is helpful to understand these differences and the objectives for high performance.

OLTP applications are characterized by high volumes of small identical transactions.  These can include SELECT, INSERT, UPDATE and DELETE operations.  The implications are significant in terms of database design, resource utilization and system performance.

You can find the value of page life expectancy (PLE) by running the following query:

SELECT [object_name],
       [counter_name],
       [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
  AND [counter_name] = 'Page life expectancy';

There are performance problems if any of the following are true:

| Resource issue | Rule | Description | Value | Source | Problem description |
| --- | --- | --- | --- | --- | --- |
| Database design | Rule 1 | High-frequency queries having many table joins | > 4 | sys.dm_exec_sql_text | High-frequency queries with lots of joins may be too normalized for high OLTP scalability |
| Database design | Rule 2 | Frequently updated tables having many indexes | > 3 | sys.indexes, sys.dm_db_index_operational_stats | Excessive index maintenance for OLTP |
| Database design | Rule 3 | Big I/Os: table scans, range scans | > 1 | Perfmon object: SQL Server Access Methods | Missing index; flushes cache |
| Database design | Rule 4 | Unused indexes | Index not in use | sys.dm_db_index_usage_stats | Index maintenance for unused indexes |
| CPU | Rule 1 | Signal waits | > 25% | sys.dm_os_wait_stats | Time in the runnable queue is pure CPU wait |
| CPU | Rule 2 | Plan re-use | < 90% | Perfmon object: SQL Server SQL Statistics | OLTP identical transactions should ideally have > 95% plan re-use |
| CPU | Rule 3 | Parallelism: CXPACKET waits | > 5% | sys.dm_os_wait_stats | Parallelism reduces OLTP throughput |
| Memory | Rule 1 | Avg page life expectancy | < 300 seconds | Perfmon objects: SQL Server Buffer Manager, SQL Server Buffer Nodes | Cache flush due to big reads; possible missing index |
| Memory | Rule 2 | Avg page life expectancy | Drops by 50% | Perfmon object: SQL Server Buffer Manager | Cache flush due to big reads; possible missing index |
| Memory | Rule 3 | Memory grants pending | > 1 | Perfmon object: SQL Server Memory Manager | Current number of processes waiting for a workspace memory grant |
| I/O | Rule 1 | Avg disk seconds/read | > 20 ms | Perfmon object: Physical Disk | Reads should take 4 to 8 ms with no I/O pressure |
| I/O | Rule 2 | Avg disk seconds/write | > 20 ms | Perfmon object: Physical Disk | Sequential writes can be as fast as 1 ms for the transaction log |
| I/O | Rule 3 | Big I/Os: table scans, range scans | > 1 | Perfmon object: SQL Server Access Methods | Missing index; flushes cache |
| I/O | Rule 4 | Top 2 wait-stats values are I/O-related | Top 2 | sys.dm_os_wait_stats | If the top 2 wait-stats values include I/O waits, there is an I/O bottleneck |
| Blocking | Rule 1 | Block percentage | > 2% | sys.dm_db_index_operational_stats | Frequency of blocks |
| Blocking | Rule 2 | Blocked process report | 30 sec | sp_configure, Profiler | Report of blocked statements |
| Blocking | Rule 3 | Avg row lock waits | > 100 ms | sys.dm_db_index_operational_stats | Duration of blocks |
| Blocking | Rule 4 | Top 2 wait-stats values include lock waits (LCK_x) | Top 2 | sys.dm_os_wait_stats | If the top 2 wait-stats values include locking, there is a blocking bottleneck |
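A quick way to apply the wait-stats rules above is to list the top waits directly; a minimal sketch against sys.dm_os_wait_stats (the excluded wait types are an illustrative, not exhaustive, list of benign waits):

```sql
-- Top waits by total wait time since instance start
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

If the top entries are PAGEIOLATCH_x or WRITELOG, suspect an I/O bottleneck; if they are LCK_x, suspect blocking.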

Because OLTP transactions are largely identical, plan re-use is desirable. CPU utilization can thus be reduced with plan re-use and join reduction. I/O performance can be improved with good indexing, join reduction, and a high page life expectancy. Sorts can be limited with index usage, and blocking can be reduced with index design and short transactions.


— Microsoft Certified Solutions Associate (MCSA)

Windows 30 years

Old Windows – 30 years

[Image gallery: Windows 1.0, Windows 1981, Windows 1995, Windows 1998, Windows 2000, Windows Me, Windows XP]
Thirty years of old Windows versus OS X: Microsoft chooses to roll out Windows 10 bit by bit over five years (more info).

Microsoft is giving away Windows 10 for free 🙂 "Nothing is free" is what I have heard all my life, so what is the thinking behind this move from Satya Nadella?

CEO Satya Nadella says he wants to have 1 billion devices running Windows 10 in three years. Microsoft estimates there are 1.5 billion people who currently use some kind of Windows. Microsoft makes money from selling advertising for its Bing search engine, and Windows 10 comes with many apps that steer people to Bing. The company also collects fees from people who use premium versions of its Office software, OneDrive cloud storage and Skype. Microsoft plans to phase out maintenance and security support for Windows 7 over the next five years, and for Windows 8 by 2023 (that's correct), as it did with the older Windows XP.


Troubleshooting SQL Server problems

MS SQL Server rarely operates in isolation, so when problems occur there is often uncertainty about the root cause, and the problem scope is rarely well outlined. Although many of the details here are not specific to SQL Server problems, they are good practices for troubleshooting many types of complex IT issues. Accurate root cause diagnosis depends on correctly understanding the problem and its cause; sometimes the cause is obvious, but not always.

Regardless of the problem, you will need to do some forensic analysis to determine its cause:

1. Define the problem and establish a clear problem statement.
2. Determine the problem's impact; IT chiefs and stakeholders often don't want to know the technical details.
3. Engage the correct resources, internal or external.
4. Identify potential causes; meet with all necessary parties, physically or virtually.
5. Plan and coordinate tasks across teams.
6. Select a communication plan and review it; document the plan and agree on who will keep management, end users, and the technical team updated.
7. Identify the root cause.
8. Determine the solution.
9. Test and implement; even if the problem does not exist in the test or pre-production environment, implement the fix there first.
10. Review.

SQL Server is typically affected by the following bottlenecks:
🙂 Memory
🙂 File I/O
🙂 Locking, blocking, or deadlocking

Start with SQL Server Performance Monitor and make sure that the disk counters are on. To check, run diskperf from a command prompt; if the disk counters are not on, run diskperf -y and then restart the computer. In the Performance Monitor log, collect the following information:

🙂 Paging file
🙂 Process
🙂 Processor
🙂 All SQL Server counters
🙂 Memory
🙂 Threads
🙂 Logical disk
🙂 Physical disk
🙂 System

In most cases, these tools should provide enough clues to determine the cause of a particular problem. A good DBA troubleshooting approach, with a positive attitude and moderate determination and persistence, certainly helps identify the root cause and resolve issues.

A positive attitude leads to better quality results, faster resolution, and it will reduce the stress level for you and co-workers during the troubleshooting process.

Certain behaviors and attitudes are characteristic of the most effective database professionals when troubleshooting complex problems. Remain calm; problems are never random, and problems with computers happen for a reason. Avoid prejudice and jumping straight to fixes; think ahead. It is common to spend a lot of time troubleshooting numerous issues that have nothing to do with the main source of the problem. Determine the problem's real impact on the business:

🙂 How severely is the system affected?
🙂 How many users cannot work?
🙂 Is money being lost? If so, quantify the amount.
🙂 What is the visibility of the issue?
🙂 Are external customers affected?
🙂 Could any compliance obligations be breached?
🙂 How serious are the consequences if the problem persists?

Gather the agreements between IT and the business: the service-level agreement (SLA), recovery point objective (RPO), and recovery time objective (RTO). Also gather an environment overview (network diagram, application architecture), the Windows System and Application event logs, and the SQL Server error logs. Giving yourself time to understand the problem and the application environment often leads to a higher-quality and faster resolution. Gather all relevant logs in a single location:

🙂 Windows and System Event logs
🙂 SQL Server Error Logs
🙂 Dump files
🙂 Application logs

Many problems begin life as an application behavior or performance issue, and other software components or interactions may affect the database platform; the DBA is often brought in even when the problem has nothing to do with the data or the database. Here is what to look for when troubleshooting each major problem area:

🙂 Connectivity issues — Network (NIC)
🙂 Performance issues — For a performance problem you need to determine if the problem is on the client, the middle tier, the server on which SQL Server runs, or the network.
🙂 Hardware bottlenecks — Identify resource contention around disk (review the PerfMon counters Average Disk Sec/Read and Average Disk Sec/Write to verify that the time for a read or write is ideally below 20 to 30 milliseconds for OLTP systems; higher is acceptable for decision support systems), CPU (maximum degree of parallelism), network, or memory (the AWE feature was discontinued in SQL Server 2012).
🙂 SQL Server issues — As well as hardware contention, SQL Server has finite internal resources, such as locks, latches, worker threads, and shared resources such as tempdb.
🙂 Compilation issues — If possible, identify one user query that is slow. The most common cause is insufficient resources, which can result from a sub-optimal query plan due to missing or outdated statistics, or from inefficient indexes. You can investigate SQL Server performance issues with simple tools; to capture a performance trace with the most basic tool, a Data Collector Set:


* Click on Start | Run and type Perfmon.exe

* Go to “Data Collector sets”, expand it and then right click on “User Defined”.

* Choose to create a new “Data Collector Set”.

* Give a name to the set, e.g. “SQLPerf”. Also choose to “Create Manually (Advanced)”. Click on Next.

* Then choose to “create data logs” and check only the “Performance counter” option. Click on Next.

DBAs like to add all the counters and all instances from the Perfmon objects below, so that every detail is captured. Operating system objects:

* Memory

* Network Interface

* Objects

* Paging File

* Physical Disk

* Process

* Processor

* Redirector

* Server

* Server Work Queues

* System

* Thread

SQL Server Objects:

* SQLServer: Access Methods

* SQLServer: Buffer manager

* SQLServer: Locks

* SQLServer: SQL Statistics

I usually set the interval of the Perfmon log to 10 to 20 seconds. After you stop the trace, it is saved with a .blg file extension. After all this hard work, what comes out? If the SQL Server counters look healthy, the performance problem is usually somewhere outside of SQL Server; at that point some DBAs are tempted to escape the problem and blame the other team. :)
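The disk-latency check from the hardware bottlenecks section can also be done from inside SQL Server; a minimal sketch using sys.dm_io_virtual_file_stats:

```sql
-- Average I/O stall per read/write, per database file (since instance start)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;
```

Files averaging well above 20 ms per read line up with the I/O rules discussed earlier.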

Let me know if you have any further questions; your comments will be a learning point.

Microsoft Certified Solutions Associate (MCSA)

Memory configuration


As with security configuration, you should make server and database configuration changes with care. Most of the configurable settings in SQL Server work best when left at their default values. You should measure any changes from the defaults for performance and stability impacts in a controlled testing environment before introducing those changes into production.
Like any change, configuration changes should be accompanied by a detailed change log. It’s often tempting to flick the switch on a configuration setting on the assumption that it may improve performance. We then look at other configuration categories: CPU, SQL Server settings, and operating system configuration.

Memory configuration:-
SQL Server 2008 can address the amount of memory supported by the underlying operating system. However, like previous versions, 32-bit editions of SQL Server 2008 are constrained to 2GB of RAM unless configured with special settings. Let's begin memory configuration with a look at 32-bit memory management.

32-bit memory management:-
All 32-bit systems can natively address a maximum of 4GB of memory (2^32 = 4,294,967,296 bytes). Until recent times, this limitation wasn't an issue; a quick scan of older documentation reveals terms such as very large when referring to memory beyond 4GB. In today's terms, systems with 8 or 16GB of RAM are considered normal, making correct memory configuration in 32-bit systems very important in order to derive the maximum performance benefit.
Apart from installing 64-bit versions of Windows and SQL Server, there are two ways of providing SQL Server with more than 2GB of memory; using the /3GB option or using Address Windowing Extensions (AWE) with the /PAE option. /3GB Of the 4GB of RAM that a 32-bit system can natively address, 2GB is reserved by Windows, leaving applications such as SQL Server with a maximum of 2GB. In Windows Server 2003, we used the /3GB option in the boot.ini file to limit Windows to 1GB of memory, enabling SQL Server to access up to 3GB. In Windows Server 2008, we use the BCDEdit command with the increaseuserva option with an optional parameter that determines the size of the available user space, such as 3072 for 3GB.

For 32-bit systems with 4GB of RAM, these options are a good way of squeezing more memory out of Windows for use by SQL Server, but limiting Windows to 1GB of RAM isn’t always trouble free, particularly on systems with a large number of drivers and/or drivers that use a large amount of memory. Depending on the server configuration, these options may actually reduce performance and reliability, so use them with care. For 32-bit systems with more than 4GB of RAM, we can use the /PAE option.

Intel first introduced 36-bit Physical Address Extensions (PAE), which, combined with Address Windowing Extensions (AWE), allows 32-bit systems to address memory beyond 4GB.


Despite the increased memory that can be accessed with PAE/AWE, there are some limitations when used by SQL Server in 32-bit environments:
** Memory above 4GB accessed using PAE/AWE can only be used by the SQL Server data cache. The procedure cache, used for query compilation plans, isn’t able to take advantage of this memory. We’ll cover the procedure cache in more detail in chapter 17, along with a number of related settings, such as forced parameterization.

** Analysis Services and Integration Services components aren’t able to utilize memory accessed using PAE/AWE.

** Unlike a flat 64-bit environment, there's some overhead in mapping into the AWE memory space on 32-bit systems. On 32-bit AWE-enabled systems, the service account running the SQL Server service must be given the Lock Pages in Memory right; as a consequence, AWE memory isn't paged out to disk by the operating system. You assign this right to an account by using the Windows Group Policy Editor. So if the /PAE option allows us to address memory above 4GB, and /3GB allows us to get an extra 1GB from Windows below 4GB, then to obtain the maximum amount of memory for SQL Server we should use both, right? Maybe yes and maybe not: /3GB AND /PAE together have a catch.

When using PAE, Windows uses memory below 4GB to map to memory above 4GB. The more memory above 4GB to map to, the more memory below 4GB is required for the mapping. The magic number is 16GB.


Beyond 16GB of RAM, don't use /3GB together with /PAE. If you do, only 16GB will be addressable, and any additional memory beyond that is wasted. The /3GB option is known to cause stability issues in some circumstances, so even on systems containing between 5GB and 16GB of RAM, you must use this setting with caution. One of the nice things about 64-bit systems is that all of the configuration issues we've just covered are no longer of concern.

64-bit memory management:-
Unlike 32-bit systems, 64-bit systems don’t require the memory configuration just described. The full complement of system RAM can be accessed by all SQL Server components without any additional configuration. The one optional memory configuration for 64-bit systems is setting the Lock Pages in Memory right, as covered earlier. While this setting is optional for a 64-bit system, locking pages in memory is beneficial in order to prevent Windows from paging out SQL Server’s memory. If you don’t enable this setting, certain actions such as large file copies can lead to memory pressure with Windows paging, or trimming, SQL Server’s memory. This sometimes leads to a sudden and dramatic reduction in SQL Server performance, usually accompanied by the “A significant part of sql server process memory has been paged out…” message. Setting the Lock Pages in Memory option prevents such incidents from occurring, and is therefore a recommended setting. Note that Windows Server 2008 handles memory trimming a lot better than 2003.

Regardless of the processor platform (32- or 64-bit), one of the important memory configuration tasks is to set the minimum and maximum server memory values.

Setting minimum and maximum memory values:-
When SQL Server starts, it acquires enough memory to initialize, beyond which it acquires and releases memory as required. The minimum and maximum memory values control the upper limit to which SQL Server will acquire memory (maximum), and the point at which it will stop releasing memory back to the operating system (minimum).

The minimum and maximum memory values for an instance can be set in SQL Server Management Studio or by using the sp_configure command. By default, SQL Server's minimum and maximum memory values are 0 and 2,147,483,647, respectively. The default min/max settings essentially allow SQL Server to cooperate with Windows and other applications by acquiring and releasing memory in conjunction with the memory requirements of other applications. On small systems with a single SQL Server instance, the default memory values will probably work fine in most cases, but on larger systems we need to give this a bit more thought. Let's consider a number of cases where setting min/max values is required, beginning with systems that lock pages in memory.
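Setting these values with sp_configure can be sketched as follows; the figures are purely illustrative, and you should choose limits appropriate to your server's RAM and workload:

```sql
-- Illustrative values only: a 4GB floor and a 28GB ceiling
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```

The maximum value is the one that matters most on dedicated servers that lock pages in memory, for the reasons discussed next.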

As you know, when the Lock Pages in Memory setting is enabled, SQL Server’s memory will not be paged out to disk. This is clearly good from a SQL Server performance point of view, but consider a case where the maximum memory value isn’t set, and SQL Server is placed under significant load. The default memory settings don’t enforce an upper limit to the memory usage, so SQL Server will continue to consume as much memory as it can get access to, all of which will be locked and therefore will potentially starve other applications (including Windows) of memory. In some cases, this memory starvation effect can lead to system stability issues. In SQL Server 2005 and above, even with the Lock Pages in Memory option enabled, SQL Server will respond to memory pressure by releasing some of its memory back to the operating system. However, depending on the state of the server, it may not be able to release memory quickly enough, again leading to possible stability issues. For the reasons just outlined, systems that use the Lock Pages in Memory option should also set a maximum memory value, leaving enough memory for Windows.


A server containing multiple SQL Server instances needs special consideration when setting min/max memory values. Consider a server with three instances, each of which is configured with the default memory settings. If one of the instances starts up and begins receiving a heavy workload, it will potentially consume all of the available memory. When one of the other instances starts, it will find itself with very little physical memory, and performance will obviously suffer. In such cases, I recommend setting the maximum memory value of each instance to an appropriate level (based on the load profile of each).

On servers in which SQL Server is sharing resources with other applications, setting the minimum memory value helps to prevent situations in which SQL Server struggles to receive adequate memory. Of course, the ideal configuration is one in which the server is dedicated to SQL Server, but this is not always the case, unfortunately. A commonly misunderstood aspect of the minimum memory value is whether or not SQL Server reserves that amount of memory when the instance starts. It doesn’t.
When started, an instance consumes memory dynamically up to the level specified in the maximum memory setting. Depending on the load, the consumed memory may never reach the minimum value. [Figure: the relationship between a server's memory capacity and SQL Server's minimum and maximum memory values.]


Configuring memory maximums in a multi-instance cluster is important in ensuring stability during failover situations. You must ensure that the total maximum memory values across all instances in the cluster is less than the total available memory on any one cluster node that the instances may end up running on during node outage. Setting the maximum memory values in such a manner is important to ensure adequate and consistent performance during failover scenarios.

One of the important memory configuration considerations, particularly for 32-bit AWE systems and 64-bit systems that lock pages in memory, is the amount of memory to leave Windows. For example, in a dedicated SQL Server system with 32GB of memory, we’ll obviously want to give SQL Server as much memory as possible, but how much can be safely allocated? Put another way, what should the maximum memory value be set to? Let’s consider what other possible components require RAM:

*** Windows
*** Drivers for host bus adapter (HBA) cards, tape drives, and so forth
*** Antivirus software
*** Backup software
*** Microsoft Operations Manager (MOM) agents, or other monitoring software

Beyond the non–SQL Server components above, there are a number of SQL Server objects that use memory from outside of the buffer pool—that is, outside the memory area defined by the maximum memory setting. Memory for objects such as linked servers, extended stored procedures, and object linking and embedding (OLE) automation objects is allocated from an area commonly called the MemToLeave area.


As you can see, even on a dedicated server, there’s a potentially large number of components vying for memory access, all of which comes from outside the buffer pool, so leaving enough memory is crucial for a smooth-running server. The basic rule of thumb when allocating maximum memory is that the total of each instance’s maximum memory values should be at least 2GB less than the total physical memory installed in the server; however, for some systems, leaving 2GB of memory may not be enough. For systems with 32GB of memory, a commonly used value for Max Server Memory (totaled across all installed instances) is 28GB, leaving 4GB for the operating system and other components that we discussed earlier.

Given the wide variety of possible system configurations and usage, it's not possible to come up with a single best figure for the Max Server Memory value. Determining the best maximum value for an environment is another example of the importance of a load-testing environment configured identically to production, and an accurate test plan that loads the system as per the expected production load. Load testing in such an environment will set realistic expectations of likely production performance, and offers the opportunity to test various settings and observe the resulting performance. One of the great things about SQL Server, particularly the recent releases, is its self-tuning nature. Its default settings, together with its ability to sense and adjust, make the job of a DBA somewhat easier. You'll see how these attributes apply to CPU configuration.

CPU configuration:-
When an instance of SQL Server starts, it's created as an operating system process. Unlike a simple application that performs a series of serial tasks on a single CPU, SQL Server is a complex application that must support hundreds or even thousands of simultaneous requests. In order to do this, the SQL Server process creates threads. A multithreaded operating system such as Windows allows applications like SQL Server to create multiple threads in order to maximize CPU efficiency and application throughput. Threads are assigned and balanced across the available CPUs in a server. If a thread is waiting on the completion of a task such as a disk request, SQL Server can schedule the execution of other threads in the meantime. The combination of a multithreaded architecture and support for multi-CPU servers allows applications such as SQL Server to support a large number of simultaneous requests. With this background in mind, let's take a look at some of the configuration options that can be set to control the manner in which SQL Server threads are executed.

Boost SQL Server Priority option:-

Threads created in Windows are assigned a priority from 1 to 31, with thread priority 0 reserved for operating system use. Waiting threads are assigned to CPUs in priority order—that is, higher-priority threads are assigned for execution ahead of lower-priority threads.

By default, SQL Server threads are created with a “normal” priority level of 7. This priority ensures SQL Server threads are assigned and executed in a timely manner without causing any stability issues for anything else running on the server. The Boost SQL Server Priority option runs SQL Server threads at a priority level of 13, higher than most other applications. Although this sounds like a setting that should always be enabled, much like the “turbo” button on old PCs, it should only be used in rare circumstances.

Enabling this option has resulted in problems, such as not being able to shut down the server, and various other stability issues. In almost all cases, on well-configured dedicated servers, the performance difference is likely to be negligible at best. Unless you have a very accurate load-testing environment in which you can prove this option results in a measurable performance boost without causing any other stability or performance issues, use the default setting. Good luck with your configuration and settings.
This post is already very long; let's save the rest for future posts in 2014.
Welcome to 2014 and goodbye to 2013, to all readers.
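If you want to verify that the option is at its default, it is exposed through sp_configure as the advanced option 'priority boost'; a quick sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Inspect the current value (run_value 0 means the default, off)
EXEC sp_configure 'priority boost';
-- Ensure the default (off) setting
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
```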

Microsoft Certified Solutions Associate (MCSA)

Database file configuration


When it comes to individual databases, there are a number of recommended configuration steps that SQL Server doesn’t perform, in large part due to dependencies on disk configuration and unknown future usage of the databases.

Before covering specific file configuration recommendations, let’s address some of the terms used when discussing database files:

* Primary data file —The primary data file, and by default the only data file, contains system tables and information on all files within a database. By default, this file has an .mdf extension. If there are no other files in the database, the primary file also contains user objects such as tables and indexes.

* Secondary data file —Secondary files, which usually have an .ndf extension, are optional files that can be added to a database for performance and/or administrative benefits, both of which we’ll cover shortly. A database can contain one or more secondary files.

* Filegroups —Every database contains a primary filegroup, containing at least the primary data file, and possibly all secondary data files unless other filegroups are created and used. Filegroups are logical containers that group together one or more data files, and as we’ll see later in the chapter, provide several benefits.

* Transaction log file —Typically using the .ldf extension, the transaction log file records details of each database modification and is used for various purposes, including transaction log shipping, replication, database mirroring, and recovery of a database to a consistent state. With these terms in mind, let’s cover some of the major file configuration recommendations, starting with separating a database’s different storage objects across separate physical disk volumes.

*** Volume separation
By default, a database is created with a single data and transaction log file. Unless specified during installation or modified during database creation, both of these files will be created in the same directory, with the default size and growth rates inherited from the model database.

As shown below, an important database file configuration task, particularly for databases with direct-attached storage, is to provide separate physical RAID-protected disk volumes for data, transaction log, tempdb, and backup files.

Designing SAN-based virtualized storage is quite different from designing direct-attached storage; that being said, the principles of high performance and fault tolerance remain the same. In both cases, a good understanding of SQL Server's various storage objects is crucial in designing an appropriate storage system.


Unlike random access to data files, transaction logs are written sequentially. If a disk is dedicated to a single database’s transaction log, the disk heads can stay in position writing sequentially, thus increasing transaction throughput. In contrast, a disk that stores a combination of data and transaction logs won’t achieve the same levels of throughput given that the disk heads will be moving between the conflicting requirements of random data access/updates and sequential transaction log entries. For database applications with high transaction rates, separation of data and transaction logs in this manner is crucial.

A common (and recommended) backup technique, covered in detail in the next chapter, is to back up databases to disk files and archive the disk backup files to tape at a later point in the day. The most optimal method for doing this is to have dedicated disk(s) for the purpose of storing backups.

Dedicated backup disks provide several benefits:

** Disk protection —Consider a case where the database files and the backup files are on the same disk. Should the disk fail, both the database and the backups are lost, a disastrous situation! Storing backups on separate disk(s) prevents this situation from occurring—either the database or the backups will be available.

** Increased throughput —Substantial performance gains come from multiple disks working in unison. During backup, the disks storing the database data files are dedicated to reading the files, and the backup disks are dedicated to writing backup file(s). In contrast, having both the data and backup files on the same disk will substantially slow the backup process.

** Cost-effective —The backup disks may be lower-cost, higher-capacity SATA disks, with the data disks being more expensive, RAID-protected SCSI or SAS disks.

** Containing growth —The last thing you want is a situation where a backup consumes all the space on the data disk, effectively stopping the database from being used. Having dedicated backup disks prevents this problem from occurring.

Depending on the database usage profile, the tempdb database may come in for intense and sustained usage. By providing dedicated disks for tempdb, the impact on other databases will be reduced while increasing performance for databases heavily reliant on it.
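Moving tempdb onto a dedicated volume is a matter of relocating its files; a hedged sketch, where T:\ is a hypothetical dedicated tempdb volume (the change takes effect after the SQL Server service restarts):

```sql
-- tempdev and templog are tempdb's default logical file names
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
-- Restart the SQL Server service for the new locations to take effect
```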

SQL data files shouldn’t be located on the same disks as Windows system and program files. The best way of ensuring this is to provide dedicated disks for SQL Server data, log, backups and tempdb.

Note: Mount points are fully supported in Windows Server 2003 and 2008.

BI Informatica & SQL
Microsoft Certified Solutions Associate (MCSA)

Career in Crime?









Personally, I prefer to go for government:






Dad has only three fingers!


Get started with free resources to help you learn the basics about Windows Server 2012. Then, select the certification path for your experience. As you move along the path, choose the type of training that fits your learning style and schedule and prepare for your first—or next—exam to reach your certification and career goal.

(click the image to view the full-sized interactive graphic)

Windows Server Certification and Career Paths

Microsoft Learning Partners around the world are hosting technology launch events covering new product features for newly released technologies – starting with Windows Server 2012. Take advantage of this opportunity to learn about the hundreds of new features and enhancements spanning virtualization, networking, storage, user experience, cloud computing, automation, and more. You’ll also get the scoop on the latest Microsoft training and certification offering to help you and your team get skilled and ready for the latest advantages Windows Server 2012 has to offer.

Sign-up for an event near you at:

For a limited time, you can also earn points to receive prizes, including a travel package to attend any Microsoft TechEd on the planet! Click here to get a head start on the competition:

The Microsoft Virtual Academy (MVA) team is excited to open registration for two new courses in a three-Jump Start series focused on virtualization. These one-day live online events are designed for IT Pros experienced in virtualization but interested in learning how to better leverage the Windows Server 2012 platform, Hyper-V and System Center 2012 SP1. All three courses will leverage a team-teaching approach to deliver an engaging, demo-rich, live learning experience.

REGISTER NOW for each of these great (FREE) sessions!
