I have been asked to build a specification for a Server Install for SQL Server 2008 and it is to make use of SAN Storage.
With a server connected to direct-attached disks (local or in a disk array), I would set it up as:
C: – OS
D: – MSSQL Binaries
F: – System DBs
G: – tempdb
I: – Data files
T: – Tranlogs
V: – Backups and Maintenance Logs
Normally I would try to have them write to different drives.
However, with SAN storage I am not able to dictate the setup. I requested that the logical drive letters be housed on different spindles, but I have been told to work with what I have: the storage team will provide the space without saying whether it sits on the same or different spindles. What I would like to know is this: does it make sense to keep the logical-drive separation when the drives will all be on the same spindles, or is it better to match the number of logical drives to the number of spindles they are on?
When you’re deploying a new SQL Server to a SAN, the DBA should work with the SAN Administrator to determine the optimum HBA queue depth setting. SQLCAT recommends 64 or even 128, but in most shops I’ve found that the default of 32 is used. Also make sure you do a multipath failover test. LUNs should not be shared with other applications.
Update Host Bus Adapter (HBA) Firmware
• The DBA must confirm with the SAN Administrator that the HBA firmware is current.
Generally, these updates can’t be done online while the server is accessing data, so it’s better to get the firmware up to date before the box goes into production.
• The DBA must confirm with the SAN Administrator that the multipathing SAN software is being used to ensure SQL Server can always communicate with its disk drives.
A database server has to be able to access its drives at all times. When hard drives were directly attached to servers, this wasn’t a big risk, but storage area networks introduce many more failure points: cables get unplugged or bent beyond repair, switches fail, and network configurations don’t go according to plan.
Multipathing software mitigates this risk by enabling the SAN admin to set up multiple routes between a server and its drives. The multipathing software handles all IO requests, passes them through the best possible path, and takes care of business if one of the paths dies.
In the event of a problem like an unplugged cable, the multipathing software will sense that IO has taken too long, then reset the connections and pass the request over an alternate path. The application (like SQL Server) won’t know anything went wrong, but the IO request will take longer than usual to perform. Sometimes in SQL Server, this shows up as an application-level alert that IO has taken more than 15 seconds to complete.
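The failover behaviour described above can be sketched as follows. This is an illustrative model only; `Path` and `multipath_send` are hypothetical names, not a real multipath driver API:

```python
# Sketch of multipath failover: IO is passed down the best path, and on a
# timeout the driver resets and retries the alternate path. The caller
# (SQL Server) never sees the failure; the request just takes longer.

class Path:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def send_io(self, request):
        if not self.healthy:
            raise TimeoutError(f"{self.name}: IO exceeded timeout")
        return f"{request} completed via {self.name}"

def multipath_send(paths, request):
    """Try each path in turn; surface an error only if every path is dead."""
    last_err = None
    for path in paths:
        try:
            return path.send_io(request)
        except TimeoutError as err:
            last_err = err  # path is dead; fail over to the next one
    raise last_err

paths = [Path("fiber-A", healthy=False), Path("fiber-B")]
print(multipath_send(paths, "read block 42"))
# -> read block 42 completed via fiber-B
```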
Multipathing and Failover Testing
• The DBA must conduct a basic multipathing failover test with the SAN Administrator to ensure that SQL Server operations can continue in the event of a physical path failure (for example, an unplugged fibre cable) while SQL Server communicates with its disk drives.
• The DBA should conduct a SAN zone multipathing failover test with the SAN Administrator to ensure that SQL Server operations can continue in the event of a SAN zoning error while SQL Server communicates with its disk drives.
Start a huge file copy to each array, and do them all simultaneously. Go into the datacenter and pull one fiber cable out. Watch to make sure the file copy continues. Some SAN drivers will take 10-15 seconds to fail over, but the file copies should not be disrupted. If they’re disrupted, the multipathing didn’t work. Then plug the cable back in, and pull another cable. Again, the file copy should continue. Finally, while a file copy is running, ask the SAN admin to disable one of the SAN zones for the server – that way, the fiber cable will still be lit up, but the path to the storage will be gone. (This is a tougher failover method than just pulling the fiber.)
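A minimal load generator for the test above could look like this. It keeps a sustained, fsync’d write stream going to each array simultaneously while cables or zones are pulled; the target paths, sizes, and function names are illustrative assumptions:

```python
import os
import threading

def hammer(path, chunks=2_000, chunk_bytes=4 * 1024 * 1024):
    """Sustained write stream to one array; fsync forces the IO out to the
    SAN rather than letting it sit in the OS cache."""
    chunk = b"\0" * chunk_bytes
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())

def run_load_test(targets, **kw):
    """One writer thread per array, all running at the same time."""
    threads = [threading.Thread(target=hammer, args=(p,), kwargs=kw) for p in targets]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Example (drive letters are placeholders; point them at your SQL arrays):
# run_load_test([r"F:\failover_test.bin", r"G:\failover_test.bin"])
```

If a writer thread raises or stalls while a cable is out, the multipathing did not work.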
HBA Queue Depth Optimisation
• The DBA must work with the SAN Administrator to conduct a SQL Server tuning exercise which will determine how close we can come to implementing the SQLCAT recommendations for SQL Server HBA Queue Depth without prohibitively impairing the performance of other SAN consumers.
The HBA is used to connect your server to the SAN. The HBA Queue depth setting throttles the maximum amount of I/O operations that can simultaneously flow to the SAN from the HBA port. How you modify this setting depends on the brand and model of your HBA. If the queue depth setting value is too low, your SQL Server instance I/O throughput can suffer.
SQL Server DBAs should work with their company’s SAN administrator to find a healthy balance between the SQL Server instance’s overall I/O throughput and the SAN’s overall capacity.
The SQL Server Customer Advisory Team (SQLCAT) recommends increasing it from a typical default of 32 to 64 or even 128 for SQL Servers. The caveat is that it increases the load on the SAN so if it’s a shared SAN you might affect the performance for the other consumers. Testing will therefore be required.
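Little’s Law gives a back-of-envelope feel for why queue depth matters: outstanding IOs = IOPS × latency, so the per-port queue depth caps the IOPS a single HBA port can sustain. The latency figure below is an illustrative assumption, not a measurement:

```python
# Rough sizing sketch for HBA queue depth (Little's Law). If the workload
# needs more outstanding IOs than the queue depth allows, the HBA throttles
# throughput.

def max_iops(queue_depth, latency_s):
    """Upper bound on IOPS one HBA port can sustain at a given SAN latency."""
    return queue_depth / latency_s

latency = 0.005  # assumed 5 ms average SAN response time
for depth in (32, 64, 128):
    print(f"queue depth {depth:>3}: ~{max_iops(depth, latency):,.0f} IOPS max")
# queue depth  32: ~6,400 IOPS max
# queue depth  64: ~12,800 IOPS max
# queue depth 128: ~25,600 IOPS max
```

This is why raising the default of 32 can help a busy instance, and also why it raises the aggregate load a shared SAN must absorb.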
• The DBA should work with the SAN Administrator to ensure that SQL Server disk drives reside on a dedicated LUN.
Also try to ensure that your log files are on a separate LUN consisting of dedicated disks. Log files are typically written in sequential patterns, unlike data files, and a LUN whose drives are shared with another application will not provide optimal IO performance. Be aware that your SAN administrator may not permit you to dedicate a separate disk or set of disks to your logs.
• The DBA must work with the SAN Administrator to establish that the SAN cache will be sufficient for the SQL Server deployment at hand.
It is also possible for some file or database operations (such as a checkpoint) to saturate the cache, resulting in degraded read and write performance. Benchmark with your SAN vendor to ensure that the SAN cache will be adequate to provide optimal database performance.
Drive Layout Configuration
• For medium usage instances, the following drive configuration should be used –
User Database Data      F:\MSSQL\<Instance name>\Data
User Database Logs      G:\MSSQL\<Instance name>\Logs
TempDB Data and Logs    H:\MSSQL\<Instance name>\TempDB
• For heavy usage instances, the following drive configuration should be used –
User Database Data      F:\MSSQL\<Instance name>\Data
User Database Logs      G:\MSSQL\<Instance name>\Logs
TempDB Data             H:\MSSQL\<Instance name>\TempDBData
TempDB Logs             I:\MSSQL\<Instance name>\TempDBLogs
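The heavy-usage layout above could be pre-created with a short script. This is a sketch only; the drive letters come from the table, and the `build_layout` helper plus its `root_override` test hook are hypothetical names introduced here:

```python
import os

# Placeholder layout from the heavy-usage table; {instance} is filled in
# with the actual instance name at run time.
LAYOUT = {
    "User Database Data": r"F:\MSSQL\{instance}\Data",
    "User Database Logs": r"G:\MSSQL\{instance}\Logs",
    "TempDB Data":        r"H:\MSSQL\{instance}\TempDBData",
    "TempDB Logs":        r"I:\MSSQL\{instance}\TempDBLogs",
}

def build_layout(instance, root_override=None):
    """Create each directory; root_override maps the Windows paths under a
    plain directory so the sketch can be exercised on any platform."""
    created = []
    for purpose, template in LAYOUT.items():
        path = template.format(instance=instance)
        if root_override:
            path = os.path.join(
                root_override, path.replace(":", "").replace("\\", os.sep)
            )
        os.makedirs(path, exist_ok=True)
        created.append(path)
    return created

# Example: build_layout("INST01")
```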
• The DBA should work with the SAN Administrator to ensure that a RAID and file separation strategy is chosen that is appropriate for the size of SQL Server instance being deployed.
The small database server. This would be a database system having a small number of users and a low number of transactions per second. For a system of this type, putting the data files, transaction logs and tempdb database on a single RAID 5 array will be fine. With a load this low, there should be minimal to no performance loss because of having the files on the same array.
The medium database server. This would be a system that has a larger number of users and no more than 100 transactions per second. At this system size, you will begin to separate parts of the database. Start by separating the database files and log files onto different RAID arrays. The tempdb database can probably remain on the same array as the database files. But, if the tempdb database is heavily used for things like lots of temporary table usage, it should be moved to its own drive array.
The large database server. This system is for hundreds of users and hundreds or thousands of transactions per second. When designing systems of this size, break apart not only the database files, log files and tempdb database files, but also split the database itself into smaller data files spread over multiple drives. This includes moving the indexes to separate files on RAID 1 or RAID 5 arrays, and moving blob data (data stored in TEXT, NTEXT, IMAGE, VARCHAR(MAX) and NVARCHAR(MAX) columns) to a separate RAID 5 array. You can also place different types of tables on different drives by assigning them to different filegroups.
It is recommended that for larger systems, each filegroup of the database should have between 0.25 and 1 physical file per physical CPU core in the server. The tempdb database should have one data file per physical CPU core.
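As a worked example of the file-count guidance above (the core count and the midpoint ratio are illustrative assumptions):

```python
import math

def filegroup_files(cores, ratio=0.5):
    """0.25-1.0 data file per physical core for each user filegroup;
    ratio picks the point in that range (0.5 is an assumed midpoint)."""
    return max(1, math.ceil(cores * ratio))

def tempdb_files(cores):
    """One tempdb data file per physical core."""
    return cores

cores = 16  # assumed physical core count
print(filegroup_files(cores))         # 8 files per filegroup at ratio 0.5
print(filegroup_files(cores, 0.25))   # 4 files at the low end of the range
print(tempdb_files(cores))            # 16 tempdb data files
```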
Most databases are high read and low write, and database files are usually placed on RAID 5 arrays, which give good disk performance at a reasonable cost. However, because tempdb is a very high-write database, a RAID 5 array isn’t the proper place for it. Put tempdb on a RAID 1 or RAID 10 array instead, as these are optimised for high-write applications. If you can afford an additional RAID 1 or RAID 10 array for each physical tempdb data file, you’ll get increased performance.
For best performance, put tempdb (both data and logs) on its own RAID array or arrays that are not shared with any other parts of the database or system. This also means not carving other LUNs from the same SAN spindles: if other LUNs share spindles with the tempdb LUN, they place additional load on those disks, and that load is extremely difficult to troubleshoot from the server side because the server doesn’t know about the extra I/O on the disks.
Tiered Storage – Select Appropriately
• The DBA should work with the SAN Administrator to ensure that the tier of SAN selected for hosting SQL Server operational files is appropriate for the business requirements that the deployment aims to meet.
Tier 1 Storage
For business-critical 24×7 databases, file servers, email applications, and data warehouses, a redundant, cache-based tiered storage model (tier 1 storage) is the best option. Tier 1 storage offers quick response times and fast data transfer rates, making it a great solution for companies that need to store high-performance data demanding high availability.
Tier 2 Storage
For seldom-used, non-critical databases—historical data for instance—a tier 2 storage model is a great option since tier 2 data can generally be stored on less expensive media in storage area networks (SAN). Tier 2 storage is a good solution for organizations that have a large amount of data that doesn’t require 24×7 availability or extensive backup. Tier 2 storage can also help reduce hardware costs and management overhead.
Tier 3 Storage
For rarely accessed data, a tier 3 storage model offers further economies of scale since data can be stored on even less expensive media such as recordable compact discs. Tier 3 storage is a convenient and simple way for IT administrators to protect large amounts of non-critical data from fire, theft, and computer malfunctions.