Improve the Write Performance of Your SQL Server Database

It’s true that fast disks (15K RPM or SSD) are good for I/O-intensive applications, but in a real-world scenario that alone is not enough.

In my five years with one organization, we had 10 TB of DELL storage with approximately 20 disks (600 GB each) at 15K RPM, yet we were still facing PAGEIOLATCH_SH, PAGEIOLATCH_EX and disk wait queue alerts.

The alert was:

“SQL Server has encountered x occurrence(s) of I/O requests taking longer than 15 seconds to complete on file […..]. The OS file handle is 0x000006A4. The offset of the latest long I/O is: 0x00000.”

Because everything was configured correctly at the database end, I reached out to IT infrastructure, who contacted DELL support, and fortunately they found it was a firmware issue.

So even apart from having the best-performing disks, there are several things we need to consider when preparing a database for an OLTP environment.

  1. Choose storage with the proper RAID configuration for each database file.
    1. Put your data files on RAID 5 for data safety (you can also use RAID 10 if you are using a storage mirroring solution).
    2. Put the transaction log (T-Log) file on RAID 10 for better write performance.
  2. Because the application writes data to disk, disk performance plays a great role in an OLTP application, so proper disk configuration is required. Space on a disk (HDD) is divided into 512-byte sectors, and the default 4 KB cluster size contains 8 sectors. But SQL Server data pages are 8 KB, so formatting the volume with an 8 KB cluster size for .mdf files and 64 KB for .ldf files will boost write performance (a sketch for checking the current cluster size follows this list).
  3. Use a Windows/domain service account for the SQL Server service and grant it the “Perform Volume Maintenance Tasks” right on the server. This enables instant file initialization, which skips zero-initialization of your data files when they are created or grown. It does not apply to the T-Log file, which is always zero-initialized by SQL Server (see the check after this list).
  4. Normally DBAs leave their database’s data files with the default auto-grow configuration, which adds extra work for the SQL Server engine to allocate space on disk at write time. It’s better to give your data files a fixed size and configure auto-grow in MB (fixed growth) instead of percent (dynamic growth). A good practice is to calculate database growth per month/year and pre-allocate the space that the data file (mdf/ndf) will require in the next month/year (see the example after this list).
  5. A very important part of database configuration is T-Log sizing. Before we set the transaction log file size, we should know a few things about it.
    1. The transaction log file consists of multiple VLFs (Virtual Log Files).
    2. It contains the active (not yet committed) transactions.
    3. The number and size of VLFs are determined by the initial size and auto-growth type (dynamic or fixed) selected when the database is created (this can be changed later).
    4. By default, the initial size of the transaction log file is 1 MB and auto-growth is 10%.
    5. Now it’s time to configure the transaction log file of our database. Before configuring it, we should know that both too many VLFs and very few, very large VLFs hurt database performance. Here is the formula used to calculate the number of VLFs from the transaction log (or growth chunk) size (a query to count VLFs follows this list):
      1. If you have a T-Log with an initial size of 1 MB and auto-growth set to 10% or 1 MB, you get 4 VLFs (3 of 248 KB and 1 of 272 KB) by default. If an active transaction exceeds the limit of these 4 VLFs, SQL Server adds VLFs of 256 KB.
      2. A T-Log chunk (initial size or growth increment) above 1 MB and up to 64 MB creates 4 VLFs (chunk size/4).
      3. A chunk above 64 MB and up to 1 GB creates 8 VLFs (chunk size/8).
      4. A chunk above 1 GB creates 16 VLFs (chunk size/16).
  6. Choose the proper recovery model according to your RPO & RTO (examples after this list).
    1. FULL: Writes each and every transaction to the transaction log file, so the number of writes to disk increases. But this is the only recovery model that supports point-in-time recovery of a database if something goes wrong.
    2. BULK_LOGGED: Records only the allocation units changed by bulk operations in the transaction log file, so the number of writes is lower compared to the FULL recovery model. The point to remember is that it does not support point-in-time recovery.
    3. SIMPLE: Writes every transaction to the transaction log file just like the FULL recovery model. The difference is that after a transaction is committed, it’s marked as inactive by SQL Server and gets overwritten by subsequent transactions, so the T-Log only grows according to the size of the active transactions.
  7. Create the minimum number of indexes on write-intensive tables (a query to spot low-value indexes follows this list).
  8. Don’t create the clustered index on an identity (incrementally valued) column. Instead, create the clustered index on a CHAR/VARCHAR-type column and adjust the FILLFACTOR according to performance (to decide the FILLFACTOR we have to do a POC, as it differs between environments and there is no formula to calculate it; see the example after this list).
  9. Use ROWLOCK/UPDLOCK hints with INSERT, UPDATE & DELETE queries (see the example after this list).
  10. Don’t place data files on compressed drives.
  11. Exclude SQL Server database files (.mdf, .ndf & .ldf) from antivirus scans.
  12. Use “Disk Defragmenter” periodically.
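
For point 2, here is a quick way to check the current cluster size of a volume. This is a minimal sketch that assumes xp_cmdshell is enabled (via sp_configure) and that you have sysadmin rights; drive E: is only an example:

```sql
-- Show NTFS details for the volume that holds the database files.
-- Assumes xp_cmdshell is enabled and sysadmin rights; E: is an example drive.
EXEC xp_cmdshell 'fsutil fsinfo ntfsinfo E:';
-- Look for the "Bytes Per Cluster" line in the output. Changing the cluster
-- size requires reformatting the volume, so plan it before creating the files.
```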
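
For point 3, on SQL Server 2016 SP1 and later you can confirm that instant file initialization is actually active for the engine service:

```sql
-- instant_file_initialization_enabled shows 'Y' when the service account
-- holds the "Perform Volume Maintenance Tasks" right (SQL Server 2016 SP1+).
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services;
```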
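
For point 4, this is what a fixed-size, fixed-growth configuration looks like. SalesDB and its logical file name are example names, and the sizes are placeholders you would replace with your own growth estimates:

```sql
-- Pre-allocate a year's worth of expected growth and switch auto-grow
-- from a percentage to a fixed increment.
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, SIZE = 50GB, FILEGROWTH = 512MB);
```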
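
For point 5, you can count the VLFs in the current log with sys.dm_db_log_info (SQL Server 2016 SP2 and later; older builds can use DBCC LOGINFO, which returns one row per VLF). As a worked example of the formula above, sizing the log in a single 8 GB step produces 16 VLFs of 512 MB each. SalesDB_Log is an example logical name:

```sql
-- Count the VLFs in the current database's transaction log (2016 SP2+).
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- Size the log in one step so the VLF count stays reasonable.
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Log, SIZE = 8GB, FILEGROWTH = 1GB);
```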
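
For point 6, the recovery model is set per database; SalesDB is again an example name:

```sql
-- Pick the recovery model that matches your RPO/RTO.
ALTER DATABASE SalesDB SET RECOVERY FULL;          -- point-in-time recovery
--ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED; -- fewer log writes for bulk loads
--ALTER DATABASE SalesDB SET RECOVERY SIMPLE;      -- log space reused after commit

-- Verify the current setting.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';
```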
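
For point 7, this query is one way to spot indexes that cost more to maintain on writes than they return in reads. The counters reset when the instance restarts, so judge them over a representative period:

```sql
-- Indexes in the current database that are written far more than they are read.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates                               AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
ORDER BY writes DESC;
```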
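
For point 8, this is what the suggestion looks like in practice. The table dbo.Orders, the column OrderCode and the FILLFACTOR of 80 are all hypothetical; the right FILLFACTOR comes out of your own POC:

```sql
-- Clustered index on a VARCHAR column with an explicit FILLFACTOR.
-- Table, column and the value 80 are placeholders for illustration only.
CREATE CLUSTERED INDEX CIX_Orders_OrderCode
    ON dbo.Orders (OrderCode)
    WITH (FILLFACTOR = 80);
```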
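
And for point 9, table hints go in a WITH clause on the DML statement. The table and values here are hypothetical:

```sql
-- Row-level lock hints on write statements (example table and data).
INSERT INTO dbo.Orders WITH (ROWLOCK) (OrderCode, Status)
VALUES ('ORD-1002', 'New');

UPDATE dbo.Orders WITH (ROWLOCK, UPDLOCK)
SET    Status = 'Shipped'
WHERE  OrderCode = 'ORD-1001';

DELETE FROM dbo.Orders WITH (ROWLOCK)
WHERE  Status = 'Cancelled';
```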

I hope my experience saves other DBAs some time and helps them keep their databases performing well.

Thanks
