If you’ve ever gone through the process of installing SQL Server, you’ll know that it’s very simple. Provide an admin password, pick your directories and voilà, the server is installed. The downside of this simplistic setup is that many configuration options are left at their default values. Ninety-nine times out of a hundred, this will not be ideal for your setup.
Most of the properties can be changed after the installation of SQL Server using SSMS (SQL Server Management Studio). If you’d like to make these changes through T-SQL, there’s a stored procedure called sp_configure that can do this for you.
What is sp_configure?
Sp_configure is a stored procedure that allows you to alter global server configuration, either temporarily or permanently (the change sticks after a server restart).
If we simply execute sp_configure on our server, we’ll get an output like this:
We get to see the name of each setting, the allowed minimum and maximum values, the config_value and the run_value. While the name and min/max values are pretty self-explanatory, the config_value tells us the configured start-up value; this value persists across server restarts. The run_value is the value currently in effect on the running instance. If the SQL service was restarted, any run_value that differs would default back to the config_value.
Simply running sp_configure shows us all the available settings. What if we wanted to see the values for just one specific setting? We can do that easily by providing its name, that is, if you can remember it off the top of your head:
exec sp_configure 'name of the setting'
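For example, checking just the parallelism setting (any valid option name works here) would look like this:

```sql
-- View the current min/max, config_value and run_value for one setting
exec sp_configure 'max degree of parallelism';
```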
To change the configuration value, you would run the sp_configure command like this:
exec sp_configure 'name of the setting', numeric_value
The RECONFIGURE statement acts as the “OK” button. If you forget to run it, the results window will remind you that you need to run it in order for changes to take effect.
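Putting the two together, the full pattern for any change looks like this (the setting name and value are placeholders):

```sql
exec sp_configure 'name of the setting', numeric_value;
reconfigure;  -- nothing takes effect until this runs
```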
As you can see in the first screenshot, there seem to be 17 different settings we can modify. Turns out, there are a lot more than that. On SQL 2012, there are 67 different advanced options. In order to see them all, we have to turn on the advanced option setting. Here’s how we do that:
exec sp_configure 'show advanced options', 1
If we re-run sp_configure, we’ll be able to see a lot more options.
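If you prefer querying over stored procedure output, the same information is exposed through the sys.configurations catalog view, which also lets you sort and filter:

```sql
-- Same data as sp_configure: value is the start-up value,
-- value_in_use is what the running instance is actually using
select name, minimum, maximum,
       value        as config_value,
       value_in_use as run_value
from sys.configurations
order by name;
```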
Most Common Options – Part 1
This is where Part 1 of this article comes into play. Because there are so many settings, I’ll focus on a handful that you should know, since they might come in handy in the future. Let’s go down the list and talk about the most important options:
Backup compression default
This option has been around for a while. Since SQL 2008R2 it has been available even in Standard edition; before that, it was Enterprise-only. It’s a great feature that helps keep your backup files much smaller than they would be without compression. You can expect anywhere between 30-70% compression, depending on your database. By default, this feature is turned off. There’s no good reason to keep this option OFF, and I highly recommend turning it on if you’re on 2008R2 or higher. However, if you use some kind of compression or de-duplication tool, be aware that those tools don’t handle compressed backups well, so you shouldn’t turn this feature on in that case. The syntax to turn on backup compression looks like this:
exec sp_configure 'backup compression default', 1
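This only changes the default; individual backups can still override it either way. The database name and path below are placeholders:

```sql
-- Force compression for a single backup regardless of the server default
-- (use WITH NO_COMPRESSION to do the opposite)
backup database AdventureWorks
to disk = N'D:\Backups\AdventureWorks.bak'
with compression;
```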
Blocked process threshold (s)
As the name suggests, this value is expressed in seconds. It sets up a threshold, in seconds, after which a report will be triggered. These reports are based on blocking events. In other words, if anyone is blocked for more than the set number of seconds, a report will be generated. These reports can be traced using SQL Server Profiler. The syntax to set a 20-second threshold looks like this:
exec sp_configure 'blocked process threshold (s)', 20
If you need to change this value up or down from a previously set value, just make sure to re-run the RECONFIGURE statement in order for it to take effect.
This is a very useful setting for troubleshooting. If you believe you’re experiencing blocking on the server, this is a good place to start your troubleshooting. When you decide to turn off the report, simply set the configuration value back to 0 (default).
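Turning the report back off is just the same command with the default value:

```sql
-- Disable blocked process reports when you're done troubleshooting
exec sp_configure 'blocked process threshold (s)', 0;
reconfigure;
```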
Cost threshold for parallelism AND Max degree of parallelism
I’m listing these two settings together because they’re both related to how SQL Server treats parallelism and parallel queries. By default, you can expect the Max Degree setting to be at 0 and the Cost Threshold at 5. Max Degree should be set based on the number of processors: zero means split the work across as many threads as there are processors available to SQL Server, while a defined value means using that specific number every time. There are performance reasons for limiting your Max Degree setting, and there are plenty of discussions, blogs and even book chapters written about it. To keep it simple, try to match the setting to the number of processors, but don’t go over 8.
As for the Cost Threshold, it represents the breaking point at which SQL Server decides whether or not to run a query in parallel. The default setting of 5 means that any query with an estimated cost of 5 or higher will run in parallel. Well, 5 is simply too low a value. This may have had some merit back in the ’90s when the cost metric was first introduced, but not anymore. Setting it to something like 25 is much better, while busy servers may even benefit from 50 or higher. This value is expressed as the estimated cost of the query, and you want to use parallelism only for your most expensive queries. A cost of 5 would pull many inexpensive queries into parallel processing and thus actually have a negative performance impact.
To make changes to both of these settings simultaneously, you would run the following command:
exec sp_configure 'cost threshold for parallelism', 25 --increased cost threshold
exec sp_configure 'max degree of parallelism', 4 --based on 4 processors
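Keep in mind the server-wide Max Degree setting can still be overridden per query with the MAXDOP hint, which is handy for the odd query that misbehaves under parallelism (the table name below is a placeholder):

```sql
-- Force this one query to run serially, regardless of the server setting
select count(*)
from dbo.SalesOrders
option (maxdop 1);
```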
Database Mail XPs
Database Mail is a feature built into SQL Server that can send out emails, for example when an Agent job fails. It works by talking to your SMTP server and sending messages to Operators, based on the configuration inside each job as well as the Operator configuration in SQL Agent. It is also OFF by default.
I highly recommend configuring this option if you have no other means of checking whether a job has failed. It’s best to know the moment a job fails rather than to find out later, when you log into SSMS and realize it’s been failing for weeks.
You can even set up Database Mail to work with Google Mail if you’d like; there will be an article on that shortly. Advanced configuration can be accessed through SSMS Object Explorer, but for starters, you can enable the feature with the following command:
exec sp_configure 'Database Mail XPs', 1
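Once a mail profile has been set up through Object Explorer, you can verify everything works by sending a test message with sp_send_dbmail. The profile name and recipient below are placeholders for whatever you configured:

```sql
-- Assumes a Database Mail profile named 'DBA Profile' already exists
exec msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Profile',
    @recipients   = 'dba@example.com',
    @subject      = 'Database Mail test',
    @body         = 'If you can read this, Database Mail works.';
```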
Fill factor (%)
Fill factor is a setting that dictates how much empty space will be reserved in each page that SQL Server creates. By default this option is 0, which means no empty space (pages are filled completely). The problem this creates is that the first time new data has to go into a completely full page, a page split will occur, because there’s nowhere for the new data to go.
This global setting is applied to all new indexes. You can override the server setting by specifying a fill factor in your rebuild command. You’d want to be careful with this setting, as it can actually hurt performance if used incorrectly. There are a handful of caveats regarding fill factor which are beyond the scope of this basic blog; in some cases the fill factor value does not apply at all, such as with heaps (tables without a clustered index). You can start with a safer value such as 90-95% and work your way down from there.
exec sp_configure 'fill factor (%)', 90
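Overriding the server-wide value for a single index in a rebuild looks like this (the index and table names are placeholders):

```sql
-- Leave 10% free space in each leaf page of this index only
alter index IX_Orders_OrderDate on dbo.Orders
rebuild with (fillfactor = 90);
```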
Min server memory (MB) AND Max server memory (MB)
These two options usually go together, and their names are pretty self-explanatory: they are, roughly, the minimum and maximum amounts of memory SQL Server is supposed to use. How they actually work is a bit more complex, but in a nutshell that is the simplest explanation.
Max memory is generally the more important of the two, because it limits the amount of RAM SQL Server can take. The default value is huge, 2147483647 megabytes to be exact. If you left this at the default, SQL Server would eventually consume all available RAM and starve the operating system.
Generally, you’d want to leave about 4GB of RAM for your OS, plus 1-2GB for every other service besides SQL Server, such as Reporting Services, Active Directory, etc. If your SQL Server has more than one role, it’s important to account for those other services when configuring the memory options.
The Min Memory setting works a little differently than Max, which simply sets an upper limit. Min memory isn’t necessarily a lower limit. To understand how it works, consider the following scenario: suppose you set your Min Memory value to 10GB. As SQL Server runs, it can take memory from the OS, but it can also give memory back when it’s done using it. If our SQL Server is consuming very little RAM, for example 5GB, then our setting has no effect. But let’s say there was a busy day at the office and SQL Server ramped its memory usage up to 15GB. Everyone went home and everything is settling down. SQL Server will now give back at most 5GB of RAM, because at that point it reaches the minimum limit of 10GB. This is why the default setting of 0 sometimes doesn’t work well: it means there’s no floor on the amount of memory SQL Server can give back. If something else takes up all the memory SQL returned, then SQL would no longer have that memory when things get busy again.
In general, these examples are rare and you often do not have to worry about Min Memory setting as much.
Now let’s configure the limits. Let’s say you have a total of 32GB of RAM on your server. To determine the Max Memory setting, we’ll first subtract the 4GB we’d like to leave for the OS. Let’s say that we also run Reporting Services, SSMS and a few other services/programs, which could take up another 2-4GB of RAM. This means our ideal max memory setting is around 24-26GB. We can always increase this value later, so it’s best to start with 24GB. The Min memory value we can set to something small, like 6GB, just so it’s not at 0.
exec sp_configure 'max server memory (MB)', 24000
exec sp_configure 'min server memory (MB)', 6000
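After running RECONFIGURE, it’s worth double-checking that both values took effect:

```sql
-- Compare the configured start-up values against what is currently in use
select name,
       value        as config_value,
       value_in_use as run_value
from sys.configurations
where name in ('min server memory (MB)', 'max server memory (MB)');
```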
A word of caution on adjusting the memory settings, especially the Max value: just like you can increase the value, you can also decrease it. Be careful when decreasing the memory, as it can have negative side effects. SQL Server may need to flush the entire plan cache in order to make that memory adjustment. It’s important to be prepared for this before you make the change, as things may get really slow for everyone right after the change is made.
This concludes Part 1 of the most common SP_CONFIGURE options, we’ll cover some more in Part 2. Stay tuned!