If you have ever worked with an SQL server, be it as a sysadmin, DBA, or a developer, you might have noticed that SQL server often takes up a lot of memory on the server.
Most of the time, we put a top limit on the SQL server's memory because someone on the internet told us to.
But why?
Why do we put a top limit there? And should we put a top limit on the amount of memory SQL server consumes?
To answer these questions and make an educated decision next time, we should know why SQL server utilizes so much memory.
Why is SQL server so memory intensive?
As with any other server application, one of the most important objectives of an SQL server is high performance. Consider a scenario where you are storing terabytes of data in your database.
How much optimization do you think SQL server would have to do to ensure that it does not become a bottleneck to your application’s performance?
Now indexing could be a solution to this – it is a way to increase the performance of the SQL server.
If you go through my previous post on indexing, you’ll realize that disk operations are bad for speed. They are very expensive in terms of resources and time.
So the solution is to utilize memory as much as possible to store data – more like a cache.
And this is exactly what SQL server does!
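If you want to see this data cache in action, here is a small sketch using the standard sys.dm_os_buffer_descriptors DMV, which shows how much of the buffer pool each database is currently occupying (every buffered page is 8 KB):

```sql
-- How much of the buffer pool (the SQL server's data cache)
-- each database is currently occupying; each page is 8 KB.
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```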
Another classic example is the plan cache.
Consider that you have a query – a very long and complex one. As soon as you run that query, the SQL server will roll up its sleeves and get to work. It will create an execution plan for the query. This plan is then saved in memory, so that the next time the same query runs, we don't have to do all that processing again. This saves a lot of time.
Now, if you reduce the memory that the SQL server can use by setting a ceiling value, it might not be able to keep this plan around. That means the next time the query is executed, all the processing has to be done over again – which sometimes becomes a problem for the CPU as well.
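As a quick sketch, you can peek into the plan cache with the standard sys.dm_exec_cached_plans DMV and see which plans are being reused and how much memory they occupy:

```sql
-- Ten most reused plans in the plan cache, with the text
-- of the query each plan belongs to.
SELECT TOP (10)
    cp.usecounts,       -- how many times this plan was reused
    cp.size_in_bytes,   -- memory the plan occupies
    st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```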
If reducing memory negatively impacts us, then why do it? Why not let the SQL server use memory freely?
The most obvious answer is that the OS needs memory too. Although this is true, it is not the main reason here. To understand why we limit it, let's first understand how the SQL server takes up this memory.
Dynamic memory management – How does SQL server allocate and deallocate memory?
Whenever we start the SQL server, it will take up only a very small amount of memory for itself – just like any good (but sadly non-existent) application.
After that, as load increases, it keeps increasing the memory it acquires. And if the load decreases, it even frees some of that memory.
This process is called dynamic memory management.
During this whole time, the SQL server is continuously in contact with the OS. The OS tells the SQL server the amount of available memory.
This helps the SQL server ensure that it does not eat up the entire memory. It therefore prevents starving the OS, which ultimately prevents the entire system from crashing.
Sometimes, it can take up more memory than it should and impact the memory requirements of the OS. But it is quick to correct this error and ensure the stability of the entire system.
Hence, the OS is not the main reason. Then what is?
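You can actually see what the OS is reporting to the SQL server through the standard sys.dm_os_sys_memory DMV – a rough sketch:

```sql
-- What the OS reports to the SQL server about system memory.
SELECT
    total_physical_memory_kb / 1024     AS total_physical_mb,
    available_physical_memory_kb / 1024 AS available_physical_mb,
    system_memory_state_desc            -- e.g. 'Available physical memory is high'
FROM sys.dm_os_sys_memory;
```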
We’ll find that out along the way, but first, let’s cover a few basics on controlling the memory allocation of the SQL server.
How can we control the memory that SQL server acquires?
SQL Server provides us with two options: min server memory and max server memory. Using these options, we can set limits on the memory that the SQL server can use.
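Both options live under sp_configure. As a sketch, this is how you would check their current values (they are advanced options, so they have to be exposed first):

```sql
-- min/max server memory are advanced options,
-- so expose them before reading their values.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';
```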
min server memory
Min server memory is used to set the minimum memory that a SQL server should have.
But a question worth asking here is: if the SQL server takes its share of memory automatically through dynamic memory management, why do we even need to set min server memory?
Won’t SQL server just take as much memory as it needs?
The answer is yes – it will take as much memory as it needs.
But what if we have multiple server applications running on the same server? Now it may be possible that some other application is using a lot of memory resources and the SQL server is not able to get its fair share.
This will impact the overall performance of the SQL server.
We therefore set a minimum amount of memory that is, in a sense, locked by the SQL server so that no one else can use it.
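Setting it is a one-liner. Here is a sketch that reserves a 16 GB floor – 16384 MB is just an illustrative value, not a recommendation:

```sql
-- Reserve a 16 GB floor for the SQL server
-- (assumes 'show advanced options' is already enabled).
EXEC sp_configure 'min server memory (MB)', 16384;
RECONFIGURE;
```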
Now let's assume that the SQL server needs only 16 GB of memory, but we reserved 32 GB for it. Won't that impact other applications?
Well, even if it doesn't, it's a complete waste of 16 GB of memory – more than what my laptop currently has :p
We might make this mistake, but the SQL server won’t!
It won't take up the memory unless it feels that it actually needs that much. Let me explain.
Say, for example, your current workload needs only 8 GB of memory and you have set min server memory to 16 GB. The SQL server will acquire only 8 GB.
Now say two hours later, the traffic is completely gone and so the SQL server needs only 4GB of memory. It should now deallocate some memory, right?
Yes – but it won’t.
Because the SQL server now knows that anything up to 16 GB is rightfully its own, it won't let go of any share it has acquired. It won't grab all 16 GB until it actually needs that much, but whatever it has acquired within that limit, it won't give back.
After two more hours, traffic surges very high and the SQL server ends up consuming 20 GB of RAM. As soon as the traffic is gone, it will start deallocating memory, but it won't go below 16 GB.
It will free 4 GB of memory, but hold on to the remaining 16 GB, as that is rightfully its own.
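If you want to watch this behaviour yourself, the standard sys.dm_os_process_memory DMV shows what the SQL server process is actually holding at any given moment:

```sql
-- Memory the SQL server process is holding right now.
SELECT
    physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
    memory_utilization_percentage
FROM sys.dm_os_process_memory;
```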
max server memory
Just like we can set the minimum memory SQL server can acquire, we can also set a top limit to the amount of memory that the SQL server can use.
We do this using the max server memory option.
Now no matter what happens and how much memory SQL server needs, it can’t ever go beyond this level. In some cases, this is good to have while in others it’s a must-have configuration.
Now let’s get back to our previous question, why do we need to apply a ceiling value to the memory that the SQL server uses?
I mean, letting it use memory freely has its benefits – it improves performance – and although it might sometimes impact OS processes, that lasts only for a moment or two.
Consider a scenario in which you have an SQL server and SSRS (SQL Server Reporting Services) installed on the same machine. If the SQL server uses all the memory, leaving only the bare minimum that the OS needs, what do you think will happen when SSRS tries to start?
It won’t get enough memory!
This will cause a problem during SSRS initialization, and SSRS might crash or fail to start. This is an issue because our dear SQL server won't budge a single KB for SSRS.
So what do we do? We tell the SQL server: okay, you take 100 GB of memory out of the total 128 GB, and let the other 28 GB be available for the OS and any other applications like SSRS.
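In sp_configure terms, that conversation looks something like this sketch (100 GB is 102400 MB):

```sql
-- Cap the SQL server at 100 GB, leaving the remaining
-- 28 GB for the OS and applications such as SSRS.
EXEC sp_configure 'max server memory (MB)', 102400;
RECONFIGURE;
```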
One thing you might notice is that if you set max memory to 101,000 MB, the SQL server may end up using more memory than that – for example, 101,548 MB.
But why?
Because max memory puts a limit on the buffer pool – the largest single consumer of memory. But there are a few other processes and tasks in the SQL server that take a (usually negligible) amount of memory and are not bound by this constraint.
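You can see those other consumers through the standard sys.dm_os_memory_clerks DMV. Here is a sketch that lists the biggest ones – the buffer pool clerk will normally dominate:

```sql
-- Largest memory consumers ("clerks") inside the SQL server;
-- MEMORYCLERK_SQLBUFFERPOOL is usually at the top.
SELECT TOP (10)
    type,
    SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;
```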
An important point to note here is that min and max server memory are always specified in MB.
A corner case!
If you are connected to computers and programming in even the remotest way, you will know about corner cases. They just come up out of nowhere and break everything.
What do you think would happen when you set min and max values to be equal?
That would be the end of dynamic memory management – our SQL server will then run on exactly that much memory for the rest of its life.
But only once it reaches that point in the first place – the min value alone doesn't make it acquire that much memory, so it still has room to grow until it hits that one particular sweet spot. From then on, its memory footprint is fixed.
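As a sketch, pinning the memory like that would look as follows – 16 GB is again just an example value:

```sql
-- Pin the SQL server's memory: once it grows to 16 GB,
-- it will neither shrink below nor grow above that point.
EXEC sp_configure 'min server memory (MB)', 16384;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```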