Yesterday I was called by a customer complaining about a performance issue. Sounds familiar?
The current activity showed me lots of processes waiting for resource semaphores.
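A query along these lines (a sketch, not necessarily the exact one I ran) surfaces the sessions stuck waiting on a memory grant:

```sql
-- Sessions currently waiting on a query memory grant
SELECT session_id,
       wait_type,
       wait_time,              -- ms spent waiting
       granted_query_memory,   -- in 8KB pages; 0 = nothing granted yet
       command
FROM sys.dm_exec_requests
WHERE wait_type = 'RESOURCE_SEMAPHORE';
```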
The first line (session_id 600) caught my attention. This is a SQL Server maintenance job that checks the integrity of the system databases. The job history showed this process took a few seconds until about one month ago (which is normal), but runtimes then started to increase, reaching over 2 days during the last week.
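The runtime trend can be pulled from the job history in msdb; the job name below is a placeholder for the actual maintenance job, so adjust it to your environment:

```sql
-- Run durations of the maintenance job over time
-- N'Check system databases integrity' is a hypothetical job name
SELECT j.name,
       h.run_date,
       h.run_duration   -- encoded as HHMMSS
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory h
  ON h.job_id = j.job_id
WHERE j.name = N'Check system databases integrity'
  AND h.step_id = 0    -- job outcome rows only
ORDER BY h.run_date DESC, h.run_time DESC;
```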
The resource semaphores explained why end-users received time-outs, but what I didn't understand was that apparently no running processes were consuming their granted memory (granted_query_memory), so why would others have to wait to get their resource semaphores granted? What's more, there was over 6GB of available memory for resource_semaphore_id = 0.
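The per-semaphore numbers come from the sys.dm_exec_query_resource_semaphores DMV; something like this shows how much memory each semaphore has available versus granted, and how many sessions are queued behind it:

```sql
-- Memory grant state per resource semaphore
SELECT resource_semaphore_id,
       target_memory_kb,
       available_memory_kb,   -- here: over 6GB free on semaphore 0
       granted_memory_kb,
       grantee_count,
       waiter_count
FROM sys.dm_exec_query_resource_semaphores;
```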
So I went looking at the sys.dm_exec_query_memory_grants DMV and found out session_id 600 was the first candidate to get memory granted. But it requested 1.7GB of memory, which sounded like far too much to perform a DBCC CHECKTABLE on one of the system databases.
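A sketch of that check — the DMV lists every pending and granted memory grant, and wait_order tells you who is next in line:

```sql
-- Pending and granted memory grants, queue order first
SELECT session_id,
       requested_memory_kb,   -- session 600 asked for ~1.7GB here
       granted_memory_kb,     -- NULL while still waiting
       is_next_candidate,
       wait_time_ms,
       resource_semaphore_id
FROM sys.dm_exec_query_memory_grants
ORDER BY wait_order;
```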
Doing some more memory investigation, I found that only 620MB was allocated to the buffer cache, which is far below expectations.
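One way to measure the buffer cache size (an approximation: it counts pages currently in the buffer pool, at 8KB each):

```sql
-- Approximate buffer cache size in MB
SELECT COUNT(*) * 8 / 1024 AS buffer_cache_mb
FROM sys.dm_os_buffer_descriptors;
```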
When looking at the memory clerks, I was expecting some CACHESTORE_xxxxxx counters, but instead got some unexpectedly high values for MEMORYCLERK_SQLBUFFERPOOL and especially MEMORYCLERK_XE (5.6GB).
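The clerk overview came from a query along these lines (note that the pages_kb column exists from SQL Server 2012 onwards; older versions split it into single_pages_kb and multi_pages_kb):

```sql
-- Top memory consumers by clerk type
SELECT TOP (10)
       [type],
       SUM(pages_kb) / 1024 AS size_mb   -- MEMORYCLERK_XE showed 5.6GB
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;
```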
This behaviour is generated by extended events, most likely created by SharePoint monitoring, that log information into memory ring buffers. Disabling the ring buffers, or configuring them correctly, should solve this issue.
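To see which extended events sessions are actually using ring buffer targets, you can join the XE session DMVs, roughly like this:

```sql
-- Running XE sessions that write to an in-memory ring buffer target
SELECT s.name AS session_name,
       t.target_name
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t
  ON t.event_session_address = s.address
WHERE t.target_name = N'ring_buffer';
```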
Wrong: this turned out to be a known issue with SharePoint on SQL Server 2012 SP1. After installing SP2 and restarting the instance, the problem was solved. However, I suggested keeping an eye on the memory clerks for another few weeks.
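As a quick sanity check after patching, the installed build and service pack level can be confirmed with:

```sql
-- Verify the service pack level after installing SP2
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;  -- e.g. 'SP2'
```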