Find the best practices and recommendations to maximize performance for database applications that connect to SQL Server on Linux. These recommendations are specific to running on the Linux platform. All normal SQL Server recommendations, such as index design, still apply.
The following guidelines contain recommendations for configuring both SQL Server and the Linux operating system.
SQL Server configuration
It is recommended to perform the following configuration tasks after you install SQL Server on Linux to achieve best performance for your application.
Use PROCESS AFFINITY for Node and/or CPUs
It is recommended to use ALTER SERVER CONFIGURATION to set PROCESS AFFINITY for all the NUMANODEs and/or CPUs you are using for SQL Server (which is typically all NODEs and CPUs) on a Linux Operating System. Processor affinity helps maintain efficient Linux and SQL Server scheduling behavior. Using the NUMANODE option is the simplest method. Use PROCESS AFFINITY even if you have only a single NUMA node on your computer.
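For example, the following statement binds SQL Server to NUMA nodes 0 through 3; the node range is illustrative and should match the nodes actually present on your machine:

```sql
-- Bind the SQL Server process to NUMA nodes 0 through 3
-- (the range 0 TO 3 is an example; match it to your hardware)
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY NUMANODE = 0 TO 3;
```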
Configure multiple tempdb data files
Because a SQL Server on Linux installation does not offer an option to configure multiple tempdb files, we recommend that you consider creating multiple tempdb data files after installation.
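As a sketch, the following statements add data files to tempdb after installation. The file names, sizes, and the /var/opt/mssql/data path are examples; the number of data files is typically chosen based on the number of logical CPUs (commonly up to 8):

```sql
-- Add additional tempdb data files (names, sizes, and path are examples)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = '/var/opt/mssql/data/tempdev2.ndf', SIZE = 8MB, FILEGROWTH = 64MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = '/var/opt/mssql/data/tempdev3.ndf', SIZE = 8MB, FILEGROWTH = 64MB);
```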
The following recommendations are optional configuration settings that you may choose to perform after installation of SQL Server on Linux. These choices are based on the requirements of your workload and configuration of your Linux Operating System.
Set a memory limit with mssql-conf
To ensure there is enough free physical memory for the Linux Operating System, the SQL Server process uses only 80% of the physical RAM by default. For systems with a large amount of physical RAM, 20% might be a significant amount. For example, on a system with 1 TB of RAM, the default setting would leave around 200 GB of RAM unused. In this situation, you might want to configure the memory limit to a higher value. See the documentation on the mssql-conf tool and the memory.memorylimitmb setting, which controls the memory visible to SQL Server (in units of MB).
When changing this setting, be careful not to set this value too high. If you do not leave enough memory, you could experience problems with the Linux Operating System and other Linux applications.
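As an illustration, the commands below raise the memory limit to 921600 MB (900 GB); the value is an example only and must be sized for your system, and SQL Server must be restarted for the setting to take effect:

```shell
# Set the SQL Server memory limit to 921600 MB (900 GB) - example value
sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 921600

# Restart SQL Server so the new limit takes effect
sudo systemctl restart mssql-server
```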
Linux OS Configuration
Consider using the following Linux Operating System configuration settings to experience the best performance for a SQL Server Installation.
Kernel settings for high performance
These are the recommended Linux Operating System settings related to high performance and throughput for a SQL Server installation. See your Linux Operating System documentation for the process to configure these settings.
The following table provides recommendations for CPU settings:
|Setting|Value|More information|
|---|---|---|
|CPU frequency governor|performance|See the cpupower command|
|ENERGY_PERF_BIAS|performance|See the x86_energy_perf_policy command|
|min_perf_pct|100|See your documentation on intel p-state|
|C-States|C1 only|See your Linux or system documentation on how to ensure C-States is set to C1 only|
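On distributions that ship the cpupower and x86_energy_perf_policy utilities (package names vary by distribution), the CPU settings above can be applied as follows; treat this as a sketch and consult your distribution's documentation:

```shell
# Set the CPU frequency governor to performance on all cores
sudo cpupower frequency-set -g performance

# Set the energy/performance bias to maximum performance
sudo x86_energy_perf_policy performance

# On Intel p-state systems, raise the minimum performance percentage
sudo sh -c 'echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct'
```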
The following table provides recommendations for disk settings:
|Setting|Value|More information|
|---|---|---|
|disk readahead|4096|See the blockdev command|
|sysctl settings|kernel.sched_min_granularity_ns = 10000000<br />kernel.sched_wakeup_granularity_ns = 15000000<br />vm.dirty_ratio = 40<br />vm.dirty_background_ratio = 10<br />vm.swappiness = 10|See the sysctl command|
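These disk and kernel settings can be applied as shown below. The device name /dev/sdc is a placeholder for the disk that holds your SQL Server data and log files, and sysctl -w changes last only until the next reboot (add the values to /etc/sysctl.conf to make them permanent):

```shell
# Set readahead to 4096 sectors on the SQL Server data/log device (example device name)
sudo blockdev --setra 4096 /dev/sdc

# Apply the recommended kernel settings for the current boot
sudo sysctl -w kernel.sched_min_granularity_ns=10000000
sudo sysctl -w kernel.sched_wakeup_granularity_ns=15000000
sudo sysctl -w vm.dirty_ratio=40
sudo sysctl -w vm.dirty_background_ratio=10
sudo sysctl -w vm.swappiness=10
```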
Kernel setting auto numa balancing for multi-node NUMA systems
If you install SQL Server on a multi-node NUMA system, the kernel.numa_balancing kernel setting is enabled by default. To allow SQL Server to operate at maximum efficiency on a NUMA system, disable auto NUMA balancing on a multi-node NUMA system:
```shell
sysctl -w kernel.numa_balancing=0
```
Kernel settings for Virtual Address Space
The default setting of vm.max_map_count (which is 65536) may not be high enough for a SQL Server installation. For this reason, change the vm.max_map_count value to at least 262144 for a SQL Server deployment. The maximum value for vm.max_map_count is 2147483647.
```shell
sysctl -w vm.max_map_count=1600000
```
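The sysctl -w command above applies only until the next reboot. To make the setting permanent, a common approach is to add it to /etc/sysctl.conf and reload, for example:

```shell
# Persist the setting across reboots
echo "vm.max_map_count=1600000" | sudo tee -a /etc/sysctl.conf

# Reload sysctl settings from the configuration file
sudo sysctl -p
```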
Proposed Linux settings using a tuned mssql profile
```ini
#
# A tuned configuration for SQL Server on Linux
#
[main]
summary=Optimize for Microsoft SQL Server

[sysctl]
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
# For multi-instance SQL Server deployments, use transparent_hugepages=madvise
# in a [vm] section (see the Transparent Huge Pages section below)
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.sched_latency_ns = 60000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 15000000
kernel.sched_wakeup_granularity_ns = 2000000
```
To enable this tuned profile, save these definitions in a tuned.conf file under a /usr/lib/tuned/mssql folder and enable the profile:

```shell
chmod +x /usr/lib/tuned/mssql/tuned.conf
tuned-adm profile mssql
```

Verify that the profile is active:

```shell
tuned-adm active
```
Disable last accessed date/time on file systems for SQL Server data and log files
Use the noatime attribute with any file system that is used to store SQL Server data and log files. Refer to your Linux documentation on how to set this attribute.
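For example, an /etc/fstab entry with the noatime option might look like the following; the device, mount point, and file system type are placeholders for your configuration:

```shell
# Example /etc/fstab entry for a SQL Server data volume (device and paths are illustrative)
# /dev/sdc1  /var/opt/mssql  xfs  defaults,noatime  0 0

# Or remount an existing file system with noatime for the current session
sudo mount -o remount,noatime /var/opt/mssql
```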
Leave Transparent Huge Pages (THP) enabled
Most Linux installations should have this option on by default. For the most consistent performance experience, we recommend leaving this configuration option enabled. However, if there is high memory paging activity, for example in SQL Server deployments with multiple instances or where SQL Server runs alongside other memory-demanding applications on the server, we suggest testing your application's performance after executing the following command:
```shell
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```
or modifying the mssql tuned profile by adding the line transparent_hugepages=madvise in a [vm] section, and then making the mssql profile active again after the modification:

```shell
tuned-adm profile mssql
```
Ensure you have a properly configured swapfile to avoid any out of memory issues. Consult your Linux documentation for how to create and properly size a swapfile.
Virtual Machines and Dynamic Memory
If you are running SQL Server on Linux in a virtual machine, ensure you select options to fix the amount of memory reserved for the virtual machine. Do not use features like Hyper-V Dynamic Memory.
If you are a Linux user who is new to SQL Server, the following tasks walk you through some of the performance features. These features are not unique or specific to Linux, but reviewing them helps give you an idea of areas to investigate further. In each example, a link is provided to the in-depth documentation for that area.
Create a Columnstore Index
A columnstore index is a technology for storing and querying large stores of data in a columnar data format, called a columnstore.
- Add a Columnstore index to the SalesOrderDetail table by executing the following Transact-SQL commands:
```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_SalesOrderDetail_ColumnStore]
   ON Sales.SalesOrderDetail
   (UnitPrice, OrderQty, ProductID);
```
- Execute the following query that uses the Columnstore Index to scan the table:
```sql
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
   SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM Sales.SalesOrderDetail
GROUP BY ProductID
ORDER BY ProductID;
```
- Verify that the Columnstore Index was used by looking up the object_id for the Columnstore index and confirming that it appears in the usage stats for the SalesOrderDetail table:
```sql
SELECT * FROM sys.indexes WHERE name = 'IX_SalesOrderDetail_ColumnStore';

SELECT * FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('AdventureWorks')
AND object_id = OBJECT_ID('AdventureWorks.Sales.SalesOrderDetail');
```
Use In-Memory OLTP
SQL Server provides In-Memory OLTP features that can greatly improve the performance of application systems. This section of the Evaluation Guide will walk you through the steps to create a memory-optimized table stored in memory and a natively compiled stored procedure that can access the table without needing to be compiled or interpreted.
Configure Database for In-Memory OLTP
- It’s recommended to set the database to a compatibility level of at least 130 to use In-Memory OLTP. Use the following query to check the current compatibility level of AdventureWorks:
```sql
SELECT d.compatibility_level
FROM sys.databases AS d
WHERE d.name = Db_Name();
```
If necessary, update the level to 130:
```sql
ALTER DATABASE CURRENT
SET COMPATIBILITY_LEVEL = 130;
```
- When a transaction involves both a disk-based table and a memory-optimized table, it’s essential that the memory-optimized portion of the transaction operate at the transaction isolation level named SNAPSHOT. To reliably enforce this level for memory-optimized tables in a cross-container transaction, execute the following:
```sql
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
```
- Before you can create a memory-optimized table you must first create a Memory Optimized FILEGROUP and a container for data files:
```sql
ALTER DATABASE AdventureWorks ADD FILEGROUP AdventureWorks_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE AdventureWorks ADD FILE (NAME='AdventureWorks_mod', FILENAME='/var/opt/mssql/data/AdventureWorks_mod') TO FILEGROUP AdventureWorks_mod;
```
Create a Memory-Optimized Table
The primary store for memory-optimized tables is main memory and so unlike disk-based tables, data does not need to be read in from disk into memory buffers. To create a memory-optimized table, use the MEMORY_OPTIMIZED = ON clause.
- Execute the following query to create the memory-optimized table dbo.ShoppingCart. As a default, the data will be persisted on disk for durability purposes (Note that DURABILITY can also be set to persist the schema only).
```sql
CREATE TABLE dbo.ShoppingCart (
   ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
   UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
   CreatedDate DATETIME2 NOT NULL,
   TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON);
```
- Insert some records into the table:
```sql
INSERT dbo.ShoppingCart VALUES (8798, SYSDATETIME(), NULL);
INSERT dbo.ShoppingCart VALUES (23, SYSDATETIME(), 45.4);
INSERT dbo.ShoppingCart VALUES (80, SYSDATETIME(), NULL);
INSERT dbo.ShoppingCart VALUES (342, SYSDATETIME(), 65.4);
```
Natively compiled Stored Procedure
SQL Server supports natively compiled stored procedures that access memory-optimized tables. The T-SQL statements are compiled to machine code and stored as native DLLs, enabling faster data access and more efficient query execution than traditional T-SQL. Stored procedures that are marked with NATIVE_COMPILATION are natively compiled.
- Execute the following script to create a natively compiled stored procedure that inserts a large number of records into the ShoppingCart table:
```sql
CREATE PROCEDURE dbo.usp_InsertSampleCarts @InsertCount INT
WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
   DECLARE @i INT = 0;
   WHILE @i < @InsertCount
   BEGIN
      INSERT INTO dbo.ShoppingCart VALUES (1, SYSDATETIME(), NULL);
      SET @i += 1;
   END
END
```
- Insert 1,000,000 rows:
```sql
EXEC usp_InsertSampleCarts 1000000;
```
- Verify the rows have been inserted:
```sql
SELECT COUNT(*) FROM dbo.ShoppingCart;
```
Learn More About In-Memory OLTP
Use Query Store
Query Store collects detailed performance information about queries, execution plans, and runtime statistics.
Query Store is not active by default and can be enabled with ALTER DATABASE:
```sql
ALTER DATABASE AdventureWorks SET QUERY_STORE = ON;
```
Run the following query to return information about queries and plans in the query store:
```sql
SELECT Txt.query_text_id, Txt.query_sql_text, Pl.plan_id, Qry.*
FROM sys.query_store_plan AS Pl
JOIN sys.query_store_query AS Qry
    ON Pl.query_id = Qry.query_id
JOIN sys.query_store_query_text AS Txt
    ON Qry.query_text_id = Txt.query_text_id;
```
Query Dynamic Management Views
Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
To query the dm_os_wait_stats dynamic management view:

```sql
SELECT wait_type, wait_time_ms
FROM sys.dm_os_wait_stats;
```