
Best practices with Query Store

    This article outlines best practices for using SQL Server Query Store with your workload.

    Use the latest SQL Server Management Studio

    SQL Server Management Studio has a set of user interfaces designed for configuring Query Store and for consuming collected data about your workload.

    Use Query Performance Insight in Azure SQL Database

    If you run Query Store in Azure SQL Database, you can use Query Performance Insight to analyze resource consumption over time. While you can use Management Studio and Azure Data Studio to get detailed resource consumption for all your queries, such as CPU, memory, and I/O, Query Performance Insight gives you a quick and efficient way to determine their impact on overall DTU consumption for your database.

    This section describes optimal configuration defaults that are designed to ensure reliable operation of Query Store and dependent features. The default configuration is optimized for continuous data collection, that is, minimal time spent in the OFF or READ_ONLY states.

    Configuration | Description | Default | Comment
    MAX_STORAGE_SIZE_MB | Specifies the limit for the data space that Query Store can take inside the customer database | 100 | Enforced for new databases
    INTERVAL_LENGTH_MINUTES | Defines the size of the time window during which collected runtime statistics for query plans are aggregated and persisted. Every active query plan has at most one row for a period of time defined with this configuration | 60 | Enforced for new databases
    STALE_QUERY_THRESHOLD_DAYS | Time-based cleanup policy that controls the retention period of persisted runtime statistics and inactive queries | 30 | Enforced for new databases and databases with the previous default (367)
    SIZE_BASED_CLEANUP_MODE | Specifies whether automatic data cleanup takes place when Query Store data size approaches the limit | AUTO | Enforced for all databases
    QUERY_CAPTURE_MODE | Specifies whether all queries or only a subset of queries are tracked | AUTO | Enforced for all databases
    FLUSH_INTERVAL_SECONDS | Specifies the maximum period during which captured runtime statistics are kept in memory before flushing to disk | 900 | Enforced for new databases

    These defaults are automatically applied in the final stage of Query Store activation in all Azure SQL databases. After Query Store is enabled, Azure SQL Database doesn't change configuration values that are set by customers, unless they negatively impact the primary workload or reliable operation of Query Store.
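    To check which of these values are currently in effect for a specific database, you can read them from sys.database_query_store_options. The following is a minimal sketch; the database name is a placeholder:

    SQL

    USE [QueryStoreDB];
    GO
    -- Current Query Store settings that correspond to the defaults listed above.
    SELECT max_storage_size_mb, interval_length_minutes, stale_query_threshold_days,
        size_based_cleanup_mode_desc, query_capture_mode_desc, flush_interval_seconds
    FROM sys.database_query_store_options;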

    Use Query Store with Elastic Pool databases

    You can use Query Store in all databases without concern, even in densely packed pools. All issues related to excessive resource usage that might have occurred when Query Store was enabled for a large number of databases in elastic pools have been resolved.

    Keep Query Store adjusted to your workload

    Configure Query Store based on your workload and performance troubleshooting requirements. The default parameters are good enough to start, but you should monitor how Query Store behaves over time and adjust its configuration accordingly.

    Here are guidelines to follow for setting parameter values:

    Max Size (MB): Specifies the limit for the data space that Query Store takes inside your database. This is the most important setting that directly affects the operation mode of Query Store.

    While Query Store collects queries, execution plans, and statistics, its size in the database grows until this limit is reached. When that happens, Query Store automatically changes the operation mode to read-only and stops collecting new data, which means that your performance analysis is no longer accurate.

    The default value in SQL Server 2016 (13.x) and SQL Server 2017 (14.x) is 100 MB. This size might not be sufficient if your workload generates a large number of different queries and plans or if you want to keep query history for a longer period of time. Starting with SQL Server 2019 (15.x), the default value is 1 GB. Keep track of current space usage and increase the Max Size (MB) value to prevent Query Store from transitioning to read-only mode.

    The Max Size (MB) limit isn’t strictly enforced. Storage size is checked only when Query Store writes data to disk. This interval is set by the Data Flush Interval (Minutes) option. If Query Store has breached the maximum size limit between storage size checks, it transitions to read-only mode. If Size Based Cleanup Mode is enabled, the cleanup mechanism to enforce the maximum size limit is also triggered.

    Use Management Studio or execute the following script to get the latest information about Query Store size:

    SQL

    USE [QueryStoreDB];
    GO
    SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
        max_storage_size_mb, readonly_reason
    FROM sys.database_query_store_options;

    The following script sets a new value for Max Size (MB):

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024);

    Data Flush Interval (Minutes): Defines the frequency at which collected runtime statistics are persisted to disk. It's expressed in minutes in the graphical user interface (GUI), but in Transact-SQL it's expressed in seconds. The default is 900 seconds, which is 15 minutes in the GUI. Consider using a higher value if your workload doesn't generate a large number of different queries and plans, or if you can withstand a longer time to persist data before a database shutdown.

    Use SQL Server Management Studio or Transact-SQL to set a different value for Data Flush Interval:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (DATA_FLUSH_INTERVAL_SECONDS = 900);

    Statistics Collection Interval: Defines the level of granularity for the collected runtime statistics, expressed in minutes. The default is 60 minutes. Consider using a lower value if you require finer granularity or less time to detect and mitigate issues. Keep in mind that the value directly affects the size of Query Store data. Use SQL Server Management Studio or Transact-SQL to set a different value for Statistics Collection Interval:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 60);

    Stale Query Threshold (Days): Time-based cleanup policy that controls the retention period of persisted runtime statistics and inactive queries, expressed in days. By default, Query Store is configured to keep the data for 30 days, which might be unnecessarily long for your scenario.

    Avoid keeping historical data that you don't plan to use. This practice reduces the chance of transitioning to read-only status, and the size of Query Store data and the time to detect and mitigate an issue become more predictable. Use Management Studio or the following script to configure the time-based cleanup policy:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 90));

    Size Based Cleanup Mode: Specifies whether automatic data cleanup takes place when Query Store data size approaches the limit. Activate size-based cleanup to make sure that Query Store always runs in read-write mode and collects the latest data.

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (SIZE_BASED_CLEANUP_MODE = AUTO);

    Query Store Capture Mode: Specifies the query capture policy for Query Store.

    • All: Captures all queries. This option is the default in SQL Server 2016 (13.x) and SQL Server 2017 (14.x).
    • Auto: Infrequent queries and queries with insignificant compile and execution duration are ignored. Thresholds for execution count, compile, and runtime duration are internally determined. Starting with SQL Server 2019 (15.x), this is the default option.
    • None: Query Store stops capturing new queries.
    • Custom: Allows additional control and the capability to fine-tune the data collection policy. The new custom settings define what happens during the internal capture policy time threshold. This is a time boundary during which the configurable conditions are evaluated and, if any are true, the query is eligible to be captured by Query Store.

    The following script sets QUERY_CAPTURE_MODE to AUTO:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);

    Examples

    The following example sets QUERY_CAPTURE_MODE to AUTO and sets other recommended options in SQL Server 2016 (13.x):

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE = ON
        (
          OPERATION_MODE = READ_WRITE,
          CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 ),
          DATA_FLUSH_INTERVAL_SECONDS = 900,
          QUERY_CAPTURE_MODE = AUTO,
          MAX_STORAGE_SIZE_MB = 1000,
          INTERVAL_LENGTH_MINUTES = 60
        );

    The following example sets QUERY_CAPTURE_MODE to AUTO and sets other recommended options in SQL Server 2017 (14.x) to include wait statistics:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE = ON
        (
          OPERATION_MODE = READ_WRITE,
          CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 ),
          DATA_FLUSH_INTERVAL_SECONDS = 900,
          QUERY_CAPTURE_MODE = AUTO,
          MAX_STORAGE_SIZE_MB = 1000,
          INTERVAL_LENGTH_MINUTES = 60,
          SIZE_BASED_CLEANUP_MODE = AUTO,
          MAX_PLANS_PER_QUERY = 200,
          WAIT_STATS_CAPTURE_MODE = ON
        );

    The following example sets QUERY_CAPTURE_MODE to AUTO and sets other recommended options in SQL Server 2019 (15.x), and optionally sets the CUSTOM capture policy with its defaults, instead of the new default AUTO capture mode:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE = ON
        (
          OPERATION_MODE = READ_WRITE,
          CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 ),
          DATA_FLUSH_INTERVAL_SECONDS = 900,
          MAX_STORAGE_SIZE_MB = 1000,
          INTERVAL_LENGTH_MINUTES = 60,
          SIZE_BASED_CLEANUP_MODE = AUTO,
          MAX_PLANS_PER_QUERY = 200,
          WAIT_STATS_CAPTURE_MODE = ON,
          QUERY_CAPTURE_MODE = CUSTOM,
          QUERY_CAPTURE_POLICY = (
            STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,
            EXECUTION_COUNT = 30,
            TOTAL_COMPILE_CPU_TIME_MS = 1000,
            TOTAL_EXECUTION_CPU_TIME_MS = 100
          )
        );

    Start with query performance troubleshooting

    The troubleshooting workflow with Query Store is straightforward:

    Enable Query Store by using Management Studio, as described in the previous section, or execute the following Transact-SQL statement:

    SQL

    ALTER DATABASE [DatabaseOne] SET QUERY_STORE = ON;

    It takes some time until Query Store collects the data set that accurately represents your workload. Usually, one day is enough even for very complex workloads. However, you can start exploring the data and identify queries that need your attention immediately after you enable the feature. Go to the Query Store subfolder under the database node in Object Explorer of Management Studio to open troubleshooting views for specific scenarios.

    Management Studio Query Store views operate with the set of execution metrics, each expressed as any of the following statistic functions:

    SQL Server version | Execution metric | Statistic function
    SQL Server 2016 (13.x) | CPU time, Duration, Execution count, Logical reads, Logical writes, Memory consumption, Physical reads, CLR time, Degree of parallelism (DOP), and Row count | Average, Maximum, Minimum, Standard Deviation, Total
    SQL Server 2017 (14.x) | CPU time, Duration, Execution count, Logical reads, Logical writes, Memory consumption, Physical reads, CLR time, Degree of parallelism, Row count, Log memory, TempDB memory, and Wait times | Average, Maximum, Minimum, Standard Deviation, Total


    The following table explains when to use each of the Query Store views:

    SQL Server Management Studio view | Scenario
    Regressed Queries | Pinpoint queries for which execution metrics have recently regressed (that is, changed for the worse). Use this view to correlate observed performance problems in your application with the actual queries that need to be fixed or improved.
    Overall Resource Consumption | Analyze the total resource consumption for the database for any of the execution metrics. Use this view to identify resource patterns (daily vs. nightly workloads) and optimize overall consumption for your database.
    Top Resource Consuming Queries | Choose an execution metric of interest, and identify queries that had the most extreme values for a provided time interval. Use this view to focus your attention on the most relevant queries that have the biggest impact on database resource consumption.
    Queries With Forced Plans | Lists plans previously forced by using Query Store. Use this view to quickly access all currently forced plans.
    Queries With High Variation | Analyze queries with high execution variation as it relates to any of the available dimensions, such as Duration, CPU time, IO, and Memory usage, in the desired time interval. Use this view to identify queries with widely variant performance that can be affecting user experience across your applications.
    Query Wait Statistics | Analyze wait categories that are most active in a database and which queries contribute most to the selected wait category. Use this view to analyze wait statistics and identify queries that might be affecting user experience across your applications. Applies to: Starting with SQL Server Management Studio v18.0 and SQL Server 2017 (14.x).
    Tracked Queries | Track the execution of the most important queries in real time. Typically, you use this view when you have queries with forced plans and you want to make sure that query performance is stable.
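    These views read from the Query Store catalog views, so you can also explore the same data with Transact-SQL. The following is a minimal sketch, not the exact query behind the Top Resource Consuming Queries view: it lists the ten queries with the highest total CPU time recorded over the last 24 hours, and the database name is a placeholder.

    SQL

    USE [QueryStoreDB];
    GO
    -- Top 10 queries by total CPU time captured by Query Store in the last 24 hours.
    SELECT TOP (10)
        q.query_id,
        qt.query_sql_text,
        SUM(rs.count_executions) AS total_executions,
        SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
    JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
    JOIN sys.query_store_runtime_stats_interval AS rsi
        ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
    WHERE rsi.start_time > DATEADD(hour, -24, GETUTCDATE())
    GROUP BY q.query_id, qt.query_sql_text
    ORDER BY total_cpu_time DESC;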

    When you identify a query with suboptimal performance, your action depends on the nature of the problem.

    • If the query was executed with multiple plans and the last plan is significantly worse than the previous one, you can use the plan forcing mechanism to force the better plan. SQL Server tries to force the plan in the optimizer. If plan forcing fails, an XEvent is fired and the optimizer is instructed to optimize in the normal way.
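    For example, after you identify the query_id and the plan_id of the better plan in the Query Store views, you can force (and later unforce) that plan with the Query Store stored procedures. The identifier values below are placeholders:

    SQL

    -- Force the plan with plan_id 49 for the query with query_id 48 (placeholder values).
    EXEC sp_query_store_force_plan @query_id = 48, @plan_id = 49;
    -- If the forced plan is no longer appropriate, remove the forcing.
    -- EXEC sp_query_store_unforce_plan @query_id = 48, @plan_id = 49;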

    The plan summary chart in these views can show different shapes for specific query plans, with the following meanings for each possible status:

    Shape | Meaning
    Circle | Query completed, which means that a regular execution successfully finished.
    Square | Canceled, which means that a client-initiated abort ended the execution.
    Triangle | Failed, which means that an exception aborted the execution.

    Also, the size of the shape reflects the query execution count within the specified time interval. The size increases with a higher number of executions.

    • You might conclude that your query is missing an index for optimal execution. This information is surfaced within the query execution plan. Create the missing index, and check the query performance by using Query Store.

    If you run your workload on SQL Database, sign up for SQL Database Index Advisor to automatically receive index recommendations.

    • In some cases, you might enforce statistics recompilation if you see that the difference between the estimated and the actual number of rows in the execution plan is significant.
    • Rewrite problematic queries, for example, to take advantage of query parameterization or to implement more optimal logic.

    Verify that Query Store collects query data continuously

    Query Store can silently change the operation mode. Regularly monitor the state of Query Store to ensure that Query Store is operating, and to take action to avoid failures due to preventable causes. Execute the following query to determine the operation mode and view the most relevant parameters:

    SQL

    USE [QueryStoreDB];
    GO
    SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
        max_storage_size_mb, readonly_reason, interval_length_minutes,
        stale_query_threshold_days, size_based_cleanup_mode_desc,
        query_capture_mode_desc
    FROM sys.database_query_store_options;

    A difference between actual_state_desc and desired_state_desc indicates that the operation mode was changed automatically. The most common change is for Query Store to silently switch to read-only mode. In extremely rare circumstances, Query Store can end up in the ERROR state because of internal errors.

    When the actual state is read-only, use the readonly_reason column to determine the root cause. Typically, you find that Query Store transitioned to read-only mode because the size quota was exceeded. In that case, the readonly_reason is set to 65536.

    Consider the following steps to switch Query Store to read-write mode and activate data collection:

    • Increase the maximum storage size by using the MAX_STORAGE_SIZE_MB option of ALTER DATABASE.
    • Clean up Query Store data by using the following statement:

    SQL

    ALTER DATABASE [QueryStoreDB] SET QUERY_STORE CLEAR;

    After you apply one or both of these steps, explicitly switch the operation mode back to read-write by executing the following statement:

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

    Take the following steps to be proactive:

    • You can prevent silent changes of the operation mode by applying best practices. Ensure that Query Store size stays below the maximum allowed value to dramatically reduce the chance of transitioning to read-only mode. Activate the size-based cleanup policy as described earlier in this article so that Query Store automatically cleans data when the size approaches the limit.
    • To make sure that most recent data is retained, configure time-based policy to remove stale information regularly.
    • Finally, consider setting Query Store Capture Mode to Auto because it filters out queries that are usually less relevant for your workload.

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
    GO
    SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
        max_storage_size_mb, readonly_reason, interval_length_minutes,
        stale_query_threshold_days, size_based_cleanup_mode_desc,
        query_capture_mode_desc
    FROM sys.database_query_store_options;

    If the problem persists, it indicates that the Query Store data persisted on disk is corrupted.

    Starting with SQL Server 2017 (14.x), Query Store can be recovered by executing the sp_query_store_consistency_check stored procedure within the affected database. Query Store must be disabled before you attempt the recovery operation. For SQL Server 2016 (13.x), you need to clear the data from Query Store as shown.
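    The exact recovery steps depend on your environment. The following is a minimal sketch of the sequence described above, with the database name as a placeholder: it turns Query Store off, runs the consistency check in the affected database, and then re-enables Query Store in read-write mode.

    SQL

    -- Disable Query Store before attempting recovery.
    ALTER DATABASE [QueryStoreDB] SET QUERY_STORE = OFF;
    GO
    USE [QueryStoreDB];
    GO
    -- Attempt to repair the on-disk Query Store data (SQL Server 2017 and later).
    EXEC sp_query_store_consistency_check;
    GO
    -- Re-enable Query Store and return it to read-write mode.
    ALTER DATABASE [QueryStoreDB] SET QUERY_STORE = ON;
    ALTER DATABASE [QueryStoreDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
    GO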

    If the recovery was unsuccessful, you can try clearing Query Store before you set the read-write mode.

    SQL

    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE CLEAR;
    GO
    ALTER DATABASE [QueryStoreDB]
    SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
    GO
    SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
        max_storage_size_mb, readonly_reason, interval_length_minutes,
        stale_query_threshold_days, size_based_cleanup_mode_desc,
        query_capture_mode_desc
    FROM sys.database_query_store_options;

    Set the optimal Query Store Capture Mode

    Keep the most relevant data in Query Store. The following table describes typical scenarios for each Query Store Capture Mode:

    Query Store Capture Mode | Scenario
    All | Analyze your workload thoroughly in terms of all query shapes, their execution frequencies, and other statistics. Identify new queries in your workload. Detect whether ad-hoc queries are used, to identify opportunities for user or auto parameterization. Note: This is the default capture mode in SQL Server 2016 (13.x) and SQL Server 2017 (14.x).
    Auto | Focus your attention on relevant and actionable queries, for example, queries that execute regularly or that have significant resource consumption. Note: Starting with SQL Server 2019 (15.x), this is the default capture mode.
    None | You've already captured the query set that you want to monitor at runtime, and you want to eliminate the distractions that other queries might introduce. None is suitable for testing and benchmarking environments. None is also appropriate for software vendors who ship a Query Store configuration to monitor their application workload. None should be used with caution because you might miss the opportunity to track and optimize important new queries. Avoid using None unless you have a specific scenario that requires it.
    Custom | SQL Server 2019 (15.x) introduces a Custom capture mode under the ALTER DATABASE SET QUERY_STORE command. When enabled, additional Query Store configurations are available under a new Query Store capture policy setting to fine-tune data collection on a specific server. The new custom settings define what happens during the internal capture policy time threshold. This is a time boundary during which the configurable conditions are evaluated and, if any are true, the query is eligible to be captured by Query Store.

    Keep the most relevant data in Query Store

    Configure Query Store to contain only the relevant data so that it runs continuously and provides a great troubleshooting experience with a minimal impact on your regular workload. The following table provides best practices:

    Best practice | Setting
    Limit retained historical data. | Configure the time-based policy to activate autocleanup.
    Filter out nonrelevant queries. | Configure Query Store Capture Mode to Auto.
    Delete less relevant queries when the maximum size is reached. | Activate the size-based cleanup policy.

    Avoid using non-parameterized queries

    Using non-parameterized queries when it isn't necessary, for example in the case of ad-hoc analysis, isn't a best practice. Cached plans can't be reused, which forces the Query Optimizer to compile every unique query text.

    Also, Query Store can rapidly exceed the size quota because of a potentially large number of different query texts and consequently a large number of different execution plans with similar shape. As a result, performance of your workload is suboptimal, and Query Store might switch to read-only mode or constantly delete data to try to keep up with the incoming queries.

    Consider the following options:

    • Parameterize queries where applicable. For example, wrap queries inside a stored procedure or sp_executesql.
    • Use the optimize for ad hoc workloads option if your workload contains many single-use ad-hoc batches with different query plans.
      • Compare the number of distinct query_hash values with the total number of entries in sys.query_store_query. If the ratio is close to 1, your ad-hoc workload generates different queries (see the sketch after this list).
    • Apply forced parameterization for the database or for a subset of queries if the number of different query plans isn’t large.
      • Use a plan guide to force parameterization only for the selected query.
      • Configure forced parameterization by using the PARAMETERIZATION FORCED database option if there's a small number of different query plans in your workload, for example when the ratio between the count of distinct query_hash values and the total number of entries in sys.query_store_query is much less than 1.
    • Set QUERY_CAPTURE_MODE to AUTO to automatically filter out ad-hoc queries with small resource consumption.
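    The following sketch illustrates these options. The database name is a placeholder, and the server-scoped option and forced parameterization should be evaluated against your whole workload before you enable them:

    SQL

    USE [QueryStoreDB];
    GO
    -- Ratio of distinct query_hash values to Query Store entries.
    -- A ratio close to 1 suggests an ad-hoc workload that produces many one-off query texts.
    SELECT CAST(COUNT(DISTINCT query_hash) AS float) / COUNT(*) AS query_hash_ratio
    FROM sys.query_store_query;
    GO
    -- Option 1: reduce plan cache pressure from single-use ad-hoc batches (server-scoped).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;
    GO
    -- Option 2: turn on forced parameterization for the whole database.
    ALTER DATABASE [QueryStoreDB] SET PARAMETERIZATION FORCED;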

    Avoid a DROP and CREATE pattern for containing objects

    Query Store associates a query entry with a containing object, such as a stored procedure, function, or trigger. When you re-create a containing object, a new query entry is generated for the same query text. This prevents you from tracking performance statistics for that query over time and from using the plan forcing mechanism. To avoid this situation, use ALTER <object> to change a containing object definition whenever possible.
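    For example, prefer ALTER (or CREATE OR ALTER, available starting with SQL Server 2016 SP1) over DROP and CREATE. The procedure and table names below are hypothetical:

    SQL

    -- CREATE OR ALTER changes the definition in place, so the object keeps its object_id
    -- and Query Store keeps associating history with the existing query entries.
    CREATE OR ALTER PROCEDURE dbo.usp_GetRecentOrders   -- hypothetical procedure
    AS
    BEGIN
        SELECT OrderID, OrderDate
        FROM dbo.Orders                                 -- hypothetical table
        WHERE OrderDate > DATEADD(day, -7, GETUTCDATE());
    END;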

    Check the status of forced plans regularly

    Plan forcing is a convenient mechanism to fix performance for critical queries and make them more predictable. As with plan hints and plan guides, forcing a plan isn't a guarantee that it will be used in future executions. Typically, when the database schema changes in a way that alters or drops objects referenced by the execution plan, plan forcing starts failing. In that case, SQL Server falls back to query recompilation, while the actual forcing failure reason is surfaced in sys.query_store_plan. The following query returns information about forced plans:

    SQL

    USE [QueryStoreDB];
    GO
    SELECT p.plan_id, p.query_id, q.object_id AS containing_object_id,
        force_failure_count, last_force_failure_reason_desc
    FROM sys.query_store_plan AS p
    JOIN sys.query_store_query AS q ON p.query_id = q.query_id
    WHERE is_forced_plan = 1;

    For a full list of reasons, see sys.query_store_plan. You can also use the query_store_plan_forcing_failed XEvent to track and troubleshoot plan forcing failures.
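    If you want to confirm that this Extended Event is available on your instance before you build an event session around it, you can look it up in the XEvent metadata; a minimal sketch:

    SQL

    -- Locate the query_store_plan_forcing_failed event and the package that exposes it.
    SELECT p.name AS package_name, o.name AS event_name, o.description
    FROM sys.dm_xe_objects AS o
    JOIN sys.dm_xe_packages AS p ON o.package_guid = p.guid
    WHERE o.object_type = 'event'
      AND o.name = 'query_store_plan_forcing_failed';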

    Avoid renaming databases for queries with forced plans

    Execution plans reference objects by using three-part names like database.schema.object.

    If you rename a database, plan forcing fails, which causes recompilation in all subsequent query executions.

    Using Query Store in mission-critical servers

    The global trace flags 7745 and 7752 can be used to improve availability of databases that use Query Store (see the example after this list).

    • Trace flag 7745 prevents the default behavior where Query Store writes data to disk before SQL Server can be shut down. This means that Query Store data that has been collected but not yet persisted to disk will be lost, up to the time window defined with DATA_FLUSH_INTERVAL_SECONDS.
    • Trace flag 7752 enables asynchronous load of Query Store. This allows a database to become online and queries to be executed before Query Store has been fully recovered. The default behavior is to do a synchronous load of Query Store. The default behavior prevents queries from executing before Query Store has been recovered but also prevents any queries from being missed in the data collection.
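    For reference, both trace flags are global. The following is a minimal sketch of enabling them at runtime; to persist them across restarts, add -T7745 and -T7752 as startup parameters instead:

    SQL

    -- Enable globally for the running instance.
    DBCC TRACEON (7745, -1);
    DBCC TRACEON (7752, -1);
    -- Confirm their status.
    DBCC TRACESTATUS (7745, 7752);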

    If you’re using Query Store for just-in-time workload insights in SQL Server 2016 (13.x), plan to install the performance scalability improvements in SQL Server 2016 (13.x) SP2 CU2 (KB 4340759) as soon as possible. Without these improvements, when the database is under heavy workloads, spinlock contention may occur and server performance may become slow. In particular, you may see heavy contention on the QUERY_STORE_ASYNC_PERSIST spinlock or SPL_QUERY_STORE_STATS_COOKIE_CACHE spinlock. After this improvement is applied, Query Store will no longer cause spinlock contention.

    If you’re using Query Store for just-in-time workload insights in SQL Server (SQL Server 2016 (13.x) through SQL Server 2017 (14.x)), plan to install the performance scalability improvement in SQL Server 2016 (13.x) SP2 CU15, SQL Server 2017 (14.x) CU22, and SQL Server 2019 (15.x) CU8 as soon as possible. Without this improvement, when the database is under heavy ad-hoc workloads, the Query Store may use a large amount of memory and server performance may become slow. After this improvement is applied, Query Store imposes internal limits to the amount of memory its various components can use, and can automatically change the operation mode to read-only until enough memory has been returned to the Database Engine. Note that Query Store internal memory limits are not documented because they are subject to change.

    Performance Dashboard built-in

    With the new SSMS 17.2, we are releasing the Performance Dashboard embedded as a built-in Standard Report. This means that it is available for any SQL Server instance starting with SQL Server 2008, without any extra downloads or running any extra scripts. Just connect to your server in SSMS and open the Performance Dashboard.

    How to start it?

    Bring up the Reports context menu at the server level, and select Standard Reports.

    What to use it for?

    The Performance Dashboard allows a DBA to quickly identify whether there is a current performance bottleneck on the connected SQL Server instance and, if a bottleneck is found, to easily capture the additional diagnostic data that may be necessary to resolve the problem. Some common performance problems that the Performance Dashboard can help solve include:

    • CPU bottlenecks (and what queries are consuming the most CPU)
    • I/O bottlenecks (and what queries are performing the most IO)
    • Index recommendations generated by the query optimizer (missing indexes)
    • Blocking
    • Resource contention (including latch contention)

    For example, the dashboard highlights when a CPU bottleneck is present.

    What has changed since the 2012 release?

    Besides being completely self-contained in SSMS, we have made a few changes to improve usability:

    • Under Historical Information section:
      • Added wait categorization to Waits report.
      • Filtered out all idle and sleep waits from Waits report.
      • Added new Latches report.
    • Under Miscellaneous Information, added a score column to the missing indexes report. This report shows potential indexes that the query optimizer identified during query compilation. However, these recommendations should not be taken at face value.
      • We recommend that only indexes with a score greater than 100,000 be evaluated for creation, as those have the highest anticipated improvement for user queries (a comparable score can be approximated from the missing-index DMVs, as sketched after this list).
      • Also, always evaluate if a new index suggestion is comparable to an existing index in the same table, where the same practical results can be achieved simply by changing an existing index instead of creating a new index. For example, given a new suggested index on columns C1, C2 and C3, first evaluate if there is an existing index over columns C1 and C2. If so, then it may be preferable to simply add column C3 to the existing index (preserving the order of pre-existing columns) to avoid creating a new index.
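    The report's exact scoring formula isn't documented here, but a comparable heuristic can be computed from the missing-index DMVs. The following sketch is an approximation based on a common cost-times-impact-times-usage heuristic, not the report's own query, and its results may differ from the score column in the dashboard:

    SQL

    -- Approximate an improvement score for missing index suggestions across the instance.
    SELECT TOP (25)
        DB_NAME(mid.database_id) AS database_name,
        mid.statement AS table_name,
        migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
            * (migs.user_seeks + migs.user_scans) AS approximate_score,
        mid.equality_columns, mid.inequality_columns, mid.included_columns
    FROM sys.dm_db_missing_index_group_stats AS migs
    JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
    JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
    ORDER BY approximate_score DESC;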

    What permissions are needed to run the report?

    The report is primarily aimed at members of the sysadmin role. Users in other roles need VIEW SERVER STATE and ALTER TRACE permissions to run it.