Oracle Performance Tuning Tips
Oracle Performance Tuning is the process of administering a database to improve performance. Performance tuning in Oracle databases includes optimizing SQL statements and query execution plans so that the requests can be completed more efficiently.
The organization of a database and the SQL statements used to access its data determine the level of resources needed to respond to queries when an application communicates with the database.
Problems like poorly optimized SQL statements force the database to work much harder to retrieve information, consuming more system resources. The more system resources that are used, the greater the chance of degrading the experience of users on connected applications.
In an enterprise, users will report a slow application to a database administrator who will then attempt to pinpoint the root cause of the problem. The administrator analyzes statement code and searches for database bottlenecks. The process is extensive, as the administrator has to diagnose the root cause of the problem before it can be addressed.
Monitoring Performance in Oracle Databases: Response Time and Throughput
When performance tuning an Oracle database there are two metrics that are useful to measure:
- Response time – How long the database takes to complete a request.
- System throughput – The number of processes completed in a period of time.
High response time means that an application is providing a slow user experience. On the other hand, low system throughput means that the database only has the resources to manage a small number of tasks in a short time period. An administrator needs to know which of these metrics they are trying to improve before tuning.
How you optimize an Oracle database comes down to your goals and the type of applications you are using. Some goals, such as fast response time and high throughput, can conflict with each other.
Tuning for fast response times may speed up individual queries made by users but sacrifice other tasks in the workload. In contrast, achieving a high throughput would aim to optimize the performance of the entire workload to support a larger output of transactions per second (but not necessarily speed up individual queries).
The type of application you’re using makes all the difference. If you’re using an online transaction processing (OLTP) application, then you would use throughput to measure performance. This is because of the high volume of transactions the application needs to manage.
However, if you were using a decision support system (DSS) with users running queries on everything from a handful of records to thousands of records, then you would measure performance by response time (unless you were supporting lots of users running concurrent queries)!
The Two Types of Tuning: Proactive Monitoring and Bottleneck Elimination
Now that you know what performance tuning is, it’s important to look at the two main models of tuning:
- Proactive Monitoring
- Bottleneck Elimination
Database administrators use these two models to manage performance issues and keep applications functioning at a high level.
Proactive monitoring is the process of monitoring a database to discover and address performance issues early rather than simply reacting when there is a problem. With proactive monitoring, administrators will periodically review databases to identify the signs of performance degradation.
The idea behind proactive monitoring is to catch issues and inefficiencies before they develop into greater problems further down the line. Some common issues database administrators look out for include:
- Database wait events – A high number of events can negatively affect database performance. Finding obstructive sessions and killing them can prevent performance degradation.
- Load average – Monitoring the load average of a server will tell you if server resources are functioning as normal. A high load average can result in slow database performance.
- Database sessions – Monitoring the number of active sessions can stop you from reaching the maximum (which will prevent you from being able to open new sessions).
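As a sketch of what proactive monitoring can look like in practice, queries like the following (assuming the monitoring user has SELECT privileges on the V$ views) surface session counts and active wait events:

```sql
-- Compare the current session count against the configured maximum
-- (the SESSIONS initialization parameter). V$SESSION and V$PARAMETER
-- are standard Oracle dynamic performance views.
SELECT
  (SELECT COUNT(*) FROM v$session)                        AS current_sessions,
  (SELECT value FROM v$parameter WHERE name = 'sessions') AS max_sessions
FROM dual;

-- Which wait events are sessions currently waiting on?
SELECT wait_class, event, COUNT(*) AS waiting_sessions
FROM   v$session
WHERE  state = 'WAITING'
AND    wait_class <> 'Idle'
GROUP  BY wait_class, event
ORDER  BY waiting_sessions DESC;
```

Run periodically, queries like these establish what "normal" looks like for your system, so deviations stand out early.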
However, monitoring proactively does carry some risk: any change an administrator makes can itself decrease database performance. Administrators can mitigate this risk by testing changes cautiously before applying them.
Bottlenecks are one of the most common causes of poor performance. Bottlenecks block requests from reaching their destination and increase the response time of applications. They can be caused by a range of factors, from badly coded SQL statements to high resource usage.
Bottleneck elimination is more of a reactive process than proactive monitoring. An administrator identifies a bottleneck and then finds a way to fix it. Fixing a bottleneck is a complex process and depends on the root cause (and whether it is internal or external). Internal bottlenecks should be addressed first; recoding SQL statements is one common fix.
Once internal bottlenecks have been resolved the administrator can start to look at external factors like CPU and storage performance that could be causing the problem. An administrator can choose between making changes to the application (or how it is used), Oracle, or the hardware configuration of the host.
How to Performance Tune
Performance tuning an Oracle database is a very complex subject because there are so many different factors that can affect database performance. To keep things simple, we’re going to look at some basic ways you can optimize performance.
1. Identify High-Cost Queries
The first step to tuning SQL code is to identify high-cost queries that consume excessive resources. Rather than optimizing every line of code, it is more efficient to focus on the most frequently executed SQL statements and those with the largest database and I/O footprint.
One useful tool is Oracle SQL Analyze, which can identify resource-intensive SQL statements. Tuning these statements will give you the greatest return on your time investment.
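As a lighter-weight sketch alongside such tools, the shared SQL area can be queried directly. The following (assuming SELECT privileges on V$SQL; the row limit is illustrative) ranks cached statements by total logical I/O:

```sql
-- Rank cached SQL statements by buffer gets (logical I/O).
-- V$SQL is a standard dynamic performance view.
SELECT sql_id,
       executions,
       buffer_gets,
       disk_reads,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec,
       SUBSTR(sql_text, 1, 60)                    AS sql_snippet
FROM   v$sql
WHERE  executions > 0
ORDER  BY buffer_gets DESC
FETCH FIRST 10 ROWS ONLY;
```

The statements at the top of this list are usually where tuning effort pays off most.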
2. Minimize the Workload (Use Indexes!)
You can write the same query in many different ways, so it is advantageous to write code that minimizes the workload as much as possible. If you only need a snapshot of data from a table, it makes no sense to process thousands of rows you don’t need (all you’re doing is wasting system resources!). A full table scan consumes more database resources and I/O.
To eliminate the stress of sustaining a large workload you can use indexes to access small sets of rows rather than processing the entire database at once. Use indexes in those scenarios where a column is regularly queried.
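As a minimal sketch (the table and column names are hypothetical), an index on a regularly queried column lets the optimizer read a small set of rows instead of scanning the whole table:

```sql
-- Hypothetical table: orders, frequently filtered by customer_id.
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- With the index in place, this lookup can use an index range scan
-- rather than a full table scan.
SELECT order_id, order_date, total_amount
FROM   orders
WHERE  customer_id = :cust_id;
```

The payoff depends on selectivity: an index helps most when the predicate returns a small fraction of the table's rows.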
3. Use Stateful Connections with Applications
Sometimes the cause of poor performance doesn’t come from code but from the connection repeatedly dropping between the application and the database. If your application isn’t configured correctly, it could open a connection to the database to access a table and then drop the connection once it has the information it needs.
Dropping the connection after accessing the table is terrible for performance. Instead, try to keep a stateful connection so that the application stays connected to the database at all times. Maintaining the connection will stop system resources from being wasted each time the application interacts with the database.
4. Collect and Store Optimizer Statistics
Optimizer statistics are data that describe a database and its objects. These statistics are used by the database to choose the best execution plan for SQL statements. Regularly collecting and storing optimizer statistics on database objects is essential for maintaining efficiency.
Collecting optimizer statistics makes sure that the database has accurate information on table contents. If the data is inaccurate then the database can choose a poor execution plan, which will affect the end-user experience. Oracle databases can automatically collect optimizer statistics or you can do so manually with the DBMS_STATS package.
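As a sketch (the schema and table names are hypothetical), statistics for a single table can be gathered manually with the DBMS_STATS package and then verified against the data dictionary:

```sql
-- Gather fresh optimizer statistics for one table, including its
-- indexes (cascade => TRUE). Schema/table names are illustrative.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',
    tabname => 'ORDERS',
    cascade => TRUE
  );
END;
/

-- Verify when statistics were last gathered.
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'APP_OWNER'
AND    table_name = 'ORDERS';
```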
A 5-Step Methodology for Oracle Performance Tuning
1 – Monitor Wait Times
Oracle provides wait events that allow you to understand the total time a query spends in the database. Start your tuning efforts with the queries taking the most time. You can use the V$ views to see real-time performance and session information. Although this information does not persist, Janis demonstrated a query that polls the V$ views and builds a table of data on sessions, wait events, blocking session IDs, and so on. If you automate this query and poll every second, you can begin to see trends over time and focus on the queries that take the longest. If you are an Oracle Enterprise Edition user and own the Diagnostics and Tuning Packs, you can access the view V$ACTIVE_SESSION_HISTORY, which holds roughly an hour of session data.
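A minimal sketch of such a polling query (assuming SELECT privileges on V$SESSION) captures each active session with its current wait event and any blocking session:

```sql
-- Snapshot active, non-idle sessions with their current wait event
-- and blocker. Run repeatedly (e.g. every second via a scheduler job)
-- and insert the results into a history table to see trends over time.
SELECT sid,
       serial#,
       username,
       sql_id,
       event,
       wait_class,
       blocking_session,
       seconds_in_wait
FROM   v$session
WHERE  status = 'ACTIVE'
AND    wait_class <> 'Idle';
```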
Record baseline metrics – after all, you need a basis of comparison for performance improvement. Look at buffer gets to determine whether your tuning is making progress. Tuning the query that requires the most buffer gets will often have a positive trickle-down effect on other queries.
2 – Review the Execution Plan
Oracle provides many ways to get execution plans. An explain plan is an estimate of what the query will do and can be wrong for many reasons. A cached plan will show you an actual plan that the Optimizer used. Knowing how to read these plans gives you insight into what the Optimizer is doing. Because the Optimizer continues to evolve with Oracle versions, you should become familiar with how it treats execution plans with each new release.
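As a sketch (the query itself is hypothetical), an estimated plan and the actual cached plan can be compared using DBMS_XPLAN:

```sql
-- Estimated plan: what the Optimizer *expects* to do. Can be wrong
-- if statistics are stale or bind values change the picture.
EXPLAIN PLAN FOR
  SELECT o.order_id, c.customer_name
  FROM   orders o
  JOIN   customers c ON c.customer_id = o.customer_id
  WHERE  o.order_date >= DATE '2024-01-01';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual cached plan for a statement already executed, identified
-- by its SQL_ID (the substitution variable is a placeholder).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));
```

Comparing the two is often where tuning insight comes from: a large gap between the estimated and actual plan usually points to a statistics or selectivity problem.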
3 – Gather Object Information
As you drill into the poorly-performing query, take the time to look at the expensive objects. Examine table definitions and segment sizes. Look at statistics gathering – are the statistics current, and if they’re not, are they hurting the choices the Optimizer is making?
Take the time to review index definitions, existing keys and constraints. Be sure you understand the order of columns and column selectivity in the index. Also make sure the Optimizer can use the index.
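The data dictionary makes this review straightforward. As a sketch (the table name is hypothetical), the following lists each index on a table with its columns in position order:

```sql
-- List indexes on a table with their columns in defined order.
-- USER_INDEXES and USER_IND_COLUMNS are standard dictionary views.
SELECT i.index_name,
       i.uniqueness,
       c.column_name,
       c.column_position
FROM   user_indexes     i
JOIN   user_ind_columns c ON c.index_name = i.index_name
WHERE  i.table_name = 'ORDERS'
ORDER  BY i.index_name, c.column_position;
```

Column position matters because a composite index generally serves predicates on its leading column(s) best.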
4 – Find the Driving Table
You always want to know the size of the actual data sets for each step in the execution plan because your goal is to drive the query with the table that returns the least data. That reduces the number of logical reads. In short, you study Joins and Filtering predicates to filter the data earlier in the query rather than later.
Visualize the amount of data in the tables and which filter will return the least amount.
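One way to see the actual data set size at each plan step is the GATHER_PLAN_STATISTICS hint (the query shown is hypothetical), which records runtime row counts that DBMS_XPLAN can display next to the Optimizer's estimates:

```sql
-- Run the query with runtime statistics collection enabled.
SELECT /*+ GATHER_PLAN_STATISTICS */ o.order_id, c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= DATE '2024-01-01';

-- 'ALLSTATS LAST' shows E-Rows (estimated) vs A-Rows (actual) per
-- plan step, revealing which table really drives the query and
-- where filtering should happen earlier.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```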
5 – Identify Performance Inhibitors
This is a useful list of unintentional performance inhibitors that should be removed from queries or avoided altogether:
- Cursor or row-by-row processing
- Parallel processing
- Nested views that use db_links
- Abuse of wildcards
- Using functions on indexed columns
- Hard-coded hints
- Complex expressions
- Joining complex views
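To illustrate one inhibitor from the list, wrapping an indexed column in a function (the table, column, and index names here are hypothetical) prevents an ordinary index on that column from being used, while rewriting the predicate – or creating a function-based index – restores index access:

```sql
-- Assume an ordinary index exists on orders.order_date.

-- BAD: applying a function to the indexed column disables the index,
-- forcing a full table scan.
SELECT COUNT(*) FROM orders
WHERE  TRUNC(order_date) = DATE '2024-06-01';

-- BETTER: rewrite the predicate as a range so the index can be used.
SELECT COUNT(*) FROM orders
WHERE  order_date >= DATE '2024-06-01'
AND    order_date <  DATE '2024-06-02';

-- ALTERNATIVE: a function-based index supports the original predicate.
CREATE INDEX orders_trunc_date_idx ON orders (TRUNC(order_date));
```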
And here are some actions you should take instead:
- Use bind variables instead of literal variables
- Use an index if less than 5% of data needs to be accessed
- Use equi-joins whenever possible
- Always use a WHERE clause
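The first action above can be sketched as follows (table and column names hypothetical; the VARIABLE/EXEC commands are SQL*Plus syntax): a bind variable lets Oracle reuse one cached plan instead of hard-parsing a new statement for every literal value:

```sql
-- BAD: a literal value – each distinct customer_id produces a
-- separate statement in the shared pool and a fresh hard parse.
SELECT order_id FROM orders WHERE customer_id = 1042;

-- BETTER: a bind variable – one shared cursor reused for all values.
VARIABLE cust_id NUMBER
EXEC :cust_id := 1042;

SELECT order_id FROM orders WHERE customer_id = :cust_id;
```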