
A complete guide to slow SQL optimization methods in MySQL

1. The fatal impact of slow SQL

When database response time exceeds 500 ms, the system faces three major chain reactions:

1. User experience collapses

  • Page load timeout rate rises by 37%
  • User bounce rate increases by 52%
  • Core business conversion rate drops by 29%

2. System stability crisis

  • Connection pool exhaustion risk increases 4.8 times
  • Master-replica replication lag exceeds the 10-second threshold
  • Disk I/O utilization stays above 90% for extended periods

3. Operation and maintenance costs soar

  • DBA incident handling time increases by 65%
  • Hardware expansion frequency triples
  • Overnight alert volume surges by 80%

A real case captured by a monitoring system: during a big promotion, an e-commerce platform lost 23 orders per second to an unoptimized GROUP BY statement, a direct economic loss of more than 500,000 yuan per hour.

2. Pinpointing problem SQL

1. Enable slow query log

-- Enable dynamically (settings are lost after a restart)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- unit: seconds
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- Permanent configuration (in the MySQL config file, e.g. my.cnf)
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/
long_query_time = 1
log_queries_not_using_indexes = 1
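
Once enabled, the settings can be verified from any session; a quick sanity check might look like this:

-- Confirm slow query logging is on and see which file it writes to
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';

-- Running total of statements that exceeded long_query_time since startup
SHOW GLOBAL STATUS LIKE 'Slow_queries';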

2. The three golden diagnostic tools

Reading the EXPLAIN execution plan:

EXPLAIN SELECT o.order_id
FROM orders o
JOIN customers c ON o.cust_id = c.cust_id
WHERE o.status = 'PAID'
  AND o.create_time > '2023-01-01';

-- Key indicators to read
/*
+----+-------------+-------+--------+---------------+------------+---------+-----------+--------+-------------+
| id | select_type | table | type   | possible_keys | key        | key_len | ref       | rows   | Extra       |
+----+-------------+-------+--------+---------------+------------+---------+-----------+--------+-------------+
| 1  | SIMPLE      | o     | ref    | idx_status    | idx_status | 82      | const     | 156892 | Using where |
| 1  | SIMPLE      | c     | eq_ref | PRIMARY       | PRIMARY    | 4       | o.cust_id | 1      | NULL        |
+----+-------------+-------+--------+---------------+------------+---------+-----------+--------+-------------+
type  : access method (eq_ref/ref are good; ALL means a full table scan)
rows  : estimated rows examined; 156892 here shows idx_status alone is not selective enough
Extra : "Using where" means the create_time filter is applied after the index lookup
*/

In-depth analysis with SHOW PROFILE (deprecated in recent MySQL versions in favor of the Performance Schema, but still available):

SET profiling = 1;
-- Execute the target SQL
SELECT /*+ Test SQL */ ...;
SHOW PROFILES;
SHOW PROFILE CPU, BLOCK IO FOR QUERY 7;

/* Typical problem output
+----------------------+----------+----------+-----------+
| Status               | Duration | CPU_user | Block_ops |
+----------------------+----------+----------+-----------+
| starting             | 0.000065 | 0.000000 |         0 |
| checking permissions | 0.000007 | 0.000000 |         0 |
| Opening tables       | 0.000023 | 0.000000 |         0 |
| Sorting result       | 2.134567 | 1.982342 |      1245 | <-- sorting dominates the runtime
| Sending data         | 0.000045 | 0.000000 |         0 |
+----------------------+----------+----------+-----------+
*/

Performance Schema Monitoring:

-- The most resource-intensive SQL statements
SELECT DIGEST_TEXT,
       SUM_TIMER_WAIT/1e12 AS total_sec,
       SUM_ROWS_EXAMINED
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST_TEXT LIKE 'SELECT%'
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;
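
A companion query against the same digest table highlights statements with the worst examined-rows-to-sent-rows ratio, which usually points at missing indexes (a sketch; thresholds are workload-dependent):

SELECT DIGEST_TEXT,
       COUNT_STAR AS exec_count,
       SUM_ROWS_EXAMINED / GREATEST(SUM_ROWS_SENT, 1) AS scan_ratio
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_ROWS_SENT > 0
ORDER BY scan_ratio DESC
LIMIT 5;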

3. Six core optimization solutions

Solution 1: Index Optimization Strategy

Creation principles:

  • Composite indexes should cover WHERE, then ORDER BY, then GROUP BY columns, in that order
  • Index long VARCHAR columns with a prefix: INDEX (name(20))
  • Use covering indexes to avoid back-to-table lookups (see the sketch after this list)
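
A minimal sketch of these principles on the tables used throughout this article (the exact index and column choices are illustrative):

-- Composite index matching  WHERE status = ? AND create_time > ?  plus ORDER BY create_time
ALTER TABLE orders ADD INDEX idx_status_create_time (status, create_time);

-- Prefix index on a long VARCHAR column
ALTER TABLE customers ADD INDEX idx_name_prefix (name(20));

-- Covering index: the query  SELECT cust_id FROM orders WHERE status = 'PAID'
-- can be answered from the index alone, with no table lookup
ALTER TABLE orders ADD INDEX idx_status_cust (status, cust_id);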

Seven scenarios where indexes fail:

-- 1. Implicit type conversion
SELECT * FROM users WHERE phone = 13800138000; -- phone is a VARCHAR column

-- 2. Index columns participate in operation
SELECT * FROM logs WHERE YEAR(create_time) = 2023;

-- 3. Front Wildcard Query
SELECT * FROM products WHERE name LIKE '%Pro%';

-- 4. OR mixing an indexed and a non-indexed column
SELECT * FROM orders WHERE status = 'PAID' OR amount > 1000;

-- 5. Violate the principle of leftmost prefix
INDEX idx_a_b_c (a,b,c)
WHERE b=1 AND c=2  -- Cannot use the index (leading column a is missing)

-- 6. Use negative conditions
SELECT * FROM users WHERE status != 'ACTIVE';

-- 7. Index column usage function
SELECT * FROM orders WHERE UPPER(order_no) = 'ABC123';
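
A common cure for cases 1, 2 and 7 is to keep the indexed column untouched and move the conversion or computation to the other side of the comparison (a sketch on the same tables):

-- Case 1 fixed: pass the phone number as a string, so no implicit cast is applied to the column
SELECT * FROM users WHERE phone = '13800138000';

-- Case 2 fixed: compare the raw column against a date range instead of wrapping it in YEAR()
SELECT * FROM logs
WHERE create_time >= '2023-01-01'
  AND create_time <  '2024-01-01';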

Solution 2: SQL statement rewriting techniques

Pagination query optimization:

-- Original query (scans 100,100 rows)
SELECT * FROM orders 
ORDER BY id 
LIMIT 100000, 100;

-- Optimized query (seeks past the first 100,000 ids and scans only about 100 rows)
SELECT * FROM orders 
WHERE id > 100000 
ORDER BY id 
LIMIT 100;
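
When the offset cannot be turned into a WHERE id > ... seek (for example, jumping straight to an arbitrary page), a deferred-join rewrite recovers much of the benefit by paging over the primary key first (a sketch on the same table):

-- The inner query walks only the id index; full rows are fetched for just the 100 matching ids
SELECT o.*
FROM orders o
JOIN (
    SELECT id FROM orders ORDER BY id LIMIT 100000, 100
) t ON o.id = t.id;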

Join query optimization:

-- Inefficient nested query
SELECT * FROM users 
WHERE id IN (
    SELECT user_id FROM orders 
    WHERE amount > 1000
);
-- Rewritten as a JOIN (DISTINCT guards against duplicates when a user has several matching orders)
SELECT DISTINCT u.* 
FROM users u
JOIN orders o ON u.id = o.user_id 
WHERE o.amount > 1000;

Solution 3: Execution plan intervention

Forced index use:

SELECT * FROM orders 
FORCE INDEX(idx_status_create_time) 
WHERE status = 'SHIPPED' 
  AND create_time > '2023-06-01';

Optimizer hints:

SELECT /*+ MAX_EXECUTION_TIME(1000) */ ...
FROM large_table
WHERE ...;

-- Enable Multi-Range Read for the sales table
-- (the MRR buffer is sized via the read_rnd_buffer_size variable, not inside the hint)
SELECT /*+ MRR(sales) */ ...
FROM sales 
WHERE sale_date BETWEEN ...;

4. Advanced tuning methods

1. Parameter-level optimization

# InnoDB configuration
innodb_buffer_pool_size = <70-80% of physical memory>
innodb_flush_log_at_trx_commit = 2   # non-critical workloads only (may lose ~1s of commits on a crash)
innodb_io_capacity = 2000            # for SSD storage

# Query cache (the query cache was removed in MySQL 8.0)
query_cache_type = 0
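
To judge whether innodb_buffer_pool_size is large enough, the buffer pool hit ratio can be estimated from status counters (a rough rule of thumb, not a hard target):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Hit ratio = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
-- Values persistently below ~99% on a steady workload suggest the pool is undersized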

2. Architecture-level optimization

Read/write splitting architecture:

Application layer -> Middleware -> Primary (writes)
                                -> Replica 1 (reads)
                                -> Replica 2 (reads)

Sharding (split-database / split-table) strategy:

  • Horizontal split: tables such as orders_2023q1, split by time range (see the sketch after this list)
  • Vertical split: separate user_basic from user_extra
  • Hash sharding: route to a database shard by user ID modulo or consistent hashing
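
For the time-range horizontal split above, the per-period tables typically share one schema and the application or middleware routes each query by date; the routing layer itself is assumed here and not shown:

-- Quarterly shard tables sharing the schema of the original table
CREATE TABLE orders_2023q1 LIKE orders;
CREATE TABLE orders_2023q2 LIKE orders;

-- Cross-shard reads can be stitched together with UNION ALL
SELECT order_id, amount FROM orders_2023q1 WHERE cust_id = 42
UNION ALL
SELECT order_id, amount FROM orders_2023q2 WHERE cust_id = 42;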

5. Classic practical cases

Case 1: Optimizing a query over billions of rows

Raw SQL:

SELECT COUNT(*) 
FROM user_behavior 
WHERE create_time BETWEEN '2023-01-01' AND '2023-06-30';
-- Execution time: 12.8 seconds

-- Optimization steps:
1. Create a functional index (requires MySQL 8.0.13+): ALTER TABLE user_behavior ADD INDEX idx_ymd ((DATE_FORMAT(create_time,'%Y%m%d')));
2. Count in per-month batches, then sum the results:
   SELECT SUM(cnt) FROM (
     SELECT COUNT(*) cnt FROM user_behavior_202301
     UNION ALL
     SELECT COUNT(*) FROM user_behavior_202302
     ...
   ) tmp;
-- Optimized time: 0.9 seconds

Case 2: Complex aggregation query optimization

Original statement:

SELECT product_id, 
       AVG(rating),
       COUNT(DISTINCT user_id)
FROM reviews
GROUP BY product_id
HAVING COUNT(*) > 100;
-- Execution time: 7.2 seconds

-- Optimization solution:
1. Create a summary table:
   CREATE TABLE product_stats (
     product_id INT PRIMARY KEY,
     total_reviews INT,
     avg_rating DECIMAL(3,2),
     unique_users INT
   );
2. Keep it current in real time with triggers (a sketch follows below)
-- Query time dropped to 0.03 seconds
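
A minimal trigger sketch for keeping product_stats current on new reviews; it covers the insert-only case, and maintaining unique_users or handling updates and deletes would need extra logic, so treat it as illustrative rather than complete:

DELIMITER //
CREATE TRIGGER trg_reviews_ai
AFTER INSERT ON reviews
FOR EACH ROW
BEGIN
    -- Incrementally fold the new rating into the summary row for this product
    INSERT INTO product_stats (product_id, total_reviews, avg_rating, unique_users)
    VALUES (NEW.product_id, 1, NEW.rating, 1)
    ON DUPLICATE KEY UPDATE
        avg_rating    = (avg_rating * total_reviews + NEW.rating) / (total_reviews + 1),
        total_reviews = total_reviews + 1;
END//
DELIMITER ;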

6. Avoiding performance traps

1. Overuse of indexes

  • Keep single-table indexes to five or fewer
  • Limit composite indexes to three columns or fewer
  • Index frequently updated columns with caution (see the audit queries below)
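
Whether a schema has drifted past these limits can be audited with the sys schema (MySQL 5.7+); both queries below are read-only, and 'your_db' is a placeholder for the schema name:

-- Indexes that have not been used since the server started
SELECT * FROM sys.schema_unused_indexes
WHERE object_schema = 'your_db';

-- Indexes made redundant by another index
SELECT * FROM sys.schema_redundant_indexes
WHERE table_schema = 'your_db';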

2. Implicit conversion risk

-- The imei column is VARCHAR(32)
SELECT * FROM devices WHERE imei = 123456789012345; -- Full table scan (implicit cast on the column)
SELECT * FROM devices WHERE imei = '123456789012345'; -- Uses the index

3. Transaction misuse

-- Anti-pattern: one long transaction mixing a slow read with a write
BEGIN;
SELECT * FROM products; -- Time-consuming query
UPDATE inventory SET ...; 
COMMIT;

-- Optimized to:
START TRANSACTION READ ONLY;
SELECT * FROM products;
COMMIT;

BEGIN;
UPDATE inventory SET ...;
COMMIT;
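
Long-running transactions like the anti-pattern above can be spotted from information_schema; this read-only check lists transactions open for more than 60 seconds (the threshold is arbitrary):

SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 60 SECOND;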

7. Future optimization trends

  • AI-assisted optimization: an index recommendation system based on machine learning
  • Adaptive query optimization: Histogram statistics for MySQL 8.0
  • Cloud native optimization: intelligent parameter adjustment of cloud databases such as Aurora
  • Hardware-level acceleration: Application of PMEM persistent memory

Through systematic optimization along these lines, one financial system reduced its average query time from 870 ms to 68 ms and raised TPS from 1,200 to 9,500. Remember: SQL optimization is not a one-off job but a process of continuous monitoring and iterative improvement. When you hit a performance bottleneck, follow the closed loop of Positioning → Analysis → Verification → Implementation to keep your database at its best.
