SoFunction
Updated on 2025-04-08

Sharing a SQLite performance optimization case

SQLite was the first caching database I encountered when I started iOS development, and it has been my workhorse ever since. I had never needed to read and write large amounts of data before, so I paid little attention to performance optimization. Recently I optimized batch reads and writes of a larger data set in a specific scenario, and performance improved roughly tenfold.

The general application scenario is as follows:

Each time the program starts, it pulls some data from the server and updates two tables in the local database: records that do not exist yet are inserted, and existing ones are updated. On a light run there are a few dozen records; at most there are a few thousand.

Since the cached data may be read and written asynchronously at the same time, all cached-database operations go through a background serial queue. I then timed the key code that writes to the database: updating one thousand records took about 30 seconds, with disk writes hovering around 1.5 MB/s. Even though none of this blocks the main thread, that cost is unacceptable even in the background.

The core database operation looks roughly like this:

for 1000 records {
    Select -> Update or Insert   (table 1)
    Select -> Update or Insert   (table 2)
}
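The original code is Objective-C with FMDB; as a runnable illustration of the same per-row pattern, here is a sketch using Python's built-in sqlite3 (the cache table and its columns are invented for the example):

```python
import sqlite3

def upsert_naive(conn, rows):
    # One SELECT followed by an UPDATE or INSERT per row.
    # In autocommit mode every write statement is its own implicit
    # transaction, so each one forces a separate sync to disk.
    cur = conn.cursor()
    for rid, value in rows:
        cur.execute("SELECT 1 FROM cache WHERE id = ?", (rid,))
        if cur.fetchone():
            cur.execute("UPDATE cache SET value = ? WHERE id = ?", (value, rid))
        else:
            cur.execute("INSERT INTO cache (id, value) VALUES (?, ?)", (rid, value))

# isolation_level=None puts sqlite3 in autocommit mode, matching the
# one-commit-per-statement behavior described above.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (id INTEGER PRIMARY KEY, value TEXT)")
upsert_naive(conn, [(1, "a"), (2, "b")])
upsert_naive(conn, [(1, "a2"), (3, "c")])  # id 1 updated, id 3 inserted
```

The article's real code does this twice per record (once per table), but one table is enough to show the cost pattern.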

Since two tables are involved, each record costs two round trips. Testing showed that the Select is almost free, but the Update or Insert ( [FMDatabaseQueue executeUpdate:] ) is expensive because each statement is written to disk. My first thought was to splice all the SQL statements together and execute them in one go. Then I remembered that SQLite has transactions, so I tried FMDB's transaction API: call [db beginTransaction] before the loop, [db commit] after it, and simply wrap the whole thing.

The approximate logic after adding the transaction:

beginTransaction
for 1000 records {
    Select -> Update or Insert   (table 1)
    Select -> Update or Insert   (table 2)
}
commit
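The two added lines correspond to FMDB's [db beginTransaction] and [db commit]. Sketched again with Python's sqlite3 (same invented cache table), the only change is an explicit BEGIN/COMMIT around the loop, so SQLite syncs to disk once at commit instead of once per statement:

```python
import sqlite3

def upsert_in_transaction(conn, rows):
    # Same per-row Select -> Update or Insert logic, but wrapped in a
    # single explicit transaction: all 1000 writes share one disk sync.
    cur = conn.cursor()
    cur.execute("BEGIN")    # ~ FMDB's [db beginTransaction]
    for rid, value in rows:
        cur.execute("SELECT 1 FROM cache WHERE id = ?", (rid,))
        if cur.fetchone():
            cur.execute("UPDATE cache SET value = ? WHERE id = ?", (value, rid))
        else:
            cur.execute("INSERT INTO cache (id, value) VALUES (?, ?)", (rid, value))
    cur.execute("COMMIT")   # ~ FMDB's [db commit]

# Autocommit mode so the explicit BEGIN/COMMIT above control the transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (id INTEGER PRIMARY KEY, value TEXT)")
upsert_in_transaction(conn, [(i, str(i)) for i in range(1000)])
```

The speedup comes from SQLite's durability guarantee: outside a transaction, every statement must be individually flushed to disk before it returns, and batching them defers that flush to the single COMMIT.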

The result was excellent: the total time dropped from 30 seconds to about 2.8 seconds, with only two lines of code added.

Summary:

Every pitfall you step on becomes experience for the future.

Although wrapping everything in one transaction improves performance, it is not entirely safe: if the app dies mid-batch, the whole transaction rolls back and none of the updates persist. Fortunately, the consistency requirements for this data are not strict.
Sometimes a problem reproduces on a real device but not in the simulator, because the architecture, CPU, and disk differ, so performance testing is best done on real hardware. This problem never showed up in the simulator: its disk reads and writes are much faster than a real device's, which masked the issue during testing.
When designing a database, think ahead about how it will be extended, how the schema will be upgraded, and how reads and writes will perform.