SoFunction
Updated on 2025-04-14

C# Code examples for efficient processing of large data using SQLite

Preface

In software development, efficiently processing large volumes of data is a common and challenging task. SQLite has become the database of choice for many developers thanks to its zero-configuration, embedded, and cross-platform design.

As a lightweight, serverless relational database, SQLite provides powerful data processing capabilities in C#.

This article will explore in-depth how to use SQLite to optimize the storage and retrieval of large data volumes.

Preparation

First install the core package through NuGet (the SQLiteConnection and SQLiteCommand types used below come from System.Data.SQLite), then import the namespace:

// SQLite database operations
using System.Data.SQLite;

Data Entity

Define a concise log entity class:

public class DeviceLog
{
    public int Id { get; set; }
    public string DeviceId { get; set; }
    public DateTime LogTime { get; set; }
    public double Value { get; set; }
    // Index commonly queried fields
    [Indexed]
    public string LogType { get; set; }
}

Core Technology

Batch insertion: from turtle to cheetah

Error demonstration

Inserting 100,000 rows one by one takes roughly 2 minutes:

// Snail-paced: one statement (and one implicit transaction) per row,
// plus string concatenation that invites SQL injection
foreach (var log in logs)
{
    command.CommandText = $"INSERT INTO Logs (DeviceId, Value) VALUES ('{log.DeviceId}', {log.Value})";
    command.ExecuteNonQuery();
}

Performance leap solution: Transaction + Parameterized batch insertion

// Lightning insertion (100,000 rows in ~3 seconds)
using (var transaction = connection.BeginTransaction())
{
    var command = new SQLiteCommand(
        "INSERT INTO Logs (DeviceId, Value) VALUES (@devId, @val)", 
        connection, 
        transaction);
        
    var param1 = new SQLiteParameter("@devId");
    var param2 = new SQLiteParameter("@val");
    command.Parameters.Add(param1);
    command.Parameters.Add(param2);

    foreach (var log in logs)
    {
        param1.Value = log.DeviceId;
        param2.Value = log.Value;
        command.ExecuteNonQuery();
    }
    transaction.Commit();
}
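The transaction-plus-parameterized-statement pattern works the same way through any SQLite binding, since it is the engine (not the language) that does the work. Here is a minimal, runnable sketch using Python's built-in sqlite3 module; the in-memory database and the generated `logs` list are assumptions for the demo:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Logs (DeviceId TEXT, Value REAL)")

# Demo data standing in for the article's device logs
logs = [(f"dev-{i % 10}", i * 0.5) for i in range(100_000)]

# One parameterized statement, all rows inside a single transaction:
# sqlite3 opens the transaction implicitly and commit() ends it.
start = time.perf_counter()
conn.executemany("INSERT INTO Logs (DeviceId, Value) VALUES (?, ?)", logs)
conn.commit()
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM Logs").fetchone()[0]
print(count, f"{elapsed:.2f}s")
```

Running the same insert with one commit per row is typically one to two orders of magnitude slower, because every commit forces a sync to the journal.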

Pagination query: loading millions of rows

public List<DeviceLog> GetPagedLogs(int pageIndex, int pageSize)
{
    // Query<T> is an ORM extension method (e.g. Dapper)
    return connection.Query<DeviceLog>(@"
        SELECT * FROM Logs 
        ORDER BY LogTime DESC 
        LIMIT @PageSize 
        OFFSET @Offset",
        new { 
            PageSize = pageSize, 
            Offset = pageIndex * pageSize 
        }).ToList();
}
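The same LIMIT/OFFSET shape can be exercised directly against SQLite; this sketch in Python's built-in sqlite3 module assumes an in-memory database and generated timestamps purely for illustration:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Logs (Id INTEGER PRIMARY KEY, DeviceId TEXT, LogTime TEXT, Value REAL)"
)

# 1,000 demo rows, one minute apart (ISO-8601 strings sort chronologically)
base = datetime(2024, 1, 1)
rows = [("dev-1", (base + timedelta(minutes=i)).isoformat(), float(i)) for i in range(1000)]
conn.executemany("INSERT INTO Logs (DeviceId, LogTime, Value) VALUES (?, ?, ?)", rows)
conn.commit()

def get_paged_logs(page_index: int, page_size: int):
    # Same LIMIT/OFFSET query as the C# version above
    return conn.execute(
        "SELECT Id, DeviceId, LogTime, Value FROM Logs "
        "ORDER BY LogTime DESC LIMIT ? OFFSET ?",
        (page_size, page_index * page_size),
    ).fetchall()

page = get_paged_logs(0, 50)
print(len(page), page[0][3])  # first page, newest row first
```

Note that OFFSET still scans and discards the skipped rows, so for very deep pages a keyset condition (`WHERE LogTime < @lastSeen`) scales better.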

Skill combination:

Pair with WPF's VirtualizingPanel for smooth scrolling in the UI

Add a descending index on LogTime to speed up the ORDER BY

Asynchronous processing: keep the UI responsive

public async Task BulkInsertAsync(List<DeviceLog> logs)
{
    await using var conn = new SQLiteConnection(connectionString);
    await conn.OpenAsync();
    
    // Asynchronous version of the transaction
    await using var transaction = await conn.BeginTransactionAsync();
    // ...batch insertion logic (same as above)...
    await transaction.CommitAsync();
}
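The underlying idea is simply to keep blocking database work off the UI/event-loop thread. A minimal sketch of the same idea in Python (the `bulk_insert` helper, the in-memory path, and the demo rows are assumptions; `asyncio.to_thread` plays the role of the awaitable call):

```python
import asyncio
import sqlite3

def bulk_insert(db_path: str, logs) -> int:
    # Blocking sqlite3 work; the connection is created and used
    # entirely inside the worker thread.
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS Logs (DeviceId TEXT, Value REAL)")
        conn.executemany("INSERT INTO Logs (DeviceId, Value) VALUES (?, ?)", logs)
        conn.commit()
        return conn.execute("SELECT COUNT(*) FROM Logs").fetchone()[0]
    finally:
        conn.close()

async def bulk_insert_async(db_path: str, logs) -> int:
    # Offload the blocking insert so the event loop stays responsive
    return await asyncio.to_thread(bulk_insert, db_path, logs)

count = asyncio.run(bulk_insert_async(":memory:", [("dev-1", 1.0), ("dev-2", 2.0)]))
print(count)
```

As the summary below notes, async frees the calling thread but does not remove database lock contention; writers still serialize inside SQLite.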

Advanced optimization

Indexing

Golden rule: index only high-frequency query fields

Composite indexes: for a query like WHERE LogType='Error' AND LogTime > '2024-01-01', create a joint index on (LogType, LogTime)

Rebuilding: run the REINDEX command periodically to optimize the index structure
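You can verify that such a composite index actually serves the equality-plus-range query by asking SQLite for its plan. A small sketch using Python's built-in sqlite3 module (the index name and in-memory table are assumptions for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Logs (LogType TEXT, LogTime TEXT, Value REAL)")
# Composite index: equality column first, range column second
conn.execute("CREATE INDEX idx_type_time ON Logs (LogType, LogTime)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM Logs WHERE LogType = 'Error' AND LogTime > '2024-01-01'"
).fetchall()
detail = plan[0][-1]  # last column is the human-readable plan line
print(detail)  # expected to mention e.g. SEARCH ... USING INDEX idx_type_time
```

If the plan line says SCAN instead of SEARCH, the index is not being used and the query or index design needs another look.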

Connection pool

// System.Data.SQLite configures pooling through the connection string
// rather than a dedicated API; set it up when the app starts
var connectionString = "Data Source=app.db;Pooling=True;Max Pool Size=10;";

Splitting databases and tables

Split the log table by month:

var tableName = $"Logs_{DateTime.Now:yyyyMM}";
ExecuteNonQuery($"CREATE TABLE IF NOT EXISTS {tableName} (...);");
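The per-month routing amounts to deriving the table name from the timestamp and creating the table on first use. A runnable sketch of that scheme in Python's sqlite3 module (table columns and helper names are illustrative assumptions):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")

def table_for(ts: datetime) -> str:
    # Same naming scheme as the C# snippet: Logs_YYYYMM
    name = f"Logs_{ts:%Y%m}"
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} (DeviceId TEXT, LogTime TEXT, Value REAL)"
    )
    return name

def insert_log(device_id: str, ts: datetime, value: float) -> None:
    # Route each row to its month's table
    name = table_for(ts)
    conn.execute(
        f"INSERT INTO {name} (DeviceId, LogTime, Value) VALUES (?, ?, ?)",
        (device_id, ts.isoformat(), value),
    )

insert_log("dev-1", datetime(2024, 1, 15), 1.0)
insert_log("dev-1", datetime(2024, 2, 3), 2.0)
conn.commit()

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['Logs_202401', 'Logs_202402']
```

The table name is interpolated rather than parameterized because SQLite does not allow parameters for identifiers; since it is derived from a formatted date, not user input, this is safe.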

Pitfall avoidance guide

1. Concurrent write trap

Enable WAL mode to improve concurrency: PRAGMA journal_mode=WAL;

Set a busy-wait timeout: PRAGMA busy_timeout=5000;
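Both PRAGMAs report their effective value back, so you can confirm they took effect. A small check in Python's sqlite3 module (WAL needs a file-backed database, so the temp-file path is an assumption for the demo):

```python
import os
import sqlite3
import tempfile

# WAL mode requires a real file; it is not available for :memory: databases
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

# journal_mode=WAL returns the mode actually in effect
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# Wait up to 5 seconds on a locked database instead of failing immediately
conn.execute("PRAGMA busy_timeout=5000")
timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(mode, timeout)  # wal 5000
```

Checking the returned journal mode matters: if the database cannot switch (e.g. another connection holds it in a different mode), SQLite silently reports the old mode rather than raising an error.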

2. Memory explosion warning

// Force paging during queries
command.CommandText = "SELECT * FROM BigTable LIMIT 5000";

3. Periodic maintenance is indispensable

// Compact the database and refresh planner statistics once a month
ExecuteNonQuery("VACUUM; ANALYZE;");
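The effect of VACUUM is easy to observe: deleted rows leave free pages in the file, and VACUUM rebuilds the file without them. A demonstration in Python's sqlite3 module (the temp-file path and payload sizes are assumptions for the demo):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE BigTable (Payload TEXT)")
conn.executemany("INSERT INTO BigTable VALUES (?)", [("x" * 1000,) for _ in range(5000)])
conn.commit()

conn.execute("DELETE FROM BigTable")  # pages stay allocated on the free list
conn.commit()
before = os.path.getsize(path)

conn.execute("VACUUM")   # rebuilds the file, releasing the free pages
conn.execute("ANALYZE")  # refreshes the query planner's statistics
after = os.path.getsize(path)
print(before, after, after < before)
```

Note that VACUUM cannot run inside an open transaction and needs temporary disk space up to roughly the size of the database, so schedule it for quiet periods.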

Performance optimization suggestions

1. Use transactions to process data in batches

2. Create an appropriate index for common queries

3. Use parameterized query

4. Paging to obtain large data sets

5. Consider using asynchronous and parallel processing

Things to note

SQLite has limited support for concurrent writes

Consider splitting databases and tables when data volumes grow large

Regular database maintenance and vacuum operations

Summarize

Through this optimization journey, three truths have been verified:

1. Transactions are the savior of batch operations: using transactions reasonably can increase the writing speed by dozens of times

2. Index is a double-edged sword: accurate index design can make queries fly, while abuse will lead to write disasters.

3. Asynchronous is not a silver bullet: Although UI threads are free, database lock competition still needs to be handled with caution

Finally, one performance-tuning method worth sharing: always analyze statements with SQLite's own EXPLAIN QUERY PLAN before writing the code that optimizes them. Remember: the larger the data volume, the more restrained the design should be.
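That analyze-first workflow can be sketched in a few lines: capture the plan before and after adding an index and compare. Shown here in Python's built-in sqlite3 module (the table, query, and index name are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Logs (DeviceId TEXT, LogTime TEXT)")

def plan(sql: str) -> str:
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

query = "SELECT * FROM Logs WHERE DeviceId = 'dev-1'"
before = plan(query)  # without an index: a full table SCAN
conn.execute("CREATE INDEX idx_device ON Logs (DeviceId)")
after = plan(query)   # with the index: a SEARCH using idx_device
print(before)
print(after)
```

Seeing SCAN turn into SEARCH is the concrete signal that the index design is doing its job for that query.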

at last

The above covers the code examples for efficiently processing large data volumes with SQLite in C#. For more on large-data processing with C# and SQLite, see my other related articles!