1. Overview of database read-write separation
In large application systems, the database often becomes the performance bottleneck as traffic grows. Read-write separation is a classic database architecture pattern for improving read/write performance and availability: it routes read operations and write operations to different database instances, usually sending writes to the master library (Master) and reads to the slave library (Slave).
The main advantages of read-write separation:
- Spreads database access pressure and improves overall system throughput
- Improves the performance and concurrency of read operations
- Enhances system availability and fault tolerance
In SpringBoot applications, database read-write separation can be implemented in several ways. This article introduces three main solutions.
2. Solution 1: Implement dynamic data source based on AbstractRoutingDataSource
This solution is based on Spring's AbstractRoutingDataSource abstract class: by overriding the determineCurrentLookupKey() method, the data source can be switched dynamically.
2.1 Implementation Principle
The core principle of AbstractRoutingDataSource is to select the actual data source dynamically when a database operation executes, according to some policy (usually the context of the current operation). By setting a context flag in the service layer or in an AOP interceptor, the system decides whether the operation is a read or a write and selects the corresponding data source.
2.2 Specific implementation steps
Step 1: Define the data source enumeration and context holder
```java
// Enumeration of data source types
public enum DataSourceType {
    MASTER, // Master library, used for write operations
    SLAVE   // Slave library, used for read operations
}

// Data source context holder
public class DataSourceContextHolder {

    private static final ThreadLocal<DataSourceType> contextHolder = new ThreadLocal<>();

    public static void setDataSourceType(DataSourceType dataSourceType) {
        contextHolder.set(dataSourceType);
    }

    public static DataSourceType getDataSourceType() {
        // Default to the master library when nothing has been set
        return contextHolder.get() == null ? DataSourceType.MASTER : contextHolder.get();
    }

    public static void clearDataSourceType() {
        contextHolder.remove();
    }
}
```
Step 2: Implement dynamic data sources
```java
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getDataSourceType();
    }
}
```
Step 3: Configure the data source
```java
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaveDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DataSource dynamicDataSource() {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        Map<Object, Object> dataSourceMap = new HashMap<>(2);
        dataSourceMap.put(DataSourceType.MASTER, masterDataSource());
        dataSourceMap.put(DataSourceType.SLAVE, slaveDataSource());
        // Use the master library as the default data source
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource());
        dynamicDataSource.setTargetDataSources(dataSourceMap);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory() throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dynamicDataSource());
        // Other MyBatis configuration
        // ...
        return sqlSessionFactoryBean.getObject();
    }
}
```
Step 4: Implement the AOP interceptor and automatically switch the data source according to the method matching rules
```java
@Aspect
@Component
public class DataSourceAspect {

    // Intercept all service-layer methods (adjust the package to your project)
    @Pointcut("execution(* com.example.service..*.*(..))")
    public void servicePointcut() {}

    @Before("servicePointcut()")
    public void switchDataSource(JoinPoint point) {
        // Get the method name
        String methodName = point.getSignature().getName();
        // Treat methods starting with select, query, get or find as read operations
        if (methodName.startsWith("select") || methodName.startsWith("query")
                || methodName.startsWith("get") || methodName.startsWith("find")) {
            // Read operations use the slave library
            DataSourceContextHolder.setDataSourceType(DataSourceType.SLAVE);
        } else {
            // Write operations use the master library
            DataSourceContextHolder.setDataSourceType(DataSourceType.MASTER);
        }
    }

    @After("servicePointcut()")
    public void restoreDataSource() {
        // Clear the data source binding
        DataSourceContextHolder.clearDataSourceType();
    }
}
```
Step 5: Configuration File
```yaml
spring:
  datasource:
    master:
      jdbc-url: jdbc:mysql://master-db:3306/test?useSSL=false
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
    slave:
      jdbc-url: jdbc:mysql://slave-db:3306/test?useSSL=false
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
```
Step 6: Use annotation method to flexibly control the data source (optional enhancement)
```java
// Define a custom annotation
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DataSource {
    DataSourceType value() default DataSourceType.MASTER;
}

// Modify the AOP aspect so that the data source specified by the annotation takes priority
@Aspect
@Component
public class DataSourceAspect {

    // Adjust the package to where the @DataSource annotation lives
    @Pointcut("@annotation(com.example.annotation.DataSource)")
    public void dataSourcePointcut() {}

    @Before("dataSourcePointcut()")
    public void switchDataSource(JoinPoint point) {
        MethodSignature signature = (MethodSignature) point.getSignature();
        Method method = signature.getMethod();
        DataSource dataSource = method.getAnnotation(DataSource.class);
        if (dataSource != null) {
            DataSourceContextHolder.setDataSourceType(dataSource.value());
        }
    }

    @After("dataSourcePointcut()")
    public void restoreDataSource() {
        DataSourceContextHolder.clearDataSourceType();
    }
}

// Use the annotation on service methods
@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserMapper userMapper;

    @Override
    @DataSource(DataSourceType.SLAVE)
    public List<User> findAllUsers() {
        return userMapper.selectAll();
    }

    @Override
    @DataSource(DataSourceType.MASTER)
    public void createUser(User user) {
        userMapper.insert(user);
    }
}
```
2.3 Pros and cons analysis
Advantages:
- Simple to implement, with no dependency on third-party components
- Low intrusiveness and little impact on business code
- Highly flexible; data sources can be switched according to business needs
- Extensible to multiple data sources, not limited to one master and one slave
Disadvantages:
- Read and write operations must be specified manually or distinguished by naming conventions
Applicable scenarios:
- Small and medium-sized projects with clear read/write boundaries
- Scenarios where introducing middleware is undesirable
- Quick read-write separation for short-term performance optimization
3. Solution 2: Read-write separation based on ShardingSphere-JDBC
ShardingSphere-JDBC is a sub-project of Apache ShardingSphere. Through client-side sharding, it provides applications with transparent read-write separation, database/table sharding, and other capabilities.
3.1 Implementation Principle
ShardingSphere-JDBC implements read-write separation at the JDBC driver level: it intercepts JDBC calls, parses the SQL, and rewrites the execution flow. Based on SQL semantics it automatically distinguishes reads from writes and load-balances read operations across multiple slave libraries.
3.2 Specific implementation steps
Step 1: Add dependencies
```xml
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
    <version>5.2.1</version>
</dependency>
<dependency>
    <groupId>com.mysql</groupId>
    <artifactId>mysql-connector-j</artifactId>
    <scope>runtime</scope>
</dependency>
```
Step 2: Configuration file
```yaml
spring:
  shardingsphere:
    mode:
      type: Memory
    datasource:
      names: master,slave1,slave2
      master:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://master-db:3306/test?useSSL=false
        username: root
        password: root
      slave1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://slave1-db:3306/test?useSSL=false
        username: root
        password: root
      slave2:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://slave2-db:3306/test?useSSL=false
        username: root
        password: root
    rules:
      readwrite-splitting:
        data-sources:
          readwrite_ds:
            type: Static
            props:
              write-data-source-name: master
              read-data-source-names: slave1,slave2
            load-balancer-name: round_robin
        load-balancers:
          round_robin:
            type: ROUND_ROBIN
    props:
      sql-show: true # Show the routed SQL for easier debugging
```
Step 3: Create a data source configuration class
```java
@Configuration
public class DataSourceConfig {

    // No additional data source configuration is required;
    // ShardingSphere-JDBC automatically creates and registers the DataSource
    @Bean
    @ConfigurationProperties(prefix = "mybatis")
    public SqlSessionFactoryBean sqlSessionFactoryBean(DataSource dataSource) {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSource);
        return sqlSessionFactoryBean;
    }
}
```
Step 4: Force the annotation of the main library query (optional)
In some scenarios, even query operations require reading the latest data from the main library. ShardingSphere provides a hint mechanism to achieve this requirement.
```java
// Define an annotation that forces queries to go to the master library
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface MasterRoute {
}

// Create an AOP aspect interceptor
@Aspect
@Component
public class MasterRouteAspect {

    // Adjust the package to where the @MasterRoute annotation lives
    @Around("@annotation(com.example.annotation.MasterRoute)")
    public Object aroundMasterRoute(ProceedingJoinPoint joinPoint) throws Throwable {
        try {
            HintManager.getInstance().setWriteRouteOnly();
            return joinPoint.proceed();
        } finally {
            HintManager.clear();
        }
    }
}

// Use the annotation on methods that must query the master library
@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderMapper orderMapper;

    @Override
    @MasterRoute
    public Order getLatestOrder(Long userId) {
        // This query will be routed to the master library
        return orderMapper.selectLatestByUserId(userId);
    }
}
```
3.3 Pros and cons analysis
Advantages:
- Automatically recognizes SQL types; no need to specify the read or write data source manually
- Supports load balancing across multiple slave libraries
- Provides rich load-balancing algorithms (round robin, random, weighted, etc.)
- Full database/table sharding capability, allowing seamless expansion later
- Transparent to the application; no changes to business code
Disadvantages:
- Introduces an extra dependency and a learning curve
- Relatively complex configuration
- Slight performance overhead (SQL parsing and routing)
Applicable scenarios:
- Medium and large projects with clear read-write separation requirements
- Reads that must be load-balanced across multiple slave libraries
- Systems that may need database/table sharding in the future
4. Solution 3: Read-write separation based on a MyBatis plugin
MyBatis provides a powerful plugin mechanism that allows interception at different stages of SQL execution. With a custom plugin, read-write separation based on SQL analysis can be implemented.
4.1 Implementation Principle
MyBatis allows intercepting the Executor's query and update methods; by intercepting them, the data source can be switched dynamically before the SQL executes. The core of this approach is an interceptor that analyzes the type of the SQL statement about to run (SELECT/INSERT/UPDATE/DELETE) and switches to the corresponding data source based on that type.
4.2 Specific implementation steps
Step 1: Define the data source and context (similar to Scheme 1)
```java
public enum DataSourceType { MASTER, SLAVE }

public class DataSourceContextHolder {

    private static final ThreadLocal<DataSourceType> contextHolder = new ThreadLocal<>();

    public static void setDataSourceType(DataSourceType dataSourceType) {
        contextHolder.set(dataSourceType);
    }

    public static DataSourceType getDataSourceType() {
        return contextHolder.get() == null ? DataSourceType.MASTER : contextHolder.get();
    }

    public static void clearDataSourceType() {
        contextHolder.remove();
    }
}

public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getDataSourceType();
    }
}
```
Step 2: Implement MyBatis Interceptor
```java
@Intercepts({
    @Signature(type = Executor.class, method = "query",
            args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
    @Signature(type = Executor.class, method = "query",
            args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class}),
    @Signature(type = Executor.class, method = "update",
            args = {MappedStatement.class, Object.class})
})
@Component
public class ReadWriteSplittingInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        MappedStatement ms = (MappedStatement) args[0];
        try {
            // Check whether a transaction is active
            boolean isTransactional = TransactionSynchronizationManager.isActualTransactionActive();
            // Inside a transaction, always use the master library
            if (isTransactional) {
                DataSourceContextHolder.setDataSourceType(DataSourceType.MASTER);
                return invocation.proceed();
            }
            // Select the data source according to the SQL command type
            if (ms.getSqlCommandType() == SqlCommandType.SELECT) {
                // Read operations use the slave library
                DataSourceContextHolder.setDataSourceType(DataSourceType.SLAVE);
            } else {
                // Write operations use the master library
                DataSourceContextHolder.setDataSourceType(DataSourceType.MASTER);
            }
            return invocation.proceed();
        } finally {
            // Clear the data source binding
            DataSourceContextHolder.clearDataSourceType();
        }
    }

    @Override
    public Object plugin(Object target) {
        if (target instanceof Executor) {
            return Plugin.wrap(target, this);
        }
        return target;
    }

    @Override
    public void setProperties(Properties properties) {
        // Properties can be loaded from configuration files
    }
}
```
Step 3: Configure the data source and the MyBatis plugin
```java
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaveDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DataSource dynamicDataSource() {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        Map<Object, Object> dataSourceMap = new HashMap<>(2);
        dataSourceMap.put(DataSourceType.MASTER, masterDataSource());
        dataSourceMap.put(DataSourceType.SLAVE, slaveDataSource());
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource());
        dynamicDataSource.setTargetDataSources(dataSourceMap);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(@Autowired ReadWriteSplittingInterceptor interceptor) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dynamicDataSource());
        // Register the MyBatis plugin
        sqlSessionFactoryBean.setPlugins(new Interceptor[]{interceptor});
        // Other MyBatis configuration
        // ...
        return sqlSessionFactoryBean.getObject();
    }
}
```
Step 4: Force master-library queries with an annotation (optional)
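One possible way to support this in the plugin scheme is a thread-local "force master" flag that an annotation-driven aspect raises and that the interceptor checks before routing a SELECT to the slave. The sketch below is an assumption for illustration: the @MasterOnly annotation, its package, and the FORCE_MASTER field are not part of the original code.

```java
// Hypothetical annotation marking queries that must read from the master library
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface MasterOnly {
}

// Aspect that raises a thread-local "force master" flag around annotated methods
@Aspect
@Component
public class MasterOnlyAspect {

    public static final ThreadLocal<Boolean> FORCE_MASTER = ThreadLocal.withInitial(() -> false);

    // Adjust the package to wherever the annotation is declared
    @Around("@annotation(com.example.annotation.MasterOnly)")
    public Object forceMaster(ProceedingJoinPoint joinPoint) throws Throwable {
        try {
            FORCE_MASTER.set(true);
            return joinPoint.proceed();
        } finally {
            FORCE_MASTER.remove();
        }
    }
}
```

Inside ReadWriteSplittingInterceptor.intercept(), check MasterOnlyAspect.FORCE_MASTER.get() before routing a SELECT to the slave and, if it is set, route to the master instead.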
4.3 Pros and cons analysis
Advantages:
- Automatically recognizes SQL types; no need to specify the data source manually
- Flexible and extensible; supports complex routing rules
- Based on MyBatis's native plugin mechanism; no additional middleware required
Disadvantages:
- Only applicable to projects that use MyBatis
- Requires understanding of the MyBatis plugin mechanism
- No built-in load balancing; it must be developed separately
- May conflict with other MyBatis plugins
- Transaction management is more complex
Applicable scenarios:
- Projects built purely on MyBatis
- Scenarios with heavy customization requirements
- Projects with restrictions on third-party middleware
- Scenarios that need fine-grained control over read-write routing
5. Comparison of the three solutions and selection guide
5.1 Function comparison
| Feature | Solution 1: AbstractRoutingDataSource | Solution 2: ShardingSphere-JDBC | Solution 3: MyBatis plugin |
|---|---|---|---|
| Automatic SQL-type recognition | ❌ Manual or convention-based | ✅ Automatic | ✅ Automatic |
| Multi-slave load balancing | ❌ Must be implemented yourself | ✅ Multiple built-in algorithms | ❌ Must be implemented yourself |
| Database/table sharding integration | ❌ Not supported | ✅ Native support | ❌ Requires extra development |
| Development complexity | ⭐⭐ Medium | ⭐ Low | ⭐⭐⭐ High |
| Configuration complexity | ⭐ Low | ⭐⭐⭐ High | ⭐⭐ Medium |
5.2 Selection suggestions
Choose Solution 1 (AbstractRoutingDataSource) when:
- The project is small and the read-write routing rules are simple and clear
- You want to minimize third-party dependencies
- The team is familiar with Spring's native mechanisms
- The system is at an early stage and may change frequently
Choose Solution 2 (ShardingSphere-JDBC) when:
- The project is medium or large with complex database access requirements
- Reads must be load-balanced across multiple slave libraries
- Database/table sharding may be needed in the future
- You want minimal intrusion into business code
- Development efficiency is a priority
Choose Solution 3 (MyBatis plugin) when:
- The project is built entirely on MyBatis
- The team is familiar with the MyBatis plugin mechanism
- There are specific customization requirements
- You want fine-grained control over SQL routing
- There are strict restrictions on framework dependencies
6. Best practices for implementing read-write separation
6.1 Data consistency processing
Data replication from the master to the slave is delayed, which can lead to stale reads. Ways to deal with it:
- Provide a force-master option: for queries that must see the latest data, provide a mechanism to read from the master library
- Session consistency: read and write operations within the same session use the same data source (a sketch follows the lag-detection example below)
- Lag detection: periodically measure master-slave replication lag and pause slave queries when the lag exceeds a threshold
```java
// Example implementation of replication-lag detection
@Component
@Slf4j
public class ReplicationLagMonitor {

    @Autowired
    private JdbcTemplate masterJdbcTemplate;

    @Autowired
    private JdbcTemplate slaveJdbcTemplate;

    private final AtomicBoolean slaveTooLagged = new AtomicBoolean(false);

    @Scheduled(fixedRate = 5000) // Check every 5 seconds
    public void checkReplicationLag() {
        String mark = UUID.randomUUID().toString();
        try {
            // Write a marker row to the master library
            masterJdbcTemplate.update(
                    "INSERT INTO replication_marker(marker, create_time) VALUES(?, NOW())", mark);

            // Give the slave a moment to replicate
            Thread.sleep(1000);

            // Look for the marker on the slave
            Integer count = slaveJdbcTemplate.queryForObject(
                    "SELECT COUNT(*) FROM replication_marker WHERE marker = ?", Integer.class, mark);

            // Decide whether replication is lagging
            boolean lagged = (count == null || count == 0);
            slaveTooLagged.set(lagged);
            if (lagged) {
                log.warn("Slave replication lag detected, routing read operations to master");
            } else {
                log.debug("Slave replication is in sync");
            }
        } catch (Exception e) {
            log.error("Failed to check replication lag", e);
            // Be conservative on errors and assume the slave is lagging
            slaveTooLagged.set(true);
        } finally {
            // Clean up the marker row
            masterJdbcTemplate.update("DELETE FROM replication_marker WHERE marker = ?", mark);
        }
    }

    public boolean isSlaveTooLagged() {
        return slaveTooLagged.get();
    }
}
```
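For the session consistency strategy listed above, a minimal sketch of one possible approach is to pin the current thread to the master library once it has performed a write, so later reads in the same request observe that write. The SessionConsistencyHolder class and its method names are illustrative assumptions, not part of any framework:

```java
// Thread-local flag: has the current request already written to the master library?
public class SessionConsistencyHolder {

    private static final ThreadLocal<Boolean> WROTE_IN_SESSION = ThreadLocal.withInitial(() -> false);

    public static void markWrite() {
        WROTE_IN_SESSION.set(true);
    }

    public static boolean hasWritten() {
        return WROTE_IN_SESSION.get();
    }

    public static void clear() {
        WROTE_IN_SESSION.remove();
    }
}
```

The routing logic (the AOP aspect or the MyBatis interceptor shown earlier) would call markWrite() whenever it routes a write, check hasWritten() before sending a read to the slave, and call clear() at the end of the request, for example in a servlet filter.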
6.2 Transaction Management
Transactions need special attention in a read-write separated environment:
- Keep all operations within a transaction on the master library to ensure transactional consistency
- Avoid long transactions: they hold master-library resources for a long time
- Distinguish read-only transactions: read-only transactions can be considered for routing to the slave library (see the sketch below)
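A minimal sketch of one way to route read-only transactions to the slave while read-write transactions stay on the master, assuming the dynamic data source and context holder from Solution 1; the aspect name is an illustrative assumption, and this is a sketch rather than a reference implementation:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Aspect
@Component
@Order(Ordered.HIGHEST_PRECEDENCE) // run before the @Transactional advice obtains a connection
public class TransactionalRoutingAspect {

    // Only matches methods annotated directly with @Transactional;
    // the annotation is bound to the "transactional" parameter
    @Around("@annotation(transactional)")
    public Object routeByTransaction(ProceedingJoinPoint joinPoint, Transactional transactional) throws Throwable {
        try {
            if (transactional.readOnly()) {
                // Read-only transactions may be served by the slave library
                DataSourceContextHolder.setDataSourceType(DataSourceType.SLAVE);
            } else {
                // Read-write transactions stay on the master library
                DataSourceContextHolder.setDataSourceType(DataSourceType.MASTER);
            }
            return joinPoint.proceed();
        } finally {
            DataSourceContextHolder.clearDataSourceType();
        }
    }
}
```

Ordering the aspect ahead of the transaction advice matters: the data source must be chosen before the transaction manager obtains a connection from the routing data source.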
6.3 Monitoring and performance optimization
- Monitor read and write ratio: Understand the system's read and write ratio and optimize resource allocation
- Slow query monitoring: Monitor slow queries on each data source
- Connection pool optimization: Adjust the connection pool parameters according to the actual load
```yaml
# HikariCP connection pool configuration example
spring:
  datasource:
    master:
      # The master library mostly handles writes, so its pool can be somewhat smaller
      maximum-pool-size: 20
      minimum-idle: 5
    slave:
      # The slave library mostly handles reads, so its pool can be somewhat larger
      maximum-pool-size: 50
      minimum-idle: 10
```
7. Summary
When implementing read-write separation, pay particular attention to data consistency, transaction management, and failure handling.
With sound architectural design and careful implementation, read-write separation can effectively improve a system's read/write performance and scalability, providing strong support for the high availability and high performance of the application.
Regardless of the option you choose, remember that read-write separation is an architectural pattern, not a panacea to solve all performance problems. Before implementation, the actual needs and potential risks of the system should be fully evaluated to ensure that the benefits outweigh the costs.