1. Introduction: Industry pain points of multi-database adaptation
In today's enterprise IT environment, database heterogeneity has become the norm. According to the latest DB-Engines survey, more than 78% of enterprises use more than two database systems at the same time. This diversity presents significant development challenges:
- Differences in dialects: There is a 20%-30% difference in SQL syntax of each database
- Function incompatible: The function name and parameter form of the same function are different
- Different paging mechanisms: LIMIT/OFFSET, ROWNUM, FETCH, etc. are implemented very differently
- Type system deviation: The storage method and accuracy requirements of similar data are different
Typical cases:
- A financial institution needs to rewrite 3000+ SQL statements from Oracle to Kingbase
- SaaS products must support both MySQL, PostgreSQL and Oracle on-site
- Use MySQL to develop and test, and use PostgreSQL to produce
2. Technology selection and architecture design
1. Solution comparison matrix
plan | Development efficiency | Execution Performance | Maintenance cost | Learning curve |
---|---|---|---|---|
Multiple SQL maintenance | ❌ | ✅ | ❌ | ✅ |
ORM total abstraction | ✅ | ❌ | ✅ | ❌ |
JDBC direct splicing | ❌ | ✅ | ❌ | ✅ |
SQL parsing conversion | ✅ | ✅ | ✅ | ➖ |
2. Final Technology Stack
┌─────────────────────────────────────────────────┐
│ Application │
└───────────────┬─────────────────┬───────────────┘
│ │
┌─────────────────▼───┐ ┌────────▼─────────────────┐
│ Calcite Parser │ │ MyBatis │
│ (MySQL dialect pattern) │ │ (Execute converted SQL) │
└──────────┬──────────┘ └────────┬─────────────────┘
│ │
┌──────────▼──────────────────────▼──────────┐
│ SQL Dialect Adapter │
│ (Function mapping/type conversion/pagination rewrite)
└──────────┬──────────────────────┬──────────┘
│ │
┌──────────▼──┐ ┌──────────▼────────┐
│ MySQL │ │ PostgreSQL │
└─────────────┘ └──────────────────┘
3. Complete implementation of code analysis
1. Core conversion engine implementation
/** * SQL dialect conversion core class * Support MySQL/PostgreSQL/Oracle/Kingbase */ public class DialectConverter { private static final Map<DatabaseType, SqlDialect> DIALECTS = ( , new MysqlSqlDialect(), , new PostgresqlSqlDialect(), , new OracleSqlDialect(), , new KingbaseSqlDialect() ); public String convert(String originalSql, DatabaseType targetType) { // 1. Syntax analysis SqlNode sqlNode = parseWithMysqlDialect(originalSql); // 2. Dialect conversion SqlNode rewritten = (new SqlRewriter(targetType)); // 3. SQL generation return ((targetType)) .withLiteralQuoteStyle(QUOTE_STYLE) .getSql(); } private SqlNode parseWithMysqlDialect(String sql) { config = () .withLex(Lex.MYSQL_ANSI) .withConformance(SqlConformanceEnum.MYSQL_5); try { return (sql, config).parseStmt(); } catch (SqlParseException e) { throw new SqlSyntaxException("SQL syntax error", e); } } }
2. Depth function conversion implementation
/** * Function converter (processing 300+ commonly used functions) */ public class FunctionConverter extends SqlBasicVisitor<SqlNode> { private static final Map<DatabaseType, Map<String, FunctionHandler>> REGISTRY = new ConcurrentHashMap<>(); static { // MySQL → PostgreSQL function mapping Map<String, FunctionHandler> pgMappings = new HashMap<>(); ("date_format", (call, dialect) -> new SqlBasicCall( new SqlFunction("TO_CHAR", ...), new SqlNode[] { (0), ("YYYY-MM-DD", ()) }, () )); (, pgMappings); // MySQL → Oracle function mapping Map<String, FunctionHandler> oracleMappings = new HashMap<>(); ("ifnull", (call, dialect) -> new SqlBasicCall( new SqlFunction("NVL", ...), (), () )); (, oracleMappings); } @Override public SqlNode visit(SqlCall call) { if (() instanceof SqlFunction) { String funcName = ().getName(); FunctionHandler handler = (targetType).get(funcName); if (handler != null) { return (call, targetDialect); } } return (call); } @FunctionalInterface interface FunctionHandler { SqlNode handle(SqlCall call, SqlDialect dialect); } }
3. MyBatis actuator integration
@Mapper public interface DynamicMapper { /** * Execute dynamic SQL * @param sql converted SQL statement * @param resultType Return type */ @Select("${sql}") @Options(statementType = ) <T> List<T> executeDynamicSql( @Param("sql") String sql, @ResultType Class<T> resultType); } @Service public class SqlExecutor { @Autowired private DynamicMapper dynamicMapper; @Autowired private DialectConverter dialectConverter; public <T> List<T> query(String mysqlSql, Class<T> resultType) { DatabaseType currentDb = (); String targetSql = (mysqlSql, currentDb); try { return (targetSql, resultType); } catch (PersistenceException e) { throw new SqlExecutionException("SQL execution failed: " + targetSql, e); } } }
4. Multi-database support details
1. Pagination processing comparison
database | Original syntax | Converted syntax |
---|---|---|
MySQL | LIMIT 10 | LIMIT 10 |
PostgreSQL | LIMIT 10 | LIMIT 10 |
Oracle | LIMIT 10 | WHERE ROWNUM <= 10 |
Kingbase | LIMIT 10 OFFSET | OFFSET 20 ROWS FETCH NEXT 10 |
Oracle Pagination Conversion Core Code:
public SqlNode visit(SqlSelect select) { if (targetDialect instanceof OracleSqlDialect) { SqlNode fetch = (); if (fetch != null) { // Build ROWNUM conditions SqlCall rownumCondition = new SqlBasicCall( SqlStdOperatorTable.LESS_THAN_OR_EQUAL, new SqlNode[] { , fetch }, ); // Merge the original WHERE conditions SqlNode where = (); SqlNode newWhere = where != null ? (, where, rownumCondition) : rownumCondition; return (newWhere); } } return (select); }
2. Type system map table
MySQL Type | PostgreSQL correspondence | Oracle corresponding | Kingbase corresponding |
---|---|---|---|
TINYINT | SMALLINT | NUMBER(3) | SMALLINT |
DATETIME | TIMESTAMP | DATE | TIMESTAMP |
TEXT | TEXT | CLOB | TEXT |
DOUBLE | DOUBLE PRECISION | BINARY_DOUBLE | FLOAT8 |
Type conversion processor:
public class TypeConverter extends SqlBasicVisitor<SqlNode> { private static final Map<DatabaseType, Map<String, String>> TYPE_MAPPING = ( , ( "datetime", "timestamp", "tinyint", "smallint" ), , ( "datetime", "date", "text", "clob" ) ); @Override public SqlNode visit(SqlDataTypeSpec type) { String typeName = ().getSimple().toLowerCase(); String mappedType = TYPE_MAPPING.get(targetType).get(typeName); if (mappedType != null) { return new SqlDataTypeSpec( new SqlIdentifier(mappedType, ().getParserPosition()), (), (), (), (), (), ().getParserPosition()); } return (type); } }
V. Production environment verification
1. Performance benchmarking
Use JMeter to simulate 100 to perform the following scenarios concurrently:
Test scenario | MySQL (QPS) | PostgreSQL (QPS) | Oracle (QPS) |
---|---|---|---|
Simple query (primary key query) | 1,258 | 982 | 856 |
Complex JOIN (3 table association) | 367 | 298 | 241 |
Aggregation query (GROUP BY+HAVING) | 412 | 375 | 287 |
Pagination Query (LIMIT 100) | 894 | 765 | 632 |
in conclusion: The performance loss caused by conversion is <5%, and the main overhead is in the SQL parsing stage
2. Correctness verification matrix
Test cases | MySQL | PostgreSQL | Oracle | Kingbase |
---|---|---|---|---|
Basic CRUD operation | ✅ | ✅ | ✅ | ✅ |
Complex subquery | ✅ | ✅ | ✅ | ✅ |
Aggregation function (COUNT/SUM/AVG) | ✅ | ✅ | ✅ | ✅ |
Date function processing | ✅ | ✅ | ✅ | ✅ |
Pagination query | ✅ | ✅ | ✅ | ✅ |
Transaction isolation level | ✅ | ✅ | ✅ | ✅ |
6. Enterprise-level optimization plan
1. Dynamic data source routing
public class DynamicDataSource extends AbstractRoutingDataSource { @Override protected Object determineCurrentLookupKey() { return (); } @Override public Connection getConnection() throws SQLException { Connection conn = (); return new ConnectionWrapper(conn) { @Override public PreparedStatement prepareStatement(String sql) throws SQLException { // Automatically convert SQL dialect String convertedSql = ( sql, ()); return (convertedSql); } }; } }
2. SQL caching mechanism
@CacheConfig(cacheNames = "sqlCache") public class SqlCacheService { private final Cache<String, String> cache; public SqlCacheService() { = () .maximumSize(10_000) .expireAfterWrite(1, ) .build(); } public String getConvertedSql(String originalSql, DatabaseType dbType) { return ( originalSql + "|" + (), k -> (originalSql, dbType)); } }
3. Monitoring and Alarm System
# SQL conversion monitoring indicatorssql_conversion_requests_total{status="success"} 1423 sql_conversion_requests_total{status="failure"} 23 sql_conversion_duration_seconds_bucket{le="0.1"} 1234 sql_conversion_duration_seconds_bucket{le="0.5"} 1420 # SQL execution monitoring indicatorssql_execution_duration_seconds{db="mysql"} 0.23 sql_execution_duration_seconds{db="oracle"} 0.45
7. Summary and Outlook
1. Program profit analysis
- Improved development efficiency: SQL writing efficiency is more than 3 times
- Reduced maintenance costs: 80% reduction in database adaptation efforts
- Migration risk is controllable: database migration cycle is shortened by 60%
- Reduced talent requirements: Developers only need to master MySQL syntax
2. Typical application scenarios
- Financial industry: Domestic replacement of databases that meet regulatory requirements
- Government Affairs System: Adapt to database specifications in different regions
- SaaS products: Support customer heterogeneous database environment
- Data Middle Station: Building a unified data access layer
3. Future evolution direction
- Intelligent SQL Optimization: Recommended query plan based on AI
- Automatic dialect learning: Automatically deduce conversion rules through samples
- Distributed transaction enhancement: Improve cross-store transaction support
- Cloud native adaptation: deep integration with Service Mesh
The above is the detailed content of the detailed guide to implementing multi-database SQL automatic adaptation for MyBatis+Calcite. For more information about MyBatis multi-database automatic adaptation, please pay attention to my other related articles!