Fixing the "Can not find owner from table." exception when combining Camunda with ShardingSphere

Problem

While integrating a workflow project built with Camunda and ShardingSphere, I ran into a problem: Camunda ships complete database persistence for its workflow data, and when combined with ShardingSphere (4.1.1), some statements with nested subqueries fail to execute.

Caused by: org.apache.ibatis.exceptions.PersistenceException:
### Error querying database.  Cause: java.lang.IllegalStateException: Can not find owner from table.
### The error may exist in org/camunda/bpm/engine/impl/mapping/entity/Statistics.xml
### The error may involve org.camunda.bpm.engine.impl.persistence.entity.ProcessDefinitionStatisticsEntity.selectProcessDefinitionStatistics-Inline
### The error occurred while setting parameters
### SQL: select distinct RES.*                from         (           select               PROCDEF.*             , INSTANCE.INSTANCE_COUNT_                                         , INC.INCIDENT_TYPE_             , INC.INCIDENT_COUNT_                                    from               ACT_RE_PROCDEF PROCDEF                       left outer join                              (                 select                     E.PROC_DEF_ID_                   , count(E.PROC_INST_ID_) as INSTANCE_COUNT_                 from                     ACT_RU_EXECUTION E                  where                     E.PARENT_ID_ IS NULL                                                                      group by                     E.PROC_DEF_ID_               ) INSTANCE           on               PROCDEF.ID_ = INSTANCE.PROC_DEF_ID_                                                           left outer join                              (                 select                     I.PROC_DEF_ID_                   , I.INCIDENT_TYPE_                   , count(I.ID_) as INCIDENT_COUNT_                 from                     ACT_RU_INCIDENT I                   WHERE  I.ID_ = I.ROOT_CAUSE_INCIDENT_ID_                  group by                     I.PROC_DEF_ID_, I.INCIDENT_TYPE_               ) INC           on               PROCDEF.ID_ = INC.PROC_DEF_ID_                            ) RES                           order by RES.ID_ asc     LIMIT ? OFFSET ?
### Cause: java.lang.IllegalStateException: Can not find owner from table.
    at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:30)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:153)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:145)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:140)
    at org.camunda.bpm.engine.impl.db.sql.DbSqlSession.lambda$executeSelectList$1(DbSqlSession.java:111)
    at org.camunda.bpm.engine.impl.util.ExceptionUtil.doWithExceptionWrapper(ExceptionUtil.java:254)
    ... 176 more
Caused by: java.lang.IllegalStateException: Can not find owner from table.
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.find(ProjectionsContextEngine.java:197)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.findShorthandProjection(ProjectionsContextEngine.java:139)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.containsItemWithOwnerInShorthandProjections(ProjectionsContextEngine.java:135)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.containsItemInShorthandProjection(ProjectionsContextEngine.java:121)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.containsProjection(ProjectionsContextEngine.java:105)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.getDerivedOrderColumns(ProjectionsContextEngine.java:96)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.getDerivedOrderByColumns(ProjectionsContextEngine.java:88)
    at org.apache.shardingsphere.sql.parser.binder.segment.select.projection.engine.ProjectionsContextEngine.createProjectionsContext(ProjectionsContextEngine.java:71)
    at org.apache.shardingsphere.sql.parser.binder.statement.dml.SelectStatementContext.<init>(SelectStatementContext.java:99)
    at org.apache.shardingsphere.sql.parser.binder.SQLStatementContextFactory.getDMLStatementContext(SQLStatementContextFactory.java:103)
    at org.apache.shardingsphere.sql.parser.binder.SQLStatementContextFactory.newInstance(SQLStatementContextFactory.java:87)
    at org.apache.shardingsphere.underlying.route.DataNodeRouter.createRouteContext(DataNodeRouter.java:99)
    at org.apache.shardingsphere.underlying.route.DataNodeRouter.executeRoute(DataNodeRouter.java:89)
    at org.apache.shardingsphere.underlying.route.DataNodeRouter.route(DataNodeRouter.java:76)
    at org.apache.shardingsphere.underlying.pluggble.prepare.PreparedQueryPrepareEngine.route(PreparedQueryPrepareEngine.java:54)
    at org.apache.shardingsphere.underlying.pluggble.prepare.BasePrepareEngine.executeRoute(BasePrepareEngine.java:96)
    at org.apache.shardingsphere.underlying.pluggble.prepare.BasePrepareEngine.prepare(BasePrepareEngine.java:83)
    at org.apache.shardingsphere.shardingjdbc.jdbc.core.statement.ShardingPreparedStatement.prepare(ShardingPreparedStatement.java:183)
    at org.apache.shardingsphere.shardingjdbc.jdbc.core.statement.ShardingPreparedStatement.execute(ShardingPreparedStatement.java:143)
    at sun.reflect.GeneratedMethodAccessor118.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.ibatis.logging.jdbc.PreparedStatementLogger.invoke(PreparedStatementLogger.java:59)
    at com.sun.proxy.$Proxy171.execute(Unknown Source)
    at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:64)
    at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:79)
    at org.apache.ibatis.executor.BatchExecutor.doQuery(BatchExecutor.java:92)
    at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:325)
    at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:156)
    at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:109)
    at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:89)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:151)
    ... 180 more

Searching related issues online shows that ShardingSphere's subquery support has some limitations:

// BiuBiuBooom commented on Oct 12
// The key point: do not use * in the outer SQL and do not reference columns through an alias; spell out every needed column explicitly (in both the inner and outer SQL). Only then is data returned; otherwise the result is either empty or the error "Can not find owner from table."
 
// works as expected
select resId,resType from (
        select id as resId, type as resType from test
) res where resId > 0 GROUP BY resType
 
// no data
select * from (
        select id as resId, type as resType from test
) res where resId > 0 GROUP BY resType
 
// no data
select * from (
        select * from test
)res where id > 0 GROUP BY res.type
 
// IllegalStateException : Can not find owner from table.
select * from (
        select * from test
) res where res.id > 0 GROUP BY res.type

ShardingSphere appears to have fixed this in version 5 and above.

Solutions

Without upgrading ShardingSphere, I tried several approaches to work around the problem.

Approach 1: Modify ShardingSphere 4.1.1 itself to support the failing SQL (failed)

Since ShardingSphere 5 has open-source commits that fix this issue, I tried to backport the fix to 4.1.1. However, the relevant classes in the ShardingSphere source are all declared final, so the problematic classes cannot be subclassed and overridden with a custom implementation. I eventually gave up on this approach.

Approach 2: Use ShardingSphere's SPI to rewrite the SQL so that ShardingSphere can execute it

ShardingSphere's ParsingHook SPI is invoked after a SQL statement is parsed (but before it is executed), so it can be used to patch the problematic SQLStatement object.

package com.xxx.basic.workflow.config.shardingsphere;
 
import org.antlr.v4.runtime.tree.ParseTree;
import org.apache.shardingsphere.sql.parser.core.parser.SQLParserExecutor;
import org.apache.shardingsphere.sql.parser.core.visitor.ParseTreeVisitorFactory;
import org.apache.shardingsphere.sql.parser.core.visitor.VisitorRule;
import org.apache.shardingsphere.sql.parser.hook.ParsingHook;
import org.apache.shardingsphere.sql.parser.sql.segment.dml.TableReferenceSegment;
import org.apache.shardingsphere.sql.parser.sql.segment.generic.table.SubqueryTableSegment;
import org.apache.shardingsphere.sql.parser.sql.statement.SQLStatement;
import org.apache.shardingsphere.sql.parser.sql.statement.dml.SelectStatement;
 
import java.util.Collection;
 
public class FixSubqueryParsingHook implements ParsingHook {
 
    public static final String SQL = "select distinct ID_, REV_, CATEGORY_, NAME_, KEY_, VERSION_, DEPLOYMENT_ID_, RESOURCE_NAME_, DGRM_RESOURCE_NAME_, HAS_START_FORM_KEY_, SUSPENSION_STATE_, TENANT_ID_, VERSION_TAG_, HISTORY_TTL_, STARTABLE_, INSTANCE_COUNT_, INCIDENT_TYPE_, INCIDENT_COUNT_\n" +
        "    \n" +
        "     \n" +
        "    from\n" +
        "        (\n" +
        "          select\n" +
        "              PROCDEF.*\n" +
        "            , INSTANCE.INSTANCE_COUNT_\n" +
        "             \n" +
        "             \n" +
        "            , INC.INCIDENT_TYPE_\n" +
        "            , INC.INCIDENT_COUNT_\n" +
        "             \n" +
        "          \n" +
        "          from\n" +
        "              ACT_RE_PROCDEF PROCDEF\n" +
        "\n" +
        "          \n" +
        "          left outer join\n" +
        "              \n" +
        "              (\n" +
        "                select\n" +
        "                    E.PROC_DEF_ID_\n" +
        "                  , count(E.PROC_INST_ID_) as INSTANCE_COUNT_\n" +
        "                from\n" +
        "                    ACT_RU_EXECUTION E\n" +
        "\n" +
        "                where\n" +
        "                    E.PARENT_ID_ IS NULL\n" +
        "                    \n" +
        "                    \n" +
        "     \n" +
        "\n" +
        "   \n" +
        "                group by\n" +
        "                    E.PROC_DEF_ID_\n" +
        "              ) INSTANCE\n" +
        "          on\n" +
        "              PROCDEF.ID_ = INSTANCE.PROC_DEF_ID_\n" +
        "\n" +
        "          \n" +
        "           \n" +
        "\n" +
        "          \n" +
        "           \n" +
        "          left outer join\n" +
        "              \n" +
        "              (\n" +
        "                select\n" +
        "                    I.PROC_DEF_ID_\n" +
        "                  , I.INCIDENT_TYPE_\n" +
        "                  , count(I.ID_) as INCIDENT_COUNT_\n" +
        "                from\n" +
        "                    ACT_RU_INCIDENT I\n" +
        "\n" +
        "                 WHERE  I.ID_ = I.ROOT_CAUSE_INCIDENT_ID_ \n" +
        "                group by\n" +
        "                    I.PROC_DEF_ID_, I.INCIDENT_TYPE_\n" +
        "              ) INC\n" +
        "          on\n" +
        "              PROCDEF.ID_ = INC.PROC_DEF_ID_\n" +
        "           \n" +
        "\n" +
        "    \n" +
        "\n" +
        "        ) RES\n" +
        "\n" +
        "       \n" +
        "        \n" +
        "   \n" +
        "    order by ID_ asc\n" +
        "    LIMIT ? OFFSET ?";
    public static final String DATABASE_TYPE_NAME = "MySQL";
    protected final SelectStatement fixedSQLStatement;
 
    public FixSubqueryParsingHook() {
        ParseTree parseTree = new SQLParserExecutor(DATABASE_TYPE_NAME, SQL).execute().getRootNode();
        fixedSQLStatement = (SelectStatement) ParseTreeVisitorFactory.newInstance(DATABASE_TYPE_NAME, VisitorRule.valueOf(parseTree.getClass())).visit(parseTree);
    }
 
    @Override
    public void start(String sql) {
 
    }
 
    @Override
    public void finishSuccess(SQLStatement sqlStatement) {
        if (checkSQLStatement(sqlStatement)) {
            SelectStatement selectStatement = (SelectStatement) sqlStatement;
            selectStatement.setParentStatement(fixedSQLStatement.getParentStatement());
            selectStatement.setGroupBy(fixedSQLStatement.getGroupBy().orElse(null));
            selectStatement.setOrderBy(fixedSQLStatement.getOrderBy().orElse(null));
            selectStatement.setWhere(fixedSQLStatement.getWhere().orElse(null));
            selectStatement.setLimit(fixedSQLStatement.getLimit().orElse(null));
            selectStatement.setParameterCount(fixedSQLStatement.getParameterCount());
            selectStatement.setProjections(fixedSQLStatement.getProjections());
            selectStatement.getTableReferences().clear();
            selectStatement.getTableReferences().addAll(fixedSQLStatement.getTableReferences());
        }
    }
 
    private boolean checkSQLStatement(SQLStatement sqlStatement) {
        if (sqlStatement instanceof SelectStatement) {
            SelectStatement selectStatement = (SelectStatement) sqlStatement;
            Collection<TableReferenceSegment> tableReferences = selectStatement.getTableReferences();
            if (selectStatement.getProjections().getProjections().size() == 1) {
                for (TableReferenceSegment tableReference : tableReferences) {
                    if (tableReference.getTableFactor().getTable() instanceof SubqueryTableSegment) {
                        SubqueryTableSegment table = (SubqueryTableSegment) tableReference.getTableFactor().getTable();
                        if ("RES".equals(table.getAlias().orElse(""))) {
                            // fragile: identify the exact problematic statement by its parse positions
                            if (table.getStartIndex() == 50 && table.getStopIndex() == 1372) {
                                return true;
                            }
                        }
                    }
                }
 
            }
        }
 
        return false;
    }
 
    @Override
    public void finishFailure(Exception cause) {
 
    }
}

Register the hook by creating the file workflow-service/src/main/resources/META-INF/services/org.apache.shardingsphere.sql.parser.hook.ParsingHook with the content:

com.xxx.basic.workflow.config.shardingsphere.FixSubqueryParsingHook

This approach does solve the problem, but SQLStatement is an already-parsed SQL object, which makes both detecting the problematic statement and patching it rather cumbersome. So I looked for a replacement scheme that operates one level higher, on the statement itself.

Approach 3: Add a statement-rewriting interceptor to the MyBatis SqlSessionFactory that Camunda instantiates

Camunda configures its MyBatis data-access mappers via XML, and MyBatis supports interceptors. First, create a generic SQL-rewriting interceptor.

package com.xxx.basic.workflow.config.interceptor;
 
 
import org.apache.ibatis.cache.CacheKey;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.mapping.SqlSource;
import org.apache.ibatis.plugin.*;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
 
import java.util.Collection;
import java.util.Map;
import java.util.Properties;
 
@Intercepts(
    {
        @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
        @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class}),
    }
)
public class FixSubqueryInterceptor implements Interceptor {
 
    static int MAPPED_STATEMENT_INDEX = 0;
    static int PARAMETER_INDEX = 1;
 
    Map<String, FixSubqueryProcessor> mappedStatementProcessors;
 
    public FixSubqueryInterceptor(Map<String, FixSubqueryProcessor> mappedStatementProcessors) {
        this.mappedStatementProcessors = mappedStatementProcessors;
    }
 
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        MappedStatement mappedStatement = (MappedStatement) invocation.getArgs()[MAPPED_STATEMENT_INDEX];
        if (mappedStatementProcessors.containsKey(mappedStatement.getId())) {
            invocation.getArgs()[MAPPED_STATEMENT_INDEX] = processMappedStatement(mappedStatement, invocation.getArgs()[PARAMETER_INDEX]);
        }
        Object result = invocation.proceed();
        return result;
    }
 
    protected MappedStatement processMappedStatement(MappedStatement mappedStatement, Object parameter) {
        BoundSql boundSql = mappedStatement.getBoundSql(parameter);
        String newSql = mappedStatementProcessors.get(mappedStatement.getId()).process(boundSql.getSql());
        BoundSql newBoundSql = new BoundSql(mappedStatement.getConfiguration(), newSql, boundSql.getParameterMappings(), boundSql.getParameterObject());
 
        MappedStatement.Builder builder = new MappedStatement.Builder(mappedStatement.getConfiguration(), mappedStatement.getId(), new BoundSqlSqlSource(newBoundSql), mappedStatement.getSqlCommandType());
        builder.resource(mappedStatement.getResource());
        builder.fetchSize(mappedStatement.getFetchSize());
        builder.statementType(mappedStatement.getStatementType());
        builder.keyGenerator(mappedStatement.getKeyGenerator());
        if (mappedStatement.getKeyProperties() != null && mappedStatement.getKeyProperties().length > 0) {
            builder.keyProperty(mappedStatement.getKeyProperties()[0]);
        }
        builder.timeout(mappedStatement.getTimeout());
        builder.parameterMap(mappedStatement.getParameterMap());
        builder.resultMaps(mappedStatement.getResultMaps());
        builder.resultSetType(mappedStatement.getResultSetType());
        builder.cache(mappedStatement.getCache());
        builder.flushCacheRequired(mappedStatement.isFlushCacheRequired());
        builder.useCache(mappedStatement.isUseCache());
        return builder.build();
 
    }
 
    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
 
    @Override
    public void setProperties(Properties properties) {
    }
 
    public static class BoundSqlSqlSource implements SqlSource {
        private BoundSql boundSql;
 
        public BoundSqlSqlSource(BoundSql boundSql) {
            this.boundSql = boundSql;
        }
 
        public BoundSql getBoundSql(Object parameterObject) {
            return boundSql;
        }
    }
 
    public static class DefaultFixSubqueryProcessor implements FixSubqueryProcessor {
 
        private String alias;
        private Collection<String> cols;
 
        public DefaultFixSubqueryProcessor(String alias, Collection<String> cols) {
            this.alias = alias;
            this.cols = cols;
        }
 
        @Override
        public String process(String sql) {
            // expand "alias.*" into an explicit, backquoted column list
            sql = sql.replace(alias + ".*", String.join(",", cols.stream().map(col -> "`" + col + "`").toArray(String[]::new)));
            // strip the remaining "alias." owner prefixes
            sql = sql.replace(alias + ".", "");
            return sql;
        }
    }
 
    @FunctionalInterface
    public interface FixSubqueryProcessor {
        String process(String sql);
    }
}
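To make the rewrite concrete, here is a minimal standalone sketch (hypothetical class and sample SQL, not taken from the project) of what DefaultFixSubqueryProcessor.process does to a statement:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Collectors;

// Hypothetical standalone demo of the two string replacements in
// DefaultFixSubqueryProcessor.process: expand "alias.*" to explicit
// backquoted columns, then strip the remaining "alias." owner prefixes.
public class FixSubqueryProcessorDemo {

    static String process(String sql, String alias, Collection<String> cols) {
        String columnList = cols.stream().map(c -> "`" + c + "`").collect(Collectors.joining(","));
        sql = sql.replace(alias + ".*", columnList); // RES.* -> `ID_`,`REV_`,...
        sql = sql.replace(alias + ".", "");          // RES.ID_ -> ID_
        return sql;
    }

    public static void main(String[] args) {
        String in = "select RES.* from ( select ID_, REV_ from ACT_RE_PROCDEF ) RES order by RES.ID_ asc";
        System.out.println(process(in, "RES", Arrays.asList("ID_", "REV_")));
        // select `ID_`,`REV_` from ( select ID_, REV_ from ACT_RE_PROCDEF ) RES order by ID_ asc
    }
}
```

Note that plain string replacement like this can also match inside identifiers or literals that happen to contain the alias prefix, which is one reason the interceptor is only registered for specific MappedStatement ids.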

Camunda does not register its SqlSessionFactory as a Spring bean, so it must be retrieved layer by layer through the ProcessEngine bean. Create a configuration class to add the interceptor:

package com.xxx.basic.workflow.config;
 
import com.xxx.basic.workflow.config.interceptor.FixSubqueryInterceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.ProcessEngineImpl;
import org.springframework.context.annotation.Configuration;
 
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
 
@Configuration
public class CamundaFixSubqueryConfig {
 
    static final Map<String, FixSubqueryInterceptor.FixSubqueryProcessor> MAPPED_STATEMENT_PROCESSORS;
 
    static {
        MAPPED_STATEMENT_PROCESSORS = new HashMap<>();
        MAPPED_STATEMENT_PROCESSORS.put("org.camunda.bpm.engine.impl.persistence.entity.ProcessDefinitionStatisticsEntity.selectProcessDefinitionStatistics", new FixSubqueryInterceptor.DefaultFixSubqueryProcessor("RES", Arrays.asList("ID_", "REV_", "CATEGORY_", "NAME_", "KEY_", "VERSION_", "DEPLOYMENT_ID_", "RESOURCE_NAME_", "DGRM_RESOURCE_NAME_", "HAS_START_FORM_KEY_", "SUSPENSION_STATE_", "TENANT_ID_", "VERSION_TAG_", "HISTORY_TTL_", "STARTABLE_", "INSTANCE_COUNT_", "INCIDENT_TYPE_", "INCIDENT_COUNT_")));
    }
 
    @Resource
    private ProcessEngine processEngine;
 
 
    @PostConstruct
    public void init() {
        SqlSessionFactory sqlSessionFactory = ((ProcessEngineImpl) processEngine).getProcessEngineConfiguration().getDbSqlSessionFactory().getSqlSessionFactory();
        sqlSessionFactory.getConfiguration().addInterceptor(new FixSubqueryInterceptor(MAPPED_STATEMENT_PROCESSORS));
    }
}

This approach is still a bait-and-switch, but a clearer and simpler one. The drawback is that the rewrite logic runs on every execution of a problematic statement, which costs a little performance.

Approach 4: Replace the problematic MappedStatement when Camunda's ProcessEngine configuration initializes its MappedStatements (not implemented)

MappedStatements are built by parsing the XML mapping files. After a brief look, constructing a correct replacement MappedStatement turned out to be quite involved. Having already spent too much time on this problem, and since the problematic statements are rarely-used statistics queries, I cut my losses and stopped researching this route.

Approach 5: Upgrade ShardingSphere from 4.1.1 to 5.x

While browsing the Camunda web UI, yet another SQL parsing problem appeared: only 5 of 6 parameters were recognized, so the trailing LIMIT clause ended up bound to the wrong parameters. No matter how I rewrote this complex statement, it would not parse. That killed every SQL bait-and-switch approach above.

Cause: java.sql.SQLFeatureNotSupportedException: Can not get index from column label `CASE
    WHEN EXISTS
    ( SELECT ID_
      FROM ACT_RU_AUTHORIZATION
      WHERE TYPE_ = 2
      AND (
        USER_ID_ IN (null, '*')
          
      )
    ) THEN 1 ELSE 0 END`.

Since many of our company's projects use ShardingSphere 4, I had wanted to avoid upgrading, but was forced to try upgrading just this one project to 5.x.

The last version that can still be configured via Spring is 5.2.1. It does fix "Can not find owner from table.", but introduces yet another SQL parsing problem.

The project claims that 5.3 fixes this CASE WHEN issue, but 5.3 and later no longer ship Spring configuration support, and our configuration files are managed in Nacos. I would have to implement all the ShardingSphere configuration mappings myself, and even after that, new problems might still surface.

I had not wanted to upgrade in the first place, and with all this extra trouble I abandoned the idea.

Approach 6: Proxy ShardingConnection and, for problematic SQL that must bypass SQLParserEngine, use a raw connection directly

I first looked into HintManager to see whether it could skip parsing of complex statements, but parsing happens before routing, so it cannot be skipped that way.

Instead, we can mimic HintManager with ThreadLocal-based parameter passing, deciding per statement whether execution goes through the ShardingConnection or a specific raw connection.

I gave HintManager's code to ChatGPT and had it draft ShardingSkippingParseManager in the same style:

package com.xxx.basic.workflow.config.shardingsphere;
 
import com.google.common.base.Preconditions;
import lombok.AccessLevel;
import lombok.NoArgsConstructor;
 
@NoArgsConstructor(access = AccessLevel.PRIVATE)
public final class ShardingSkippingParseManager implements AutoCloseable {
 
    private static final ThreadLocal<String> CONNECTION_NAME_HOLDER = new ThreadLocal<>();
 
    /**
     * Get a new instance of {@code ShardingSkippingParseManager}.
     *
     * @return  {@code ShardingSkippingParseManager} instance
     */
    public static ShardingSkippingParseManager getInstance() {
        Preconditions.checkState(null == CONNECTION_NAME_HOLDER.get(), "Connection name has previous value, please clear first.");
        ShardingSkippingParseManager result = new ShardingSkippingParseManager();
        CONNECTION_NAME_HOLDER.set("");
        return result;
    }
 
    /**
     * Set data source name.
     *
     * @param dataSourceName data source name
     */
    public void setDataSourceName(final String dataSourceName) {
        CONNECTION_NAME_HOLDER.set(dataSourceName);
    }
 
    /**
     * Get data source name.
     *
     * @return data source name
     */
    public static String getDataSourceName() {
        return CONNECTION_NAME_HOLDER.get();
    }
 
    /**
     * Clear threadlocal for data source name.
     */
    public static void clear() {
        CONNECTION_NAME_HOLDER.remove();
    }
 
    @Override
    public void close() {
        ShardingSkippingParseManager.clear();
    }
}
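The getInstance/close pair above follows the same scoped-ThreadLocal pattern as HintManager: set the thread-bound value when the scope opens, and rely on try-with-resources to clear it. A minimal standalone sketch of the pattern (the demo class names are made up):

```java
// Standalone sketch of the scoped-ThreadLocal pattern used by
// ShardingSkippingParseManager (and by ShardingSphere's HintManager).
public class ThreadLocalScopeDemo {

    static final ThreadLocal<String> HOLDER = new ThreadLocal<>();

    // Opening a scope sets the thread-bound value; close() always clears it,
    // so try-with-resources guarantees no value leaks into the next statement.
    static class Scope implements AutoCloseable {
        Scope(String dataSourceName) { HOLDER.set(dataSourceName); }
        @Override public void close() { HOLDER.remove(); }
    }

    static String runWithScope(String dataSourceName) {
        try (Scope ignored = new Scope(dataSourceName)) {
            return HOLDER.get(); // visible to any code running on this thread
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithScope("slave")); // slave
        System.out.println(HOLDER.get());          // null, cleared on close()
    }
}
```

The Preconditions.checkState call in getInstance additionally guards against nested scopes, which would otherwise silently overwrite each other.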

ShardingDataSource and ShardingConnection cannot be subclassed, but they can be proxied. Implement a proxy class for each, containing the connection-switching logic.

package com.xxx.basic.workflow.config.shardingsphere;
 
import org.apache.shardingsphere.shardingjdbc.jdbc.core.connection.ShardingConnection;
import org.springframework.util.StringUtils;
 
import java.sql.*;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.Executor;
 
public class ShardingSkippingParseConnection implements Connection {
    private ShardingConnection shardingConnection;
 
    public ShardingSkippingParseConnection(ShardingConnection shardingConnection) {
        this.shardingConnection = shardingConnection;
    }
 
    public ShardingConnection getShardingConnection() {
        return shardingConnection;
    }
 
    public Connection getSkippingParseConnection() throws SQLException {
        if (StringUtils.hasText(ShardingSkippingParseManager.getDataSourceName())) {
            return shardingConnection.getConnection(ShardingSkippingParseManager.getDataSourceName());
        }
        return shardingConnection;
    }
 
    // ...delegate the remaining Connection methods to getSkippingParseConnection()
}
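Hand-writing the remaining dozens of delegate methods of java.sql.Connection is tedious. An alternative sketch (not what the project does, just an option) is a java.lang.reflect.Proxy that routes every call based on the thread-bound flag. The demo below uses stub connections that only answer toString(), purely to show the routing:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;

// Hypothetical alternative to hand-written delegation: route every
// Connection call to either the sharding connection or a raw one,
// depending on a thread-bound data source name.
public class RoutingConnectionDemo {

    static final ThreadLocal<String> DATA_SOURCE = new ThreadLocal<>();

    // Stub Connection that only answers toString(); demo use only,
    // since methods with primitive return types would fail on this stub.
    static Connection stub(String name) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(), new Class<?>[]{Connection.class},
            (proxy, method, args) -> "toString".equals(method.getName()) ? name : null);
    }

    // Routing proxy: pick the raw connection while a data source name is set.
    static Connection routing(Connection sharding, Connection raw) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(), new Class<?>[]{Connection.class},
            (proxy, method, args) -> {
                Connection target = DATA_SOURCE.get() != null ? raw : sharding;
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        Connection c = routing(stub("sharding"), stub("raw-slave"));
        System.out.println(c);   // sharding
        DATA_SOURCE.set("slave");
        System.out.println(c);   // raw-slave
        DATA_SOURCE.remove();
    }
}
```

The hand-written proxy in the project has the advantage of compile-time checking and no reflection overhead; the dynamic proxy trades that for far less boilerplate.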
package com.xxx.basic.workflow.config.shardingsphere;
 
import org.apache.shardingsphere.shardingjdbc.jdbc.core.datasource.ShardingDataSource;
 
import javax.sql.DataSource;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
 
public class ShardingSkippingParseDataSource implements DataSource {
    private ShardingDataSource shardingDataSource;
 
    public ShardingSkippingParseDataSource(ShardingDataSource shardingDataSource) {
        this.shardingDataSource = shardingDataSource;
    }
 
    public ShardingDataSource getShardingDataSource() {
        return shardingDataSource;
    }
 
    @Override
    public Connection getConnection() throws SQLException {
        return new ShardingSkippingParseConnection(shardingDataSource.getConnection());
    }
    // ...delegate the remaining DataSource methods to shardingDataSource
}

Write a new MyBatis interceptor that checks whether the current statement is a problematic one and, if so, sets the data source via ShardingSkippingParseManager:

package com.xxx.basic.workflow.config.shardingsphere;
 
 
import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.cache.CacheKey;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.*;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
 
import java.util.Map;
import java.util.Properties;
 
@Intercepts(
    {
        @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
        @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class}),
    }
)
@Slf4j
public class ShardingSkippingParseInterceptor implements Interceptor {
 
    static int MAPPED_STATEMENT_INDEX = 0;
    static int PARAMETER_INDEX = 1;
 
    Map<String, String> mappedStatementNames;
 
    public ShardingSkippingParseInterceptor(Map<String, String> mappedStatementNames) {
        this.mappedStatementNames = mappedStatementNames;
    }
 
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        MappedStatement mappedStatement = (MappedStatement) invocation.getArgs()[MAPPED_STATEMENT_INDEX];
        if (mappedStatementNames.containsKey(mappedStatement.getId())) {
            try (ShardingSkippingParseManager manager = ShardingSkippingParseManager.getInstance()) {
                manager.setDataSourceName(mappedStatementNames.get(mappedStatement.getId()));
                log.info("ShardingSkippingParseInterceptor: MappedStatement \"{}\" skips SQL parsing and uses data source \"{}\"", mappedStatement.getId(), mappedStatementNames.get(mappedStatement.getId()));
                return invocation.proceed();
            }
        } else {
            return invocation.proceed();
        }
    }
 
    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
 
    @Override
    public void setProperties(Properties properties) {
    }
 
}

Create an auto-configuration that replaces the DataSource bean with the proxy and patches the SqlSessionFactory used by the ProcessEngine:

package com.xxx.basic.workflow.config;
 
import com.xxx.basic.workflow.config.shardingsphere.ShardingSkippingParseDataSource;
import com.xxx.basic.workflow.config.shardingsphere.ShardingSkippingParseInterceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.shardingsphere.shardingjdbc.jdbc.core.datasource.ShardingDataSource;
import org.camunda.bpm.engine.impl.ProcessEngineImpl;
import org.camunda.bpm.spring.boot.starter.CamundaBpmAutoConfiguration;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.boot.autoconfigure.AutoConfigureBefore;
import org.springframework.context.annotation.Configuration;
 
import java.util.HashMap;
import java.util.Map;
 
@Configuration
@AutoConfigureBefore(CamundaBpmAutoConfiguration.class)
public class CamundaShardingSkippingParseConfig implements BeanPostProcessor {
 
    @Value("${camunda.sharding.skipping-parse.default-data-source:slave}")
    private String defaultDataSource = "slave";
 
    private Map<String, String> createMappedStatementNames() {
        Map<String, String> names = new HashMap<>();
        names.put("org.camunda.bpm.engine.impl.persistence.entity.ProcessDefinitionStatisticsEntity.selectProcessDefinitionStatistics", defaultDataSource);
        names.put("org.camunda.bpm.engine.impl.persistence.entity.ProcessDefinitionStatisticsEntity.selectActivityStatistics", defaultDataSource);
 
        return names;
    }
 
    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean; // just return the original bean here
    }
 
    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof ShardingDataSource) {
            // Replace the DataSource bean with a proxy
            return new ShardingSkippingParseDataSource((ShardingDataSource) bean);
        } else if (bean instanceof ProcessEngineImpl) {
            ProcessEngineImpl processEngine = (ProcessEngineImpl) bean;
            SqlSessionFactory sqlSessionFactory = processEngine.getProcessEngineConfiguration().getDbSqlSessionFactory().getSqlSessionFactory();
            // register the interceptor on the engine's own SqlSessionFactory
            sqlSessionFactory.getConfiguration().addInterceptor(new ShardingSkippingParseInterceptor(createMappedStatementNames()));
            return processEngine;
        } else {
            return bean; // just return the original bean for other beans
        }
    }
 
}

If the Camunda webapp is used, the separate SqlSessionFactory inside Cockpit needs the same treatment:

package com.xxx.basic.workflow.test.config;
 
import com.xxx.basic.workflow.config.shardingsphere.ShardingSkippingParseInterceptor;
import org.camunda.bpm.cockpit.Cockpit;
import org.camunda.bpm.cockpit.db.CommandExecutor;
import org.camunda.bpm.cockpit.impl.DefaultCockpitRuntimeDelegate;
import org.camunda.bpm.cockpit.impl.db.QuerySessionFactory;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineException;
import org.camunda.bpm.engine.impl.ProcessEngineImpl;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.interceptor.Command;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
 
import javax.annotation.PostConstruct;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
 
@Configuration
public class CamundaCockpitShardingSkippingParseConfig {
 
    @Value("${camunda.sharding.skipping-parse.default-data-source:slave}")
    private String defaultDataSource = "slave";
 
    private Map<String, String> createMappedStatementNames() {
        Map<String, String> names = new HashMap<>();
        names.put("cockpit.base.selectIncidentWithCauseAndRootCauseIncidents", defaultDataSource);
        return names;
    }
 
    @PostConstruct
    public void setup() {
        Cockpit.setCockpitRuntimeDelegate(new SkippingParseCockpitRuntimeDelegate(new ShardingSkippingParseInterceptor(createMappedStatementNames())));
    }
 
 
    public static class SkippingParseCockpitRuntimeDelegate extends DefaultCockpitRuntimeDelegate {
 
        ShardingSkippingParseInterceptor shardingSkippingParseInterceptor;
 
        public SkippingParseCockpitRuntimeDelegate(ShardingSkippingParseInterceptor shardingSkippingParseInterceptor) {
            this.shardingSkippingParseInterceptor = shardingSkippingParseInterceptor;
        }
 
        @Override
        protected CommandExecutor createCommandExecutor(String processEngineName) {
 
            ProcessEngine processEngine = getProcessEngine(processEngineName);
            if (processEngine == null) {
                throw new ProcessEngineException("No process engine with name " + processEngineName + " found.");
            }
 
            ProcessEngineConfigurationImpl processEngineConfiguration = ((ProcessEngineImpl) processEngine).getProcessEngineConfiguration();
            List<String> mappingFiles = getMappingFiles();
 
            return new SkippingParseCommandExecutorImpl(processEngineConfiguration, mappingFiles, shardingSkippingParseInterceptor);
        }
    }
 
    public static class SkippingParseCommandExecutorImpl implements CommandExecutor {
        private QuerySessionFactory sessionFactory;
 
        public SkippingParseCommandExecutorImpl(ProcessEngineConfigurationImpl processEngineConfiguration, List<String> mappingFiles, ShardingSkippingParseInterceptor shardingSkippingParseInterceptor) {
            sessionFactory = new QuerySessionFactory();
            sessionFactory.initFromProcessEngineConfiguration(processEngineConfiguration, mappingFiles);
            sessionFactory.getDbSqlSessionFactory().getSqlSessionFactory().getConfiguration().addInterceptor(shardingSkippingParseInterceptor);
        }
 
        @Override
        public <T> T executeCommand(Command<T> command) {
            return sessionFactory.getCommandExecutorTxRequired().execute(command);
        }
    }
}

Summary

Getting Camunda and ShardingSphere to work together took real effort, but it demonstrates the seasoned programmer's "why choose when you can have it all" spirit, and the working people's ingenious (read: improvised) wisdom. When we later shard the workflow history tables by date, who knows what new problems will surface.

Related articles

"Can not find owner from table" error when used together with shardingsphere · Issue #2585 · baomidou/mybatis-plus · GitHub

SpringBoot integrating MyBatis custom interceptors to splice and modify SQL (part 2) - CSDN blog
