
Spring + MyBatis Read/Write Splitting: Example Code

程序员文章站 2023-11-12 19:24:58

This article walks through read/write splitting with Spring Boot and MyBatis; readers who want to separate database reads from writes with Spring and MyBatis may find it useful.

The end result provides:

  1. By default, all update operations use the write data source
  2. All read operations use a slave data source
  3. Special cases: a method can explicitly specify the data source type and name to use (when a name is given, the data source with that name is used)
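With those three behaviors in mind, here is a minimal, self-contained sketch of what the special-case routing from point 3 looks like from the DAO side. The OrderDao interface, its methods, and the pool name "slave2" are hypothetical; the enum and annotation mirror the ones defined later in this article, and the main method reads the annotation the same way the AOP aspect will:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Reads the @DataSource annotation off a DAO method the same way the
// AOP aspect in this article does. OrderDao and the pool name "slave2"
// are hypothetical; the enum and annotation mirror the ones defined below.
class DataSourceAnnotationDemo {
  public static void main(String[] args) throws Exception {
    Method method = OrderDao.class.getMethod("findHistory", long.class);
    DataSource ds = method.getAnnotation(DataSource.class);
    // The aspect would store these two values in the ThreadLocal holder.
    System.out.println(ds.type() + ":" + ds.name());
  }
}

enum DataSourceType { read, write }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface DataSource {
  DataSourceType type() default DataSourceType.write;
  String name() default "";
}

interface OrderDao {
  // No annotation: the MyBatis plugin routes by SQL command type.
  int insertOrder(String order);

  // A read that must see the latest data, pinned to the master.
  @DataSource(type = DataSourceType.write)
  String findForUpdate(long id);

  // A read pinned to one named slave pool.
  @DataSource(type = DataSourceType.read, name = "slave2")
  String findHistory(long id);
}
```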

The implementation works as follows:

  1. Spring AOP intercepts the DAO-layer interfaces; for methods that specify a data source, the type and name are stored in a ThreadLocal
  2. A MyBatis plugin sets the data source type in the ThreadLocal according to whether the statement is an update or a query (when the DAO layer has not specified one)
  3. A subclass of AbstractRoutingDataSource performs the actual routing.

HikariCP is hard-coded here as the connection pool.

The implementation steps are:

  1. Define the data source configuration file and parse it into data sources
  2. Define the AbstractRoutingDataSource subclass and the supporting annotation
  3. Define the AOP interception
  4. Define the MyBatis plugin
  5. Wire everything together

1. Configuration and parsing classes

The configuration parameters are HikariCP's own; see the HikariCP documentation for the full list.

YAML is used here; the file is named datasource.yaml with the following content:

dds:
  write:
    jdbcUrl: jdbc:mysql://localhost:3306/order
    password: liu123
    username: root
    maximumPoolSize: 10
    minimumIdle: 3
    poolName: master
  read:
    - jdbcUrl: jdbc:mysql://localhost:3306/test
      password: liu123
      username: root
      maximumPoolSize: 10
      minimumIdle: 3
      poolName: slave1
    - jdbcUrl: jdbc:mysql://localhost:3306/test2
      password: liu123
      username: root
      maximumPoolSize: 10
      minimumIdle: 3
      poolName: slave2

Define the bean this configuration maps to, named DbConfig:

import com.zaxxer.hikari.HikariConfig;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
@ConfigurationProperties(locations = "classpath:datasource.yaml", prefix = "dds")
public class DbConfig {
  private List<HikariConfig> read;
  private HikariConfig write;

  public List<HikariConfig> getRead() {
    return read;
  }

  public void setRead(List<HikariConfig> read) {
    this.read = read;
  }

  public HikariConfig getWrite() {
    return write;
  }

  public void setWrite(HikariConfig write) {
    this.write = write;
  }
}

A utility class, DataSourceUtil, converts the configuration into DataSource instances:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;
import java.util.ArrayList;
import java.util.List;

public class DataSourceUtil {
  public static DataSource getDataSource(HikariConfig config) {
    return new HikariDataSource(config);
  }

  public static List<DataSource> getDataSource(List<HikariConfig> configs) {
    List<DataSource> result;
    if (configs != null && configs.size() > 0) {
      result = new ArrayList<>(configs.size());
      for (HikariConfig config : configs) {
        result.add(getDataSource(config));
      }
    } else {
      result = new ArrayList<>(0);
    }

    return result;
  }
}

2. Annotation and dynamic data source

Define an annotation, @DataSource, for methods that need a specific data source (for example, one read operation must run on the master, while another read method must run on one particular read replica):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface DataSource {
  /**
   * Type: whether to use the read or the write data source
   * @return the data source type
   */
  DataSourceType type() default DataSourceType.write;

  /**
   * Name of the specific DataSource (connection pool) to use
   * @return the pool name
   */
  String name() default "";
}

Define the data source type, one of two values, read and write:

public enum DataSourceType {
  read, write;
}

Define DynamicDataSourceHolder, which keeps this shared state. It holds two ThreadLocals and a Map: holder stores the current thread's data source type (read or write), pool stores the data source name (when one is specified), and cache remembers the type chosen for each mapper method. Its content is as follows:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DynamicDataSourceHolder {
  private static final Map<String, DataSourceType> cache = new ConcurrentHashMap<>();
  private static final ThreadLocal<DataSourceType> holder = new ThreadLocal<>();
  private static final ThreadLocal<String> pool = new ThreadLocal<>();

  public static void putToCache(String key, DataSourceType dataSourceType) {
    cache.put(key, dataSourceType);
  }

  public static DataSourceType getFromCache(String key) {
    return cache.get(key);
  }

  public static void putDataSource(DataSourceType dataSourceType) {
    holder.set(dataSourceType);
  }

  public static DataSourceType getDataSource() {
    return holder.get();
  }

  public static void putPoolName(String name) {
    if (name != null && name.length() > 0) {
      pool.set(name);
    }
  }

  public static String getPoolName() {
    return pool.get();
  }

  public static void clearDataSource() {
    holder.remove();
    pool.remove();
  }
}

The dynamic data source class is DynamicDataSource; it extends AbstractRoutingDataSource and switches to the appropriate data source based on the returned lookup key:

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class DynamicDataSource extends AbstractRoutingDataSource {
  private DataSource writeDataSource;
  private List<DataSource> readDataSource;
  private int readDataSourceSize;
  private Map<String, String> dataSourceMapping = new ConcurrentHashMap<>();

  @Override
  public void afterPropertiesSet() {
    if (this.writeDataSource == null) {
      throw new IllegalArgumentException("Property 'writeDataSource' is required");
    }
    setDefaultTargetDataSource(writeDataSource);
    Map<Object, Object> targetDataSource = new HashMap<>();
    targetDataSource.put(DataSourceType.write.name(), writeDataSource);
    String poolName = ((HikariDataSource) writeDataSource).getPoolName();
    if (poolName != null && poolName.length() > 0) {
      dataSourceMapping.put(poolName, DataSourceType.write.name());
    }
    if (this.readDataSource == null) {
      readDataSourceSize = 0;
    } else {
      for (int i = 0; i < readDataSource.size(); i++) {
        targetDataSource.put(DataSourceType.read.name() + i, readDataSource.get(i));
        poolName = ((HikariDataSource) readDataSource.get(i)).getPoolName();
        if (poolName != null && poolName.length() > 0) {
          dataSourceMapping.put(poolName, DataSourceType.read.name() + i);
        }
      }
      readDataSourceSize = readDataSource.size();
    }
    setTargetDataSources(targetDataSource);
    super.afterPropertiesSet();
  }

  @Override
  protected Object determineCurrentLookupKey() {
    DataSourceType dataSourceType = DynamicDataSourceHolder.getDataSource();
    String dataSourceName;
    if (dataSourceType == null || dataSourceType == DataSourceType.write || readDataSourceSize == 0) {
      dataSourceName = DataSourceType.write.name();
    } else {
      String poolName = DynamicDataSourceHolder.getPoolName();
      if (poolName == null) {
        // No specific pool requested: pick a read replica at random.
        int idx = ThreadLocalRandom.current().nextInt(0, readDataSourceSize);
        dataSourceName = DataSourceType.read.name() + idx;
      } else {
        dataSourceName = dataSourceMapping.get(poolName);
      }
    }
    DynamicDataSourceHolder.clearDataSource();
    return dataSourceName;
  }

  public void setWriteDataSource(DataSource writeDataSource) {
    this.writeDataSource = writeDataSource;
  }

  public void setReadDataSource(List<DataSource> readDataSource) {
    this.readDataSource = readDataSource;
  }
}

3. AOP interception

If a DAO method carries a custom configuration (a specified data source), it is handled here: the @DataSource annotation on the method is resolved and, when present, its details are stored in the DynamicDataSourceHolder above. The interception applies to the com.hfjy.service.order.dao package:

import com.hfjy.service.order.anno.DataSource;
import com.hfjy.service.order.wr.DynamicDataSourceHolder;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;

/**
 * AOP interception: methods that need special handling can specify the
 * name of the data source to use (the connection pool name).
 */
@Aspect
@Component
public class DynamicDataSourceAspect {

  @Pointcut("execution(public * com.hfjy.service.order.dao.*.*(..))")
  public void dynamic() {}

  @Before(value = "dynamic()")
  public void beforeOpt(JoinPoint point) {
    Object target = point.getTarget();
    String methodName = point.getSignature().getName();
    Class<?>[] clazz = target.getClass().getInterfaces();
    Class<?>[] parameterType = ((MethodSignature) point.getSignature()).getMethod().getParameterTypes();
    try {
      Method method = clazz[0].getMethod(methodName, parameterType);
      if (method != null && method.isAnnotationPresent(DataSource.class)) {
        DataSource dataSource = method.getAnnotation(DataSource.class);
        DynamicDataSourceHolder.putDataSource(dataSource.type());
        String poolName = dataSource.name();
        DynamicDataSourceHolder.putPoolName(poolName);
        DynamicDataSourceHolder.putToCache(clazz[0].getName() + "." + methodName, dataSource.type());
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  @After(value = "dynamic()")
  public void afterOpt(JoinPoint point) {
    DynamicDataSourceHolder.clearDataSource();
  }
}

4. MyBatis plugin

If the DAO layer has not specified a data source, the plugin intercepts here and sets the data source type according to whether the statement is an update or a query:

import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.mapping.SqlCommandType;
import org.apache.ibatis.plugin.*;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;

import java.util.Properties;

@Intercepts({
    @Signature(type = Executor.class, method = "update", args = {MappedStatement.class, Object.class}),
    @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class,
        RowBounds.class, ResultHandler.class})
})
public class DynamicDataSourcePlugin implements Interceptor {

  @Override
  public Object intercept(Invocation invocation) throws Throwable {
    MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
    DataSourceType dataSourceType;
    if ((dataSourceType = DynamicDataSourceHolder.getFromCache(ms.getId())) == null) {
      if (ms.getSqlCommandType().equals(SqlCommandType.SELECT)) {
        dataSourceType = DataSourceType.read;
      } else {
        dataSourceType = DataSourceType.write;
      }
      DynamicDataSourceHolder.putToCache(ms.getId(), dataSourceType);
    }
    DynamicDataSourceHolder.putDataSource(dataSourceType);
    return invocation.proceed();
  }

  @Override
  public Object plugin(Object target) {
    if (target instanceof Executor) {
      return Plugin.wrap(target, this);
    } else {
      return target;
    }
  }

  @Override
  public void setProperties(Properties properties) {

  }
}
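Note that the interceptor only takes effect once MyBatis knows about it. Since the wiring step below loads mybatis-config.xml, one option is to register the plugin there. This is a sketch; the plugin's package is an assumption, matching the com.hfjy.service.order.wr imports used elsewhere in this article:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
  <plugins>
    <!-- package assumed; adjust to wherever DynamicDataSourcePlugin lives -->
    <plugin interceptor="com.hfjy.service.order.wr.DynamicDataSourcePlugin"/>
  </plugins>
</configuration>
```

Alternatively, the plugin could be registered programmatically via SqlSessionFactoryBean.setPlugins.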

5. Wiring it together

Here the DataSource and everything MyBatis needs are defined:

import com.hfjy.service.order.wr.DbConfig;
import com.hfjy.service.order.wr.DataSourceUtil;
import com.hfjy.service.order.wr.DynamicDataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

import javax.annotation.Resource;
import javax.sql.DataSource;

@Configuration
@MapperScan(value = "com.hfjy.service.order.dao", sqlSessionFactoryRef = "sqlSessionFactory")
public class DataSourceConfig {
  @Resource
  private DbConfig dbConfig;

  @Bean(name = "dataSource")
  public DynamicDataSource dataSource() {
    DynamicDataSource dataSource = new DynamicDataSource();
    dataSource.setWriteDataSource(DataSourceUtil.getDataSource(dbConfig.getWrite()));
    dataSource.setReadDataSource(DataSourceUtil.getDataSource(dbConfig.getRead()));
    return dataSource;
  }

  @Bean(name = "transactionManager")
  public DataSourceTransactionManager dataSourceTransactionManager(@Qualifier("dataSource") DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
  }

  @Bean(name = "sqlSessionFactory")
  public SqlSessionFactory sqlSessionFactory(@Qualifier("dataSource") DataSource dataSource) throws Exception {
    SqlSessionFactoryBean sessionFactoryBean = new SqlSessionFactoryBean();
    sessionFactoryBean.setConfigLocation(new ClassPathResource("mybatis-config.xml"));
    sessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver()
        .getResources("classpath*:mapper/*.xml"));
    sessionFactoryBean.setDataSource(dataSource);
    return sessionFactoryBean.getObject();
  }
}

If anything is unclear, see the source code on GitHub: orderdemo

That is all for this article; I hope it helps with your studies.