Overview
Spring Batch is a lightweight yet comprehensive batch processing framework designed to help enterprises build robust, efficient batch applications.
Spring Batch provides many reusable components, including logging and tracing, transaction management, job restart, skip, repeat, and resource management. For high-volume, high-performance batch jobs it also offers advanced features such as partitioning and remote processing. In short, Spring Batch supports simple, complex, and high-volume batch jobs alike.
Note that Spring Batch is a batch application framework, not a scheduling framework; it is meant to be combined with a scheduling framework to build a complete batch solution.
The framework mainly provides the following features:
- Transaction management
- Chunk-based processing
- Declarative I/O
- Start/Stop/Restart and Retry/Skip
The framework defines the following roles:
- JobLauncher: the job launcher, used to start a job; it can be seen as the program's entry point
- Job: represents a concrete batch job
- Step: represents a concrete step; one Job can contain multiple Steps
- JobRepository: where the batch metadata is stored; it can be seen as an interface to the database
Processing architecture
The actual processing is defined inside a Step: each Step declares how data is read, how it is processed, and how it is written out.
- Reader: defines the read operation, including where the input is read from, how the input is split into records, and how each record is mapped onto the properties of a target object.
- Processor: where the data is actually processed; the business logic lives here.
- Writer: defines the write operation, for example writing the data into a database.
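The read-process-write cycle above can be sketched in plain Java. This is a minimal simulation of Spring Batch's chunk-oriented loop, with simplified stand-in interfaces rather than the real framework types:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal simulation of the chunk-oriented loop: read items one by one,
// process each, and write a whole chunk at a time.
public class ChunkLoop {

    interface Reader<T> { T read(); }            // returns null when input is exhausted
    interface Processor<I, O> { O process(I item); }
    interface Writer<T> { void write(List<T> chunk); }

    // Reads up to chunkSize items, processes each, then writes the chunk in one go.
    public static <I, O> int run(Reader<I> reader, Processor<I, O> processor,
                                 Writer<O> writer, int chunkSize) {
        int written = 0;
        while (true) {
            List<O> chunk = new ArrayList<>();
            I item;
            while (chunk.size() < chunkSize && (item = reader.read()) != null) {
                chunk.add(processor.process(item));
            }
            if (chunk.isEmpty()) break;          // nothing left to read
            writer.write(chunk);                 // one write (and one commit) per chunk
            written += chunk.size();
        }
        return written;
    }

    public static void main(String[] args) {
        Iterator<String> it = List.of("a", "b", "c").iterator();
        List<String> out = new ArrayList<>();
        int n = run(() -> it.hasNext() ? it.next() : null,
                    s -> s.toUpperCase(), out::addAll, 2);
        System.out.println(n + " " + out);
    }
}
```

In the real framework each chunk is also wrapped in a transaction, which is what makes restart and skip possible.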
Quick start
Step 1: add the dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>
Step 2: edit the configuration file

spring:
  datasource:
    username: root
    password: 123456
    url: jdbc:mysql://127.0.0.1:3306/test_springbatch?allowPublicKeyRetrieval=true&useSSL=true
    driver-class-name: com.mysql.cj.jdbc.Driver
    # Initialize the database; the script ships inside the dependency jar
    schema: classpath:org/springframework/batch/core/schema-mysql.sql
    # Three possible values: always = always initialize, embedded = only initialize an embedded database, never = never initialize
    initialization-mode: always
Step 3: create the job configuration class

import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Slf4j
@Configuration
@EnableBatchProcessing // enable batch processing
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory; // the job builder factory

    @Autowired
    private StepBuilderFactory stepBuilderFactory; // the step builder factory

    @Bean
    public Job firstJob() {
        return jobBuilderFactory.get("firstJob").start(stepOne()).build();
    }

    @Bean
    public Step stepOne() {
        return stepBuilderFactory.get("stepOne").tasklet(new Tasklet() {
            @Override
            public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
                // the body of the task
                log.info("Executing thread: {}, content: {}", Thread.currentThread().getName(), "step one");
                return RepeatStatus.FINISHED;
            }
        }).build();
    }
}
Startup result:
Note that by default the configured batch jobs run automatically when the application starts. If you do not want them to run on startup, change it in the configuration file:

spring:
  batch:
    job:
      enabled: false
Core concepts
JobInstance
Each logical run of a Job corresponds to a JobInstance; it is a concept that belongs to the job execution process.
JobExecution
A single execution attempt of a job is not guaranteed to succeed, so each attempt corresponds to a JobExecution. Only when an execution completes successfully is the JobInstance it belongs to considered complete.
JobParameters
If a job runs once a day, each day produces a new JobInstance, yet the job definition is identical every day. How, then, are different JobInstances of the same job distinguished? Spring Batch identifies a JobInstance by its JobParameters. A JobParameters object holds the set of parameters used to start a batch job; during the run they can be used for identification or even as reference data.
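The identity rule (JobInstance = job name + identifying JobParameters) can be sketched in plain Java. This is a simplified stand-in, not the real Spring Batch classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Simplified sketch of JobInstance identity: the same job name plus the same
// parameter set always maps back to the same instance id; a new parameter set
// yields a new instance.
public class JobInstanceIdentity {

    private final Map<String, Long> instances = new HashMap<>();
    private long nextId = 1;

    // Key = job name + sorted parameters, mirroring "identifying parameters".
    public long instanceIdFor(String jobName, Map<String, Object> params) {
        String key = jobName + "|" + new TreeMap<>(params);
        return instances.computeIfAbsent(key, k -> nextId++);
    }

    public static void main(String[] args) {
        JobInstanceIdentity registry = new JobInstanceIdentity();
        long monday = registry.instanceIdFor("dailyJob", Map.of("date", "2023-11-06"));
        long mondayAgain = registry.instanceIdFor("dailyJob", Map.of("date", "2023-11-06"));
        long tuesday = registry.instanceIdFor("dailyJob", Map.of("date", "2023-11-07"));
        System.out.println(monday == mondayAgain); // same instance: a restart, not a new run
        System.out.println(monday == tuesday);     // different parameters: a new instance
    }
}
```

This is also why the scheduled task later in this article adds a fresh timestamp parameter on every run: it forces a new JobInstance each time.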
StepExecution
Represents one execution of a Step. A new StepExecution is created every time a Step runs, just like a JobExecution. A step may never execute, however, if the step before it fails: a StepExecution is created only when the Step actually starts. Each StepExecution holds a reference to its Step and to the owning JobExecution, along with transaction-related data such as commit and rollback counts and start and end times. In addition, each step execution holds an ExecutionContext containing any data the developer needs to persist across the batch run, such as statistics or state information required for a restart.
ExecutionContext
The execution context of a StepExecution; conceptually a container holding a set of key/value pairs.
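What that buys you is restartability. The sketch below simulates, in plain Java with an ordinary Map standing in for the ExecutionContext, how a step can record its progress and resume after a failure (in the real framework the context is persisted by the JobRepository):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of how an ExecutionContext carries restart state: the step records the
// id it last processed, and a restarted step resumes from that point.
public class RestartSketch {

    // Processes ids after the last recorded one up to max, saving progress in
    // ctx under "lastId". failAt simulates a crash at that id (-1 = no crash).
    public static int runStep(Map<String, Object> ctx, int max, int failAt) {
        int last = (int) ctx.getOrDefault("lastId", 0);
        int processed = 0;
        for (int id = last + 1; id <= max; id++) {
            if (id == failAt) return processed;  // simulated crash mid-step
            ctx.put("lastId", id);               // state survives in the context
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Map<String, Object> ctx = new HashMap<>();
        System.out.println(runStep(ctx, 10, 6));  // crashes at id 6: items 1..5 processed
        System.out.println(runStep(ctx, 10, -1)); // restart: resumes at 6, processes 6..10
    }
}
```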
A more complete example
Test scenario: loading data into a database, as a small demo.
The Student entity class
@Table(name = "student")
@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Student {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private String id;
    private String name;
    private Integer age;
    private Integer cid;
}
The job listener
@Slf4j
@Component
public class JobListener implements JobExecutionListener {

    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private long startTime;

    @Autowired
    public JobListener(ThreadPoolTaskExecutor taskExecutor) {
        this.threadPoolTaskExecutor = taskExecutor;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        startTime = System.currentTimeMillis();
        // log the parameters the job was started with
        log.info("job before " + jobExecution.getJobParameters());
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        log.info("JOB STATUS : {}", jobExecution.getStatus());
        log.info("JOB took {} ms", System.currentTimeMillis() - startTime);
        if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
            log.info("JOB FINISHED");
            threadPoolTaskExecutor.destroy();
        } else if (jobExecution.getStatus() == BatchStatus.FAILED) {
            log.info("JOB FAILED");
        }
    }
}
Defining the job
@Slf4j
@Component
public class DataBatchJob {

    // the builder factories
    private final JobBuilderFactory jobBuilderFactory;
    private final StepBuilderFactory stepBuilderFactory;
    private final EntityManagerFactory emf;
    private final JobListener jobListener;

    @Autowired
    public DataBatchJob(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory,
                        EntityManagerFactory emf, JobListener jobListener) {
        this.jobBuilderFactory = jobBuilderFactory;
        this.stepBuilderFactory = stepBuilderFactory;
        this.emf = emf;
        this.jobListener = jobListener;
    }

    /*
     * @Date: 2023/11/7 16:18
     * @Description: the basic Job object
     **/
    public Job dataHandleJob() {
        return jobBuilderFactory.get("dataHandleJob")
                .incrementer(new RunIdIncrementer()) // auto-incrementing run id
                .start(stepOne())
                .listener(jobListener)
                .build();
    }

    private Step stepOne() {
        return stepBuilderFactory.get("stepOne")
                .<Student, Student>chunk(10)
                .faultTolerant().retryLimit(3).retry(Exception.class)
                /* the core phases */
                .reader(getDataReader())
                .processor(getDataProcessor())
                .writer(getDataWriter())
                .build();
    }

    private ItemWriter<? super Student> getDataWriter() {
        return list -> {
            for (Student student : list) {
                // simulate the write; to keep the demo simple we do not write to the database
                log.info("write data : " + student);
            }
        };
    }

    /*
     * @Date: 2023/11/7 16:50
     * @Description: process the data
     **/
    private ItemProcessor<? super Student, ? extends Student> getDataProcessor() {
        // the processing phase
        return student -> {
            log.info("processor data : " + student.toString());
            return student;
        };
    }

    private ItemReader<? extends Student> getDataReader() {
        /*// JPA, JDBC or JMS could supply the data instead
        JpaPagingItemReader<Student> reader = new JpaPagingItemReader<>();
        //JdbcPagingItemReader<Student> reader = new JdbcPagingItemReader();
        try {
            JpaNativeQueryProvider<Student> queryProvider = new JpaNativeQueryProvider<>();
            //MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider();
            queryProvider.setSqlQuery("SELECT * FROM student");
            queryProvider.setEntityClass(Student.class);
            queryProvider.afterPropertiesSet();
            // paged reading
            reader.setPageSize(30);
            reader.setQueryProvider(queryProvider);
            reader.afterPropertiesSet();
            // every ItemReader/ItemWriter implementation stores its current state in the ExecutionContext before it is committed
            reader.setSaveState(true);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return reader;*/
        Student st = new Student("1", "张三", 18, 1);
        List<Student> data = Arrays.asList(st, st, st);
        return new IteratorItemReader<>(data.iterator());
    }
}
Scheduling the job
@Slf4j
@Component
public class TimeTask {

    private final JobLauncher jobLauncher;
    private final DataBatchJob dataBatchJob;

    @Autowired
    public TimeTask(JobLauncher jobLauncher, DataBatchJob dataBatchJob) {
        this.jobLauncher = jobLauncher;
        this.dataBatchJob = dataBatchJob;
    }

    @Scheduled(cron = "0/10 * * * * ?")
    public void runBatch() throws Exception {
        log.info("Scheduled task fired...");
        // When running a job you must pass at least one identifying parameter; it ends up in the
        // batch_job_execution_params table. Without it the job will not run, and the same parameter
        // set must not already exist in the table, otherwise an exception is thrown.
        // That is why a timestamp is used as the parameter here.
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("timestamp", System.currentTimeMillis())
                .toJobParameters();
        Job job = dataBatchJob.dataHandleJob();
        JobExecution execution = jobLauncher.run(job, jobParameters);
        log.info("Scheduled task finished. Exit Status : {}", execution.getStatus());
    }
}
Usage reference
Reading data
ItemReader
Purpose: the data-input side of the framework. Many input sources are supported: flat files, XML, databases, message queues, JMS, and so on.
Implementing the ItemReader interface
Student st = new Student("1", "张三", 18, 1);
List<Student> data = Arrays.asList(st, st, st);
Iterator<Student> iterator = data.iterator();
IteratorItemReader<Student> reader = new IteratorItemReader<>(iterator);
return reader;
Reading from a database
@Bean
public JdbcPagingItemReader<User> jdbcReader() {
    JdbcPagingItemReader<User> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(dataSource); // the data source
    reader.setFetchSize(3); // how many rows to fetch at a time (the page size)
    reader.setRowMapper(new RowMapper<User>() { // converts each ResultSet row into a User
        @Override
        public User mapRow(ResultSet rs, int rowNum) throws SQLException {
            User user = new User();
            user.setId(rs.getInt("id"));
            user.setName(rs.getString("name"));
            return user;
        }
    });
    MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider();
    provider.setSelectClause("id,name"); // columns to return
    provider.setFromClause("from user"); // table to query
    provider.setWhereClause("where id > 3"); // filter condition
    Map<String, Order> sort = new HashMap<>(); // sort order
    sort.put("id", Order.DESCENDING); // descending
    provider.setSortKeys(sort);
    reader.setQueryProvider(provider); // attach the SQL query definition
    return reader;
}

Here jdbcReader is the core: it uses JdbcPagingItemReader to read the data page by page.
Reading from a file

@Bean
@StepScope
public FlatFileItemReader<User> fileReader2() {
    FlatFileItemReader<User> reader = new FlatFileItemReader<>();
    reader.setResource(new ClassPathResource("user.txt")); // the file to read
    reader.setLinesToSkip(1); // skip the first line (the header)
    // tokenize each line
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[]{"id", "name"}); // map columns by position to field names
    // tokenizer.setDelimiter(","); // the delimiter, a comma by default; any other character can be set
    // map the parsed fields onto an object
    DefaultLineMapper<User> mapper = new DefaultLineMapper<>();
    mapper.setLineTokenizer(tokenizer);
    mapper.setFieldSetMapper(new FieldSetMapper<User>() {
        @Override
        public User mapFieldSet(FieldSet fieldSet) throws BindException {
            User user = new User();
            user.setId(fieldSet.readInt("id"));
            user.setName(fieldSet.readString("name"));
            return user;
        }
    });
    mapper.afterPropertiesSet();
    reader.setLineMapper(mapper); // the line mapper
    return reader;
}
Reading from XML
First add the dependencies; XStreamMarshaller is used as the converter.

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-oxm</artifactId>
</dependency>
<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.11.1</version>
</dependency>
Implementing the input method

@Bean
public StaxEventItemReader<User> xmlReader() {
    StaxEventItemReader<User> reader = new StaxEventItemReader<>();
    reader.setResource(new ClassPathResource("user.xml"));
    reader.setFragmentRootElementName("user"); // the root of each data fragment, i.e. the <user> tag
    XStreamMarshaller xStreamMarshaller = new XStreamMarshaller(); // converts XML into objects
    Map<String, Class> map = new HashMap<>();
    map.put("user", User.class);
    xStreamMarshaller.setAliases(map);
    reader.setUnmarshaller(xStreamMarshaller); // the XML unmarshaller
    return reader;
}
Reading multiple files

@Bean // multi-file reading
public MultiResourceItemReader<User> multiFileReader() { // the aggregating reader
    MultiResourceItemReader<User> reader = new MultiResourceItemReader<>();
    reader.setDelegate(fileReader()); // the reader used for each individual file
    reader.setResources(resources); // the Resource[] of files to be read
    return reader;
}

@Bean
@StepScope // the reader for a single file
public FlatFileItemReader<User> fileReader() {
    FlatFileItemReader<User> reader = new FlatFileItemReader<>();
    // reader.setResource(new ClassPathResource("user.txt")); // not needed here: the MultiResourceItemReader supplies each resource
    reader.setLinesToSkip(1); // skip the first line (the header)
    // tokenize each line
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[]{"id", "name"}); // map columns by position to field names
    // tokenizer.setDelimiter(","); // the delimiter, a comma by default; any other character can be set
    // map the parsed fields onto an object
    DefaultLineMapper<User> mapper = new DefaultLineMapper<>();
    mapper.setLineTokenizer(tokenizer);
    mapper.setFieldSetMapper(new FieldSetMapper<User>() {
        @Override
        public User mapFieldSet(FieldSet fieldSet) throws BindException {
            User user = new User();
            user.setId(fieldSet.readInt("id"));
            user.setName(fieldSet.readString("name"));
            return user;
        }
    });
    mapper.afterPropertiesSet();
    reader.setLineMapper(mapper); // the line mapper
    return reader;
}
Writing data
In Spring Batch the writer is the key component that writes processed data to the target store, for example a database, a file, or a message queue.
JdbcBatchItemWriter
Writes data to a relational database, typically together with the JDBC module.
@Bean
public JdbcBatchItemWriter<DataObject> jdbcWriter(DataSource dataSource) {
    return new JdbcBatchItemWriterBuilder<DataObject>()
            .dataSource(dataSource)
            .sql("INSERT INTO your_table (column1, column2) VALUES (:value1, :value2)")
            .beanMapped()
            .build();
}
JpaItemWriter
Writes data through a JPA provider such as Hibernate, persisting it into a relational database.
@Bean
public JpaItemWriter<DataObject> jpaWriter(EntityManagerFactory entityManagerFactory) {
    JpaItemWriter<DataObject> writer = new JpaItemWriter<>();
    writer.setEntityManagerFactory(entityManagerFactory);
    return writer;
}
FlatFileItemWriter
Writes data to plain-text files such as CSV; you can specify the file's format and location.
@Bean
public FlatFileItemWriter<DataObject> fileWriter() {
    return new FlatFileItemWriterBuilder<DataObject>()
            .name("fileWriter")
            .resource(new FileSystemResource("output-data.csv"))
            .delimited()
            .delimiter(",")
            .names(new String[]{"column1", "column2"})
            .build();
}
CompositeItemWriter
Writes the data to several targets at once, for example a database and a file.
@Bean
public CompositeItemWriter<DataObject> compositeWriter(JdbcBatchItemWriter<DataObject> jdbcWriter, FlatFileItemWriter<DataObject> fileWriter) {
    CompositeItemWriter<DataObject> writer = new CompositeItemWriter<>();
    writer.setDelegates(Arrays.asList(jdbcWriter, fileWriter));
    return writer;
}
CustomItemWriter: a custom writer

public class CustomItemWriter implements ItemWriter<DataObject> {
    @Override
    public void write(List<? extends DataObject> items) throws Exception {
        // custom write logic goes here
    }
}
Supporting facilities
Validation
Validation before the job starts
A batch job should verify the correctness of its input data and the validity of its read and write operations. Validating before the job starts greatly improves data accuracy.
@Configuration
public class JobValidateListener {

    @Autowired
    private Validator validator;

    @Autowired
    private Job job;

    @PostConstruct
    public void init() {
        JobValidationListener validationListener = new JobValidationListener();
        validationListener.setValidator(validator);
        // registerJobExecutionListener is defined on AbstractJob, not on the Job interface
        ((AbstractJob) job).registerJobExecutionListener(validationListener);
    }
}
public class JobValidationListener implements JobExecutionListener {

    private Validator validator;

    public void setValidator(Validator validator) {
        this.validator = validator;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        JobParameters parameters = jobExecution.getJobParameters();
        // the custom validator
        BatchJobParameterValidator validator = new BatchJobParameterValidator(parameters);
        validator.validate();
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // code to run after the job has finished
    }
}
Validation on read and write
A batch job should also validate every read and write, so that invalid data is never written to the target store.
@Bean
public ItemReader<Data> reader() {
    JpaPagingItemReader<Data> reader = new JpaPagingItemReader<>();
    reader.setEntityManagerFactory(entityManagerFactory);
    reader.setPageSize(1000);
    reader.setQueryString(FIND_DATA_BY_NAME_AND_AGE);
    Map<String, Object> parameters = new HashMap<>();
    parameters.put("name", "test");
    parameters.put("age", 20);
    reader.setParameterValues(parameters);
    // validate via the setValidationQuery method
    reader.setValidationQuery("select count(*) from data where name=#{name} and age=#{age}");
    return reader;
}
Monitoring
Monitoring with Spring Boot Actuator
While a batch job runs you should be able to see its progress and status at any time, and Spring Boot Actuator can provide that. It exposes a rich set of metrics and endpoints that help developers monitor the running job's health in real time.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Monitoring is enabled simply by adding the actuator dependency.
While the batch job runs, a management console can then be used to watch its execution and status; by surfacing metrics and task logs on the console, problems in the job can be detected and handled promptly.
@Configuration
public class BatchLoggingConfiguration {

    @Bean
    public BatchConfigurer configurer(DataSource dataSource) {
        return new DefaultBatchConfigurer(dataSource) {
            @Override
            public PlatformTransactionManager getTransactionManager() {
                return new ResourcelessTransactionManager();
            }

            @Override
            public JobLauncher getJobLauncher() throws Exception {
                SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
                jobLauncher.setJobRepository(getJobRepository());
                jobLauncher.afterPropertiesSet();
                return jobLauncher;
            }

            @Override
            public JobRepository getJobRepository() throws Exception {
                JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
                factory.setDataSource(getDataSource());
                factory.setTransactionManager(getTransactionManager());
                factory.setIsolationLevelForCreate("ISOLATION_DEFAULT");
                factory.afterPropertiesSet();
                return factory.getObject();
            }
        };
    }
}
Optimization
Tune the data source configuration
<bean id="dataSource"
      class="com.alibaba.druid.pool.DruidDataSource"
      init-method="init"
      destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
    <property name="initialSize" value="${druid.initialSize}" />
    <property name="minIdle" value="${druid.minIdle}" />
    <property name="maxActive" value="${druid.maxActive}" />
    <property name="maxWait" value="${druid.maxWait}" />
    <property name="timeBetweenEvictionRunsMillis" value="${druid.timeBetweenEvictionRunsMillis}" />
    <property name="minEvictableIdleTimeMillis" value="${druid.minEvictableIdleTimeMillis}" />
    <property name="validationQuery" value="${druid.validationQuery}" />
    <property name="testWhileIdle" value="${druid.testWhileIdle}" />
    <property name="testOnBorrow" value="${druid.testOnBorrow}" />
    <property name="testOnReturn" value="${druid.testOnReturn}" />
    <property name="poolPreparedStatements" value="${druid.poolPreparedStatements}" />
    <property name="maxPoolPreparedStatementPerConnectionSize" value="${druid.maxPoolPreparedStatementPerConnectionSize}" />
    <property name="filters" value="${druid.filters}" />
</bean>
Prefer a pooled data source over plain per-request connections.
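In a Spring Boot setup the same idea applies without the XML: Boot's default pool is HikariCP, and it can be tuned directly in the configuration file. A minimal sketch (the pool sizes here are illustrative assumptions, not recommendations):

```yaml
spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/test_springbatch
    username: root
    password: 123456
    hikari:
      minimum-idle: 5           # idle connections kept ready for the next chunk
      maximum-pool-size: 20     # upper bound on concurrent connections
      connection-timeout: 30000 # milliseconds to wait for a free connection
```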
Partitioned batch processing
For very large data volumes, use a partitioning strategy: split the batch job into several smaller tasks that execute concurrently, improving throughput.
@Configuration
public class BatchConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private DataSource dataSource;

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
                .incrementer(new RunIdIncrementer())
                .start(step1())
                .next(step2())
                .build();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<User, User>chunk(10000)
                .reader(reader(null))
                .processor(processor())
                .writer(writer(null))
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    public Step step2() {
        return stepBuilderFactory.get("step2")
                .<User, User>chunk(10000)
                .reader(reader2(null))
                .processor(processor())
                .writer(writer2(null))
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    @StepScope
    public JdbcCursorItemReader<User> reader(@Value("#{stepExecutionContext['fromId']}") Long fromId) {
        JdbcCursorItemReader<User> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        reader.setSql("SELECT * FROM user WHERE id > ? AND id <= ?");
        reader.setPreparedStatementSetter(new PreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps) throws SQLException {
                ps.setLong(1, fromId);
                ps.setLong(2, fromId + 10000);
            }
        });
        reader.setRowMapper(new BeanPropertyRowMapper<>(User.class));
        return reader;
    }

    @Bean
    @StepScope
    public JdbcCursorItemReader<User> reader2(@Value("#{stepExecutionContext['fromId']}") Long fromId) {
        JdbcCursorItemReader<User> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        reader.setSql("SELECT * FROM user WHERE id > ?");
        reader.setPreparedStatementSetter(new PreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps) throws SQLException {
                ps.setLong(1, fromId + 10000);
            }
        });
        reader.setRowMapper(new BeanPropertyRowMapper<>(User.class));
        return reader;
    }

    @Bean
    public ItemProcessor<User, User> processor() {
        return new UserItemProcessor();
    }

    @Bean
    public ItemWriter<User> writer(DataSource dataSource) {
        JdbcBatchItemWriter<User> writer = new JdbcBatchItemWriter<>();
        writer.setDataSource(dataSource);
        writer.setSql("INSERT INTO user(name, age) VALUES(?, ?)");
        writer.setItemPreparedStatementSetter(new UserPreparedStatementSetter());
        return writer;
    }

    @Bean
    public ItemWriter<User> writer2(DataSource dataSource) {
        JdbcBatchItemWriter<User> writer = new JdbcBatchItemWriter<>();
        writer.setDataSource(dataSource);
        writer.setSql("UPDATE user SET age = ? WHERE name = ?");
        writer.setItemPreparedStatementSetter(new UserUpdatePreparedStatementSetter());
        return writer;
    }

    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(20);
        executor.setQueueCapacity(30);
        executor.initialize();
        return executor;
    }

    @Bean
    public StepExecutionListener stepExecutionListener() {
        return new StepExecutionListenerSupport() {
            @Override
            public ExitStatus afterStep(StepExecution stepExecution) {
                if (stepExecution.getSkipCount() > 0) {
                    return new ExitStatus("COMPLETED_WITH_SKIPS");
                } else {
                    return ExitStatus.COMPLETED;
                }
            }
        };
    }
}
Monitoring and error-handling strategies
@Configuration
public class BatchConfiguration {
    ...
    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .<User, User>chunk(10000)
                .reader(reader(null))
                .processor(processor())
                .writer(writer(null))
                .taskExecutor(taskExecutor())
                .faultTolerant() // fault-tolerance settings
                .skipPolicy(new UserSkipPolicy()) // policy for skipping bad records
                .retryPolicy(new SimpleRetryPolicy()) // retry policy (3 attempts by default; a retryPolicy cannot be combined with retryLimit)
                .noRollback(NullPointerException.class) // exceptions that do not trigger a rollback
                .listener(stepExecutionListener())
                .build();
    }

    @Bean
    public StepExecutionListener stepExecutionListener() {
        return new StepExecutionListenerSupport() {
            @Override
            public ExitStatus afterStep(StepExecution stepExecution) {
                if (stepExecution.getSkipCount() > 0) {
                    return new ExitStatus("COMPLETED_WITH_SKIPS");
                } else {
                    return ExitStatus.COMPLETED;
                }
            }
        };
    }

    @Bean
    public SkipPolicy userSkipPolicy() {
        return (Throwable t, int skipCount) -> {
            // skip everything except NullPointerException
            if (t instanceof NullPointerException) {
                return false;
            } else {
                return true;
            }
        };
    }

    @Bean
    public RetryPolicy simpleRetryPolicy() {
        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
        retryPolicy.setMaxAttempts(3);
        return retryPolicy;
    }

    @Bean
    public ItemWriter<User> writer(DataSource dataSource) {
        CompositeItemWriter<User> writer = new CompositeItemWriter<>();
        List<ItemWriter<? super User>> writers = new ArrayList<>();
        writers.add(new UserItemWriter());
        writers.add(new LogUserItemWriter());
        writer.setDelegates(writers);
        writer.afterPropertiesSet();
        return writer;
    }

    public class UserItemWriter implements ItemWriter<User> {
        @Override
        public void write(List<? extends User> items) throws Exception {
            for (User item : items) {
                ...
            }
        }
    }

    // also implements ItemWriteListener, since onWriteError is a listener method, not part of ItemWriter
    public class LogUserItemWriter implements ItemWriter<User>, ItemWriteListener<User> {
        @Override
        public void write(List<? extends User> items) throws Exception {
            for (User item : items) {
                ...
            }
        }

        @Override
        public void beforeWrite(List<? extends User> items) {
        }

        @Override
        public void afterWrite(List<? extends User> items) {
        }

        @Override
        public void onWriteError(Exception exception, List<? extends User> items) {
            ...
        }
    }

    @Bean
    public BatchLoggingConfiguration batchLoggingConfiguration() {
        return new BatchLoggingConfiguration();
    }
}