Retrying and Exponential Backoff with Promises

When you don’t have an interface for knowing when a remote resource is available, an option to consider is to use exponential backoff rather than polling that resource repeatedly until you get a response.


Set Up

In this scenario, let’s mimic the behaviour of a browser and a server. Let’s say the server has an abysmal failure rate of 80%.


```javascript
const randomlyFail = (resolve, reject) =>
  Math.random() < 0.8 ? reject() : resolve();

const apiCall = () =>
  new Promise((...args) => setTimeout(() => randomlyFail(...args), 1000));
```
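To sanity-check the 80% failure rate, the mock can be exercised many times. This is only a sketch: the timeout is shortened to 0 ms to keep it fast, and the exact count will vary run to run.

```javascript
// Same mock as above, but with the timeout shortened to 0 ms for speed.
const randomlyFail = (resolve, reject) =>
  Math.random() < 0.8 ? reject() : resolve();

const apiCall = () =>
  new Promise((...args) => setTimeout(() => randomlyFail(...args), 0));

// Run 1000 calls and count how many resolve; roughly 20% should succeed.
const trials = Array.from({ length: 1000 }, () =>
  apiCall().then(() => 1, () => 0));

Promise.all(trials).then(results => {
  const successes = results.reduce((a, b) => a + b, 0);
  console.log(`${successes} of 1000 calls succeeded`);
});
```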

The apiCall function mimics the behaviour of calling an endpoint on a server.


Retrying

When the apiCall is rejected, getResource is called again immediately.


```javascript
const getResource = () => apiCall().catch(getResource);
```
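The retry loop can be demonstrated with a deterministic mock. The fail-twice-then-succeed behaviour below is hypothetical, chosen only so the example terminates:

```javascript
// Hypothetical mock: rejects on the first two attempts, resolves on the third.
let attempts = 0;
const apiCall = () =>
  new Promise((resolve, reject) =>
    ++attempts < 3 ? reject(new Error('failed')) : resolve('ok'));

// Immediate retry: on rejection, call getResource again right away.
const getResource = () => apiCall().catch(() => getResource());

getResource().then(result => console.log(result, attempts)); // ok 3
```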

With a Delay

If a server is already failing, it may not be wise to overwhelm it with requests. Adding a delay to the system can be a provisional solution to the problem. If possible, it would be better to investigate the cause of the server failing.


```javascript
const delay = () => new Promise(resolve => setTimeout(resolve, 1000));

const getResource = () =>
  apiCall().catch(() => delay().then(() => getResource()));
```

Exponential Backoff

The severity of a server’s failure is sometimes unknown. A server could have had a temporary error or could be offline completely. It can be beneficial to increase the retry delay with every attempt.


The retry count is passed to the delay function and is used to set the setTimeout delay.


```javascript
const delay = retryCount =>
  new Promise(resolve => setTimeout(resolve, 10 ** retryCount));

const getResource = (retryCount = 0) =>
  apiCall().catch(() =>
    delay(retryCount).then(() => getResource(retryCount + 1)));
```

In this case, 10^n was used as the timeout, where n is the retry count. In other words, the first 5 retries will have the following delays: [1 ms, 10 ms, 100 ms, 1 s, 10 s].

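The growth of the delay can be checked directly by evaluating 10 ** n for the first few retry counts. The capped variant shown alongside is a hypothetical refinement, not part of the original snippet, but is a common way to keep the longest wait bounded:

```javascript
// Delay in milliseconds for retry n, matching delay() above.
const backoffMs = retryCount => 10 ** retryCount;

const delays = [0, 1, 2, 3, 4].map(backoffMs);
console.log(delays); // [ 1, 10, 100, 1000, 10000 ]

// Hypothetical refinement: cap the delay so it never exceeds 10 seconds.
const cappedBackoffMs = (retryCount, cap = 10000) =>
  Math.min(cap, 10 ** retryCount);

console.log(cappedBackoffMs(6)); // 10000, not 1000000
```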

Using async/await and Adding a Retry Limit

The above can also be written in a more intelligible form using async/await. A retry limit was also added to restrict the maximum wait time for the getResource function. Since we now have a retry limit, we should have a way to propagate the error, so a lastError parameter was added.


```javascript
const getResource = async (retryCount = 0, lastError = null) => {
  if (retryCount > 5) throw new Error(lastError);
  try {
    // await is required here so that a rejected apiCall() is caught below
    return await apiCall();
  } catch (e) {
    await delay(retryCount);
    return getResource(retryCount + 1, e);
  }
};
```
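Putting the pieces together, here is a self-contained sketch of the async/await version. The flakyCall mock is hypothetical (failing twice before succeeding, so the run terminates quickly) and stands in for apiCall; the delays are the same 10 ** n milliseconds as above:

```javascript
const delay = retryCount =>
  new Promise(resolve => setTimeout(resolve, 10 ** retryCount));

// Hypothetical mock standing in for apiCall: fails twice, then succeeds.
let attempts = 0;
const flakyCall = async () => {
  attempts += 1;
  if (attempts < 3) throw new Error(`attempt ${attempts} failed`);
  return 'ok';
};

const getResource = async (retryCount = 0, lastError = null) => {
  if (retryCount > 5) throw new Error(lastError);
  try {
    return await flakyCall(); // await so a rejection is caught below
  } catch (e) {
    await delay(retryCount);
    return getResource(retryCount + 1, e);
  }
};

getResource().then(result => console.log(result, attempts)); // ok 3
```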

Conclusion

Exponential backoff is a useful technique in dealing with unpredictable API responses. Each project will have its own set of requirements and constraints, and there may be circumstances where there are better solutions. My hope is that, if exponential backoff is the right solution, you'll know how to implement it.


Originally from: https://medium.com/swlh/retrying-and-exponential-backoff-with-promises-1486d3c259
