[CF 431D] Random Task

Luogu solution: counting binary ones

Problem

Luogu

Statement

Find any $n$ such that exactly $m$ of the integers $n+1, n+2, \dots, 2n$ have exactly $k$ ones in their binary representation.
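
A minimal brute-force checker makes the statement concrete (my own sketch; the helper name check is hypothetical, and it is far too slow for the real bounds):

#include <cstdint>

// Does [n+1, 2n] contain exactly m numbers with exactly k one-bits?
// O(n) per call; for intuition on small n only.
bool check(uint64_t n, uint64_t m, int k)
{
	uint64_t cnt = 0;
	for (uint64_t x = n + 1; x <= 2 * n; ++x)
		if (__builtin_popcountll(x) == k)
			++cnt;
	return cnt == m;
}

For example, check(5, 3, 2) is true: among 6..10, exactly 6 = 110₂, 9 = 1001₂ and 10 = 1010₂ have two one-bits.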

Solution

First, the number of values with exactly $k$ ones is monotone in $n$: for $n$ the sequence is $n+1, n+2, \dots, 2n$; for $n+1$ it becomes $n+2, n+3, \dots, 2n+2$. The change is:

  • we lose $n+1$;
  • we gain $2n+1$ and $2n+2$.

Since $2n+2$ has the same number of one-bits as $n+1$ (it is just $n+1$ shifted left by one), only the extra $2n+1$ matters. Whether or not $2n+1$ has exactly $k$ ones, the count either stays the same or grows by one, so the answer is non-decreasing in $n$.
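
To see this concretely, the following standalone sketch (mine, not from the original solution) brute-forces the count for consecutive $n$ and confirms it never decreases:

#include <cstdio>
#include <cstdint>

// For a fixed k, verify count(n) <= count(n+1) over small n,
// where count(n) = |{x in [n+1, 2n] : popcount(x) = k}|.
int main()
{
	const int k = 2; // any fixed k works
	uint64_t prev = 0;
	for (uint64_t n = 1; n <= 2000; ++n)
	{
		uint64_t cur = 0;
		for (uint64_t x = n + 1; x <= 2 * n; ++x)
			if (__builtin_popcountll(x) == k)
				++cur;
		if (n > 1 && cur < prev)
			printf("monotonicity broken at n = %llu\n", (unsigned long long)n);
		prev = cur;
	}
	printf("done\n");
	return 0;
}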

Therefore, we binary search on $n$.

For a given $n$, how do we count how many numbers in its range qualify? Use binomial coefficients directly. Let $f(x)$ be the number of integers in $[0, x)$ with exactly $k$ one-bits. Scan the bits of $x$ from high to low; whenever bit $i$ is set, every number that matches $x$ on the higher bits, has a $0$ at bit $i$, and places exactly $k-t$ ones among the $i$ free low bits (where $t$ is the number of ones already seen in the prefix) is strictly less than $x$, contributing $\binom{i}{k-t}$. The answer for $n$ is then $f(2n)-f(n)$, which counts $[n, 2n-1]$; this equals the count on $[n+1, 2n]$ because $\operatorname{popcount}(2n)=\operatorname{popcount}(n)$. See the code:

#include <cstdio>

typedef long long LL;

const int MAXM = 70;
LL C[MAXM][MAXM]; // Pascal's triangle: C[i][j] = binomial(i, j)

void Prepare()
{
	C[0][0] = 1;
	for (int i = 1; i <= 64; ++i)
	{
		C[i][0] = 1;
		for (int j = 1; j <= i; ++j)
			C[i][j] = C[i - 1][j] + C[i - 1][j - 1];
	}
}

LL m, k;

// f(x): how many integers in [0, x) have exactly k one-bits.
LL Solve(LL x)
{
	LL Res = 0, Meet = 0; // Meet = ones already fixed in the high prefix
	for (int i = 63; i >= 0; --i)
		if (x & (1ll << i))
		{
			// Place a 0 at bit i: the i lower bits are free,
			// and we still need k - Meet ones among them.
			if (Meet <= k)
				Res += C[i][k - Meet];
			Meet++;
		}
	return Res;
}

int main()
{
	Prepare();
	scanf("%lld%lld", &m, &k);
	LL l = 1, r = 1000000000000000000ll;
	while (l <= r)
	{
		LL Mid = (l + r) / 2;
		// Numbers in [Mid+1, 2*Mid] with exactly k ones
		// (equal to the count on [Mid, 2*Mid), see above).
		LL Get = Solve(2 * Mid) - Solve(Mid);
		if (Get > m)
			r = Mid - 1;
		else if (Get == m)
		{
			printf("%lld\n", Mid);
			return 0;
		}
		else
			l = Mid + 1;
	}
	return 0;
}
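
As a sanity check, for input m = 3, k = 2 any $n$ whose range contains exactly three qualifying values is accepted, and $n = 5$ is one such answer (hand-verified above). Solve itself can be cross-checked against popcount on small values; this harness is my own sketch and assumes the Prepare, Solve, and k definitions above:

#include <cassert>

// Compare the combinatorial count f(x) = Solve(x) with a direct
// popcount scan over [0, x) for small x.
void SelfTest()
{
	Prepare();
	k = 2;
	for (LL x = 0; x <= 5000; ++x)
	{
		LL brute = 0;
		for (LL y = 0; y < x; ++y)
			if (__builtin_popcountll((unsigned long long) y) == k)
				++brute;
		assert(Solve(x) == brute);
	}
}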