- Sequential disk access can in some cases be faster than random memory access!
- The memory overhead of objects is very high, often doubling the size of the data stored (or worse).
- Java garbage collection becomes increasingly fiddly and slow as the in-heap data increases.
All data is immediately written to a persistent log on the filesystem without necessarily flushing to disk. In effect this just means that it is transferred into the kernel's pagecache.
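As a toy illustration (plain Python, not Kafka code): a `write()` to an appended log file returns once the bytes are in the kernel's pagecache; nothing guarantees they are on disk until an explicit `fsync`, which is exactly the distinction above.

```python
import os
import tempfile

# Sketch of appending records to a log file. write()/flush() only move the
# bytes into the kernel's pagecache; they are not guaranteed to be on the
# physical disk until fsync() (or a kernel writeback) happens.
log_path = os.path.join(tempfile.mkdtemp(), "segment.log")

with open(log_path, "ab") as log:
    for record in [b"msg-1\n", b"msg-2\n", b"msg-3\n"]:
        log.write(record)           # returns quickly: data sits in the pagecache
    log.flush()                     # user-space buffer -> kernel pagecache
    # os.fsync(log.fileno())        # uncomment to force pagecache -> disk

with open(log_path, "rb") as log:
    print(log.read().count(b"\n"))  # -> 3
```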
Kafka uses a pull model: consumers poll the broker for data, instead of the broker pushing data to consumers.
A downside of a naive pull model is that an idle consumer can end up busy-polling when no data has arrived. To avoid this, the fetch request has parameters that allow the consumer request to block in a "long poll", waiting until data arrives (and optionally until a given number of bytes is available).
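A minimal simulation of the long-poll idea (this is not the Kafka protocol; the real knobs on the fetch request are `fetch.max.wait.ms` and `fetch.min.bytes`):

```python
import queue
import threading

# Toy "broker": a queue the consumer polls. Instead of busy-looping,
# the consumer blocks until a record arrives or a timeout expires.
broker = queue.Queue()

def long_poll(q, timeout_s):
    """Block up to timeout_s waiting for one record; None on timeout."""
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        return None

# A producer that publishes a record after a short delay.
threading.Timer(0.2, broker.put, args=("record-1",)).start()

print(long_poll(broker, timeout_s=5))    # blocks ~0.2s, then -> record-1
print(long_poll(broker, timeout_s=0.1))  # nothing arrives -> None
```

The consumer sleeps inside the blocking `get` rather than spinning, which is the whole point of the long poll.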
(Kafka's persistent storage feels similar to Git to me ... :-))
So effectively Kafka guarantees at-least-once delivery by default, and allows the user to implement at-most-once delivery by disabling retries on the producer and having the consumer commit its offset prior to processing a batch of messages. Exactly-once delivery requires cooperation with the destination storage system, but Kafka provides the offset, which makes implementing this straightforward.
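The effect of the commit ordering can be sketched with a toy simulation (plain Python, not Kafka client code): a consumer "crashes" between committing and processing one message, then restarts from the last committed offset. Committing first loses that message (at-most-once); processing first redelivers it (at-least-once).

```python
def consume_with_crash(messages, commit_first):
    """Process a list of messages, simulating a crash between the commit
    and the processing of the message at index 1, then resuming from the
    last committed offset."""
    processed, committed = [], 0

    def attempt(start, crash_index):
        nonlocal committed
        for i in range(start, len(messages)):
            if commit_first:
                committed = i + 1               # at-most-once: commit first ...
                if i == crash_index:
                    return                      # ... crash before processing: message lost
                processed.append(messages[i])
            else:
                processed.append(messages[i])   # at-least-once: process first ...
                if i == crash_index:
                    return                      # ... crash before commit: will reprocess
                committed = i + 1

    attempt(0, crash_index=1)            # first run crashes at index 1
    attempt(committed, crash_index=None) # restart from the committed offset
    return processed

msgs = ["a", "b", "c"]
print(consume_with_crash(msgs, commit_first=True))   # -> ['a', 'c'] ("b" lost)
print(consume_with_crash(msgs, commit_first=False))  # -> ['a', 'b', 'b', 'c'] ("b" duplicated)
```

Neither ordering alone gives exactly-once; that is why deduplicating by the stored offset in the destination system is needed.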