You’ve just set up your Kafka cluster and you’re ready to process tens of thousands of events per second. Your architecture is decoupled, all communication goes through a pub-sub bus, and you can focus purely on delivering business value. It would be great if that were true. In real life you need a lot of tweaks before your backbone is ready to handle the traffic you expect.
21. Lost events
ERROR [Replica Manager on Broker 2]: Error when processing
fetch request for partition [test,1] offset 10000 from consumer
with correlation id 0. Possible cause:
Request for offset 10000 but we only have log segments in the
range 8000 to 9000. (kafka.server.ReplicaManager)
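This error appears when a consumer asks for an offset that is no longer on disk: the log segments holding it were already deleted by retention, so a consumer that falls too far behind silently loses events. A hedged sketch of the settings involved on both sides (the values are illustrative examples, not recommendations):

```properties
# Broker / topic side: how long (or how much) data is retained.
# Segments outside these limits are deleted, and any consumer still
# pointing before the new log start offset hits the error above.
log.retention.hours=168
log.retention.bytes=1073741824

# Consumer side: what to do when the requested offset is out of range.
# "earliest" jumps to the oldest retained record (older data is lost),
# "latest" jumps to the tip of the log (everything in between is skipped).
auto.offset.reset=earliest
```

Either way the gap is real data loss; monitoring consumer lag is the only way to notice it before retention does.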
38. Optimize message size
JSON
Snappy
ERROR Error when sending message to topic t3 with key: 4 bytes, value: 100
bytes with error: The server experienced an unexpected error when
processing the request (org.apache.kafka.clients.producer.internals.
ErrorLoggingCallback)
java: target/snappy-1.1.1/snappy.cc:423: char* snappy::internal::
CompressFragment(const char*, size_t, char*, snappy::uint16*, int): Assertion
`0 == memcmp(base, candidate, matched)' failed.
Errors when publishing a large number of messages
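Large batches of big JSON messages are exactly where Snappy-compressed requests can blow past broker limits, producing errors like the ones above (the snappy.cc assertion was a known snappy-java bug; upgrading that dependency resolved it in many reported cases). A sketch of producer and broker settings that usually need tuning together, with example values assuming the defaults are too small for your payloads:

```properties
# Producer side
compression.type=snappy      # compress batches before sending
batch.size=65536             # per-partition batch buffer, in bytes
linger.ms=5                  # wait briefly so batches fill up
max.request.size=2097152     # must fit your largest request

# Broker / topic side: must accept what the producer sends
message.max.bytes=2097152
replica.fetch.max.bytes=2097152
```

If the broker-side limits are lower than the producer’s `max.request.size`, oversized requests are rejected even though the producer considered them valid.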
47. Improved security
Authentication and authorization interfaces provided
By default:
You can create any topic in your group
You can publish everywhere (in progress)
Group owner defines subscriptions
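The bullets above describe a policy layer on top of Kafka; plain Kafka can express a similar model with SASL authentication plus an ACL authorizer. A minimal sketch, assuming a SASL_SSL listener is already configured on the brokers (the username, password, and authorizer choice here are illustrative):

```properties
# Broker side: enable ACL-based authorization
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# Client side: authenticate over SASL_SSL
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule \
  required username="alice" password="alice-secret";
```

With the authorizer enabled, topic creation, publishing, and subscriptions can each be granted per principal, which is what makes rules like “group owner defines subscriptions” enforceable rather than conventional.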