Kafka for JUnit enables developers to start and stop a complete Kafka cluster, comprising Kafka brokers and distributed Kafka Connect workers, from within a JUnit test. It also provides a rich set of convenient accessors for interacting with such an embedded or external Kafka cluster in a lean and non-obtrusive way.
Kafka for JUnit can be used both to whitebox-test individual Kafka-based components of your application and to blackbox-test applications that expose an incoming and/or outgoing Kafka-based interface.
This presentation gives a brief introduction to Kafka for JUnit, discussing its design principles along with code examples that get developers up to speed with the library quickly.
There are several approaches to testing Kafka-enabled components or services.

Problem
▪ Testing Kafka-based components or services is not trivial
▪ Need to have a working cluster
▪ Must be able to instrument the cluster
▪ Mocking is not an option for integration or system tests

Solution
▪ Spring Kafka provides an embeddable cluster
  ▪ Lacks the possibility to work with external clusters
  ▪ Leads to boilerplate code in your tests
▪ There are (costly) testing tools that are able to inject records into topics
  ▪ Lack the possibility to work with embedded clusters
  ▪ Suitable for system testing, not integration testing
Kafka for JUnit is suitable for integration testing as well as system testing.

Solution
▪ Kafka for JUnit
  ▪ Works with embedded Kafka clusters – suitable for integration testing
  ▪ Works with external Kafka clusters – suitable for system testing
  ▪ Features a concise and readable API (see the minimal sketch below)
  ▪ Features fault injection to trigger error handling
  ▪ Integrates well with Spring Kafka
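To make these points concrete, here is a minimal sketch of a complete test against an embedded cluster. The class and test names are ours; the API calls mirror those shown on the following slides.

public class KafkaForJUnitQuickstartTest {

    private EmbeddedKafkaCluster kafka;

    @BeforeEach
    void setupKafka() {
        // Spins up a single-broker embedded cluster with default settings.
        kafka = provisionWith(defaultClusterConfig());
        kafka.start();
    }

    @AfterEach
    void tearDownKafka() {
        kafka.stop();
    }

    @Test
    void shouldRoundTripRecords() throws Exception {
        // Publish three non-keyed values ...
        kafka.send(SendValues.to("my.test.topic", "a", "b", "c"));
        // ... and fail the test unless three values show up before the default timeout elapses.
        kafka.observeValues(ObserveKeyValues.on("my.test.topic", 3));
    }
}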
Kafka for JUnit allows you to write integration tests that work
against an embedded Kafka cluster.
[Diagram: a JUnit Jupiter integration test provisions and instruments an embedded Apache Kafka cluster, while the Kafka-based component under test (the consumer/producer of the verticals) interacts with the topic my.test.topic.]
Kafka for JUnit allows you to write system tests that work against
an external Kafka cluster.
[Diagram: a JUnit Jupiter system test triggers a use case on Service A (event producer), which publishes events to the topic topic.for.events on an external Apache Kafka cluster; Service B (event consumer) consumes these events, while the system test observes the topic for the expected records.]
Kafka for JUnit provides abstractions for interacting with a Kafka cluster
through EmbeddedKafkaCluster and ExternalKafkaCluster.
[Diagram: EmbeddedKafkaCluster and ExternalKafkaCluster both front an Apache Kafka cluster and expose a RecordProducer, a RecordConsumer, and a TopicManager for topics such as my.test.topic; EmbeddedKafkaCluster additionally offers fault injection.]
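As a quick orientation, the two entry points can be sketched as follows; the host and port of the external cluster are placeholder assumptions.

// Embedded cluster: provisioned and started by the test itself
// (suitable for integration tests).
EmbeddedKafkaCluster embedded = EmbeddedKafkaCluster.provisionWith(
    EmbeddedKafkaClusterConfig.defaultClusterConfig());
embedded.start();

// External cluster: attaches to an already running broker
// (suitable for system tests).
ExternalKafkaCluster external = ExternalKafkaCluster.at("http://localhost:9092");

// Both expose the same accessors, so sending, reading, observing, and
// topic management read identically against either kind of cluster.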
A RecordProducer provides the means to send key-value pairs
or non-keyed values to a Kafka topic.
public interface RecordProducer {

    <V> List<RecordMetadata> send(SendValues<V> sendRequest) throws ...;
    <V> List<RecordMetadata> send(SendValuesTransactional<V> sendRequest) throws ...;
    <K, V> List<RecordMetadata> send(SendKeyValues<K, V> sendRequest) throws ...;
    <K, V> List<RecordMetadata> send(SendKeyValuesTransactional<K, V> sendRequest) ...;

    /* overloaded methods that accept builder instances
     * for the resp. type have been omitted for brevity
     */
}
Kafka for JUnit provides builders for these requests!
Interface definition of RecordProducer
Publishing data to a Kafka topic is as simple as contributing a
one-liner in the default case.
kafka.send(SendValues.to("my.test.topic", "a", "b", "c"));
Sending non-keyed values using defaults

kafka.send(SendValuesTransactional.inTransaction(
    "my.test.topic",
    Arrays.asList("a", "b", "c")));
Sending non-keyed values transactionally

kafka.send(SendValues.to("my.test.topic", "a", "b", "c")
    .with(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
    .with(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1"));
Sending non-keyed values using overrides
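Keyed records work the same way through SendKeyValues; a short sketch in which topic, keys, and values are illustrative:

// KeyValue is Kafka for JUnit's holder for a single key-value pair.
List<KeyValue<String, String>> records = Arrays.asList(
    new KeyValue<>("aggregate-1", "a"),
    new KeyValue<>("aggregate-2", "b"));
kafka.send(SendKeyValues.to("my.test.topic", records));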
A RecordConsumer provides the means to read data from a topic or to observe a topic until certain criteria are met or a timeout elapses.
public interface RecordConsumer {

    <V> List<V> readValues(ReadKeyValues<String, V> readRequest) throws ...;
    <K, V> List<KeyValue<K, V>> read(ReadKeyValues<K, V> readRequest) throws ...;
    <V> List<V> observeValues(ObserveKeyValues<String, V> observeRequest) throws ...;
    <K, V> List<KeyValue<K, V>> observe(ObserveKeyValues<K, V> observeRequest) throws ...;

    /* overloaded methods that accept builder instances
     * for the resp. type have been omitted for brevity
     */
}
Kafka for JUnit provides builders for these requests!
Interface definition of RecordConsumer
Consuming records is just as easy as producing them.
val values = kafka.readValues(ReadKeyValues.from("my.test.topic"));
Consuming only values using defaults

val records = kafka.read(ReadKeyValues.from("my.test.topic"));
Consuming key-value pairs using defaults

List<KeyValue<String, Long>> records = kafka.read(ReadKeyValues
    .from("my.test.topic", Long.class)
    .with(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ...));
Consuming key-value pairs using overrides
Observations can be used to let a test fail unless given criteria
are met.
kafka.observeValues(ObserveKeyValues.on("my.test.topic", 3));
Observing a topic until n values have been consumed

List<KeyValue<String, String>> observedRecords = kafka
    .observe(ObserveKeyValues.on("my.test.topic", 3));
Observing a topic until n records have been consumed

Predicate<String> keyFilter = k -> Integer.parseInt(k) % 2 == 0;
kafka.observe(ObserveKeyValues.on("my.test.topic", 3, Integer.class)
    .with(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ...)
    .filterOnKeys(keyFilter));
Using filters when consuming or observing a topic
A TopicManager provides the means to manage Kafka
topics.
public interface TopicManager {

    void createTopic(TopicConfig config);
    void deleteTopic(String topic);
    boolean exists(String topic);
    Map<Integer, LeaderAndIsr> fetchLeaderAndIsr(String topic);
    Properties fetchTopicConfig(String topic);

    /* overloaded methods that accept builder instances
     * for the resp. type have been omitted for brevity
     */
}
Kafka for JUnit provides builders for these requests!
Interface definition of TopicManager
The default TopicManager implementation leverages the
AdminClient implementation of the Kafka Client library.
kafka.createTopic(TopicConfig.withName("my.test.topic"));
Creating a topic using defaults

kafka.deleteTopic("my.test.topic");
Deleting a topic

kafka.createTopic(TopicConfig.withName("my.test.topic")
    .withNumberOfPartitions(5)
    .withNumberOfReplicas(1));
Creating a topic with specific properties
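Beyond creating and deleting topics, the interface sketched above also exposes existence and configuration lookups. A small sketch; the asserted property and its expected value are illustrative:

// Guard against asserting on a topic that was never created.
if (kafka.exists("my.test.topic")) {
    // Fetches the effective topic configuration as java.util.Properties.
    Properties topicConfig = kafka.fetchTopicConfig("my.test.topic");
    assertThat(topicConfig.getProperty("cleanup.policy")).isEqualTo("delete");
}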
Let’s write a test that exercises the write path of a Kafka-based component.
[Diagram: TurbineEventPublisherTest (JUnit Jupiter) provisions and instruments an embedded Apache Kafka cluster; the TurbineEventPublisher of the turbine registry interacts with the topic turbine.lifecycle.events.]
Let’s write a simple test that verifies that TurbineEventPublisher
is able to write events to the designated topic.
public class TurbineEventPublisherTest {

    private EmbeddedKafkaCluster kafka;

    @BeforeEach
    void setupKafka() {
        kafka = provisionWith(defaultClusterConfig());
        kafka.start();
    }

    @AfterEach
    void tearDownKafka() {
        kafka.stop();
    }
}
Provide the skeleton for the component test, incl. a workable Kafka cluster.
provisionWith is a static import from EmbeddedKafkaCluster; defaultClusterConfig is a static import from EmbeddedKafkaClusterConfig.
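Spelled out, those static imports look like this, assuming the net.mguenther.kafka.junit package under which the library publishes its classes:

import static net.mguenther.kafka.junit.EmbeddedKafkaCluster.provisionWith;
import static net.mguenther.kafka.junit.EmbeddedKafkaClusterConfig.defaultClusterConfig;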
With the embedded cluster in place, create an instance of the subject under test and publish some test data.
var config = Map.<String, Object>of(
    ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBrokerList());
var publisher = new TurbineEventPublisher("turbine.lifecycle.events", config);
var event = new TurbineRegisteredEvent("1a5c6012", 49.875114, 8.978702);
publisher.log(event);
Create an instance of the subject-under-test and publish test data
The observe method throws an AssertionError once a configurable amount of time has elapsed.
kafka.observe(ObserveKeyValues
    .on("turbine.lifecycle.events", 1, TurbineEvent.class)
    .observeFor(15, TimeUnit.SECONDS)
    .filterOnKeys(key -> key.equals("1a5c6012"))
    .with(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        TurbineEventDeserializer.class));
Observe the designated topic for new data
Callouts: on(...) names the topic we want to observe, the number of records we expect, and the value type of the expected records; with(...) overrides arbitrary consumer properties; filterOnKeys(...) uses filters to add observation criteria.
The observe method returns all records that it obtained from
watching the topic.
var record = kafka.observe(
        on("turbine.lifecycle.events", 1, TurbineEvent.class)
            .with(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                TurbineEventDeserializer.class))
    .stream()
    .findFirst()
    .orElseThrow(AssertionError::new);

assertThat(record.getKey()).isEqualTo("1a5c6012");
assertThat(record.getValue()).isInstanceOf(TurbineRegisteredEvent.class);
Fetch observed records and assert that their data is what you expect it to be
We’ll use a simple client interface to provide the means to exert an external stimulus on the system.
public interface GettingThingsDone {

    @RequestLine("POST /items")
    @Headers("Content-Type: application/json")
    Response createItem(CreateItem payload);

    @RequestLine("GET /items/{itemId}")
    @Headers("Accept: application/json")
    Item getItem(@Param("itemId") String itemId);

    /* additional methods omitted for brevity */
}
Example for Feign-based HTTP client that interacts with the system
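The createGettingThingsDoneClient() helper used on the next slide is not shown in the deck; one possible sketch uses Feign's builder, with the encoder, decoder, and base URL as assumptions:

// Builds a Feign proxy for the GettingThingsDone interface. The JSON codec
// and the service address are assumptions, not prescribed by the deck.
static GettingThingsDone createGettingThingsDoneClient() {
    return Feign.builder()
        .encoder(new JacksonEncoder())
        .decoder(new JacksonDecoder())
        .target(GettingThingsDone.class, "http://localhost:8080");
}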
We’ll use a simple client interface to provide the means to exert an
external stimulus on the system. (cont.)
val kafka = ExternalKafkaCluster.at("http://localhost:9092");
Gain programmatic access to the cluster

val gtd = createGettingThingsDoneClient();
val payload = new CreateItem("Buy groceries!");
val response = gtd.createItem(payload);
Trigger a use case using the client interface

val itemId = extractItemId(response);
Extract the ID of the newly created item from the response
Leverage Kafka for JUnit to observe the designated topic and apply
assertions on the returned records.
List<AvroItemEvent> publishedEvents = kafka
    .observeValues(on("item.events", 1, AvroItemEvent.class)
        .observeFor(10, TimeUnit.SECONDS)
        .with(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            ItemEventDeserializer.class)
        .filterOnKeys(aggregateId -> aggregateId.equals(itemId)));
Observe the designated topic for the expected records (observe throws an AssertionError if the timeout elapses)
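Since the returned list contains only records that passed the key filter, the follow-up assertions stay compact. A sketch; the accessor on AvroItemEvent is a hypothetical stand-in for the deck's actual event schema:

// Exactly one event for the newly created item should have been published.
assertThat(publishedEvents).hasSize(1);
// getItemId() is a hypothetical accessor on AvroItemEvent.
assertThat(publishedEvents.get(0).getItemId().toString()).isEqualTo(itemId);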
Want to know more?

Blog
▪ Günther, M., Writing component tests for Kafka producers, https://bit.ly/39NpoCU
▪ Günther, M., Writing component tests for Kafka consumers, https://bit.ly/36KrXoV
▪ Günther, M., Writing system tests for a Kafka-enabled microservice, https://bit.ly/2OUeEMs
▪ Günther, M., Using Kafka for JUnit with Spring Kafka, https://bit.ly/3c61WSx

GitHub
▪ Kafka for JUnit on GitHub, https://github.com/mguenther/kafka-junit
▪ User Guide to Kafka for JUnit, https://mguenther.github.io/kafka-junit/