
Millions of data records are generated every single day in today's computing systems.


These include financial transactions, online orders, and data from your car's sensors.

Apache Kafka is an open-source data-streaming solution that handles over 1 million records per second.

Alongside this high throughput, Apache Kafka provides high scalability and availability, low latency, and permanent storage.


Companies like LinkedIn, Uber, and Netflix rely on Apache Kafka for real-time processing and data streaming.

The downloaded content will be compressed in .tgz format.

Once downloaded, you'll have to extract it.


If you are on Linux, open your terminal.

Next, navigate to the location where you have downloaded the Apache Kafka compressed version.

Then run the following command:
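Assuming the archive you downloaded is kafka_2.13-3.5.0.tgz (the version referenced in this guide), the extraction command looks like this:

```sh
# Extract the compressed Kafka archive
# (adjust the file name to the version you actually downloaded)
tar -xzf kafka_2.13-3.5.0.tgz
```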

After the command completes, you'll find a new directory called kafka_2.13-3.5.0.


Windows users can follow the same steps.

If you are unable to find the tar command, you may use a third-party tool like WinZip to extract the archive.

Apache Kafka does not have an installer.


You can start using it directly via your command line or terminal window.

Before starting with Apache Kafka, ensure you have Java 8+ installed on your system.

Apache Kafka requires a running Java installation.


#1. Start the Apache Zookeeper server

The first step is running Apache Zookeeper.

It comes bundled as part of the downloaded archive.

It's a service that is responsible for maintaining configurations and providing synchronization for other services.
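With the extracted directory as your working directory, Zookeeper can be started using the script bundled in the archive; the Kafka broker itself is then started with a similar script in a second terminal:

```sh
# From inside the extracted kafka_2.13-3.5.0 directory:
# start Zookeeper with its default configuration
bin/zookeeper-server-start.sh config/zookeeper.properties

# then, in a second terminal window, start the Kafka broker
bin/kafka-server-start.sh config/server.properties
```

Both scripts read their settings from the bundled properties files, which you can edit to change ports, data directories, and other values.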


The different values are present in the official documentation.

What are the steps to create a topic in Apache Kafka?

Before you create your first topic, let's understand what a topic actually is.

In Apache Kafka, a topic is a logical data store that helps in data streaming.

Think of it as the channel through which data is transported from one component to the other.

A topic supports multiple producers and multiple consumers: more than one system can write to and read from a topic.

Unlike other messaging systems, any message from a topic can be consumed more than once.

Additionally, you can specify a retention period for your messages.

Let's take the example of a system (producer) that produces data for bank transactions.

Another system (consumer) then consumes this data and sends an app notification to the user.

To facilitate this, a topic is required.
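Using the kafka-topics.sh script bundled with Kafka, a topic for this example can be created as follows; the topic name transactions is illustrative, and localhost:9092 assumes a local server running with default settings:

```sh
# Create a topic named "transactions" on the local Kafka server
bin/kafka-topics.sh --create --topic transactions --bootstrap-server localhost:9092
```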

How to produce a message to Apache Kafka?

With your Apache Kafka topic ready, you can now produce your first message.

Next, ensure that you're in the directory where you extracted the contents of the archive.
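Assuming you created a topic named transactions (the name is illustrative), you can start the console producer bundled with Kafka like this; every line you type becomes a message:

```sh
# Start an interactive console producer for the "transactions" topic
bin/kafka-console-producer.sh --topic transactions --bootstrap-server localhost:9092
```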

Write your first message and hit Enter.

You've produced your first message to Apache Kafka on your local machine.

You're now ready to consume this message.

How to consume a message from Apache Kafka?

Apache Kafka allows you to attach multiple consumers to the same topic.

Each consumer can be part of a consumer group, a logical identifier.

For example, if two consumer instances together serve the same application, both of them will have the same consumer group.

In the terminal or command prompt window, ensure you're in the proper directory.
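You can then start the console consumer bundled with Kafka; the topic name transactions is illustrative, and the --from-beginning flag makes it read messages produced before it started:

```sh
# Consume messages on the "transactions" topic, starting from the beginning
bin/kafka-console-consumer.sh --topic transactions --from-beginning --bootstrap-server localhost:9092
```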

You've now used Apache Kafka to consume your first message.

The kafka-console-consumer command accepts a number of arguments.

Try producing more messages; you'll see that all of them are consumed and show up in your terminal.

Apache Kafka provides its own client library that allows you to connect seamlessly.
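If you use Maven, the client library can be added with a dependency along these lines; version 3.5.0 matches the archive used in this guide:

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.5.0</version>
</dependency>
```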

Once your library is in place, open up a code editor of your choice.

Let's see how you can start your producer and consumer using Java.

Let's create a class called SimpleProducer.java.

This will be responsible for producing messages on the topic that you created earlier.

Inside this class, you'll create an instance of org.apache.kafka.clients.producer.KafkaProducer.

Subsequently, you'll use this producer to send your messages.

To create the Kafka producer, you require the host and port of your Apache Kafka server.

Since you're running it on your local machine, the host will be localhost.

Given that you've not changed the default properties when starting the server, the port will be 9092.
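A minimal sketch of what SimpleProducer.java might look like; the topic name transactions and the exact class layout are assumptions for illustration, not the article's original code:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {

    private final KafkaProducer<String, String> producer;
    private final String topic;

    public SimpleProducer(String host, String port, String topic) {
        Properties props = new Properties();
        // Address of the local Kafka server started earlier.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, host + ":" + port);
        // Both keys and values are sent as plain strings.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    public void send(String message) {
        // Fire-and-forget send of a text message to the topic.
        producer.send(new ProducerRecord<>(topic, message));
    }

    public void close() {
        producer.close();
    }

    public static void main(String[] args) {
        // "transactions" is an illustrative topic name -- use the one you created.
        SimpleProducer producer = new SimpleProducer("localhost", "9092", "transactions");
        producer.send("Hello from SimpleProducer!");
        producer.close();
    }
}
```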

With the entire code in place, you can now send messages to your topic.

It internally uses the KafkaProducer to produce text messages on your topic.

Create Apache Kafka Java consumer

It's time to create an Apache Kafka consumer using the Java client.

Create a class called SimpleConsumer.java.

Next, you'll create a constructor for this class, which initializes the org.apache.kafka.clients.consumer.KafkaConsumer.

To create the consumer, you require the host and port where the Apache Kafka server runs.

Additionally, you require the Consumer Group as well as the topic you want to consume from.

You'll now be consuming the messages from your topic.

When you receive a ConsumerRecord, its message will be printed.

Test out your consumer in action using a main method.
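A minimal sketch of what SimpleConsumer.java might look like; the consumer group notification-service and the topic name transactions are illustrative assumptions matching the earlier bank-transaction example:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {

    private final KafkaConsumer<String, String> consumer;

    public SimpleConsumer(String host, String port, String groupId, String topic) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, host + ":" + port);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // "earliest" makes the consumer also read messages produced before it started.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        this.consumer = new KafkaConsumer<>(props);
        this.consumer.subscribe(Collections.singletonList(topic));
    }

    public void consume() {
        // Poll in a loop and print every record that arrives.
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Received: " + record.value());
            }
        }
    }

    public static void main(String[] args) {
        new SimpleConsumer("localhost", "9092", "notification-service", "transactions").consume();
    }
}
```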

You'll start a Java utility that will keep consuming the topic and printing the messages.

Stop the Java program to terminate the consumer.

You'll notice that even messages produced before the consumer started are received; this is because the AUTO_OFFSET_RESET_CONFIG property has been set to earliest.

Produce more messages while the consumer is running, and you'll see them being consumed and printed on the console.

With Apache Kafka set up on your local machine, you can explore all the different features that Kafka provides.

Just as it's easy to set up locally, setting up Apache Kafka for bigger applications is no big task.