gRPC vs Kafka | Software.Land

gRPC vs Kafka

Overview

When comparing gRPC and Kafka, we’re likely comparing gRPC’s streaming features rather than its unary RPC, which takes a synchronous or asynchronous Request/Response form of communication.

The broad strokes answer when deciding between Kafka and gRPC Streaming:

  • If latency is the most important factor, choose gRPC Streaming: gRPC Streaming doesn’t require anything like Kafka’s intermediary message broker between a client and server. Its streams are direct between client and server.
  • If reliability out-of-the-box is the most important factor, choose Kafka: gRPC Streaming doesn’t provide application-level guaranteed delivery. In contrast, Kafka not only guarantees message delivery at the application level, but also persists messages on its distributed message broker for a default of 7 days (the broker’s log.retention.hours defaults to 168).

For more detail, continue reading:

Table of Contents

Comparison
gRPC Streaming
Kafka
Conclusion

Comparison

Use Case
  • gRPC Streaming: Direct communication between services; less resource overhead; high technical aptitude required to handle all edge cases.
  • Kafka: Indirect communication between services; more resource overhead; relatively less technical aptitude necessary to handle all edge cases.

Latency
  • gRPC Streaming: Low latency due to direct communication without intermediary hops (single-digit millisecond).
  • Kafka: Generally low latency (single-digit millisecond), but can increase if producers produce faster than consumers can consume.

Throughput
  • gRPC Streaming: Entirely dependent on the surrounding infrastructure.
  • Kafka: Extremely high.

Guaranteed Delivery
  • gRPC Streaming: Application-level guarantees need to be implemented despite TCP-level guarantees.
  • Kafka: Provides a mechanism for consumers to commit the offset of the partition they're consuming after processing a batch.

Scalability
  • gRPC Streaming: The per-language client libraries provide their own client-side load balancing; server-side load balancing requires additional infrastructure (e.g., a load balancer).
  • Kafka: Configurable, scalable, and persistent topic partitions enable highly variable throughput on both the producer and consumer sides.

Table template created by ChatGPT of OpenAI.

Note: Learn more about Throughput vs Latency.

gRPC Streaming

gRPC is touched on in gRPC vs REST. When comparing it to REST, we compare its unary RPC feature. However, when comparing it to Kafka, we compare its streaming features. There are three different streaming modes:

  • Server Streaming RPC: Server sends a stream of messages in response to a client’s request.
  • Client Streaming RPC: Client sends a stream of messages to the server.
  • Bidirectional Streaming RPC: Call is initiated by the client that opens a two-way stream where both sides can send messages independently.
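The three interaction shapes can be illustrated with plain Python generators. This is a toy sketch, not real gRPC — a real service would use stubs generated from a .proto file, and all names below are invented for illustration:

```python
from typing import Iterator

# Toy stand-ins for gRPC's three streaming modes, modeled with generators.

def server_streaming(request: str) -> Iterator[str]:
    """Server Streaming RPC: one request in, a stream of responses out."""
    for i in range(3):
        yield f"{request}-chunk-{i}"

def client_streaming(requests: Iterator[str]) -> str:
    """Client Streaming RPC: a stream of requests in, one response out."""
    return f"received {sum(1 for _ in requests)} messages"

def bidirectional_streaming(requests: Iterator[str]) -> Iterator[str]:
    """Bidirectional Streaming RPC: both sides send messages independently."""
    for req in requests:
        yield f"echo:{req}"

print(list(server_streaming("ping")))
print(client_streaming(iter(["a", "b", "c"])))
print(list(bidirectional_streaming(iter(["x", "y"]))))
```

The generator shapes mirror the message flow: one-to-many, many-to-one, and many-to-many.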

Other than the differences mentioned in the Overview, an important takeaway is that more work is required to build a high-scale gRPC stream. If a use case’s throughput is highly variable, building a solution that scales elastically and does not drop messages is a challenge. Kafka solves this problem with its intermediary broker (message queues).
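To see why variable throughput is hard without an intermediary, consider a minimal sketch (invented for illustration) of a direct stream whose receiver has fallen behind: with only a small in-memory buffer and no durable broker, the sender must either block or drop messages once the buffer fills — exactly the edge case Kafka absorbs by persisting to disk:

```python
from collections import deque

def produce_directly(messages, buffer_size: int):
    """Push messages into a bounded in-memory buffer, dropping on overflow.

    Stands in for a direct stream whose consumer has fallen behind: with no
    durable intermediary, excess messages are dropped (or the sender must
    block, increasing latency).
    """
    buffer = deque()
    dropped = []
    for msg in messages:
        if len(buffer) >= buffer_size:
            dropped.append(msg)  # no broker to persist the overflow
        else:
            buffer.append(msg)
    return list(buffer), dropped

delivered, dropped = produce_directly(range(10), buffer_size=4)
print(delivered)  # the first 4 messages fit in the buffer
print(dropped)    # the rest are lost without durable storage
```

A broker-backed design would instead append the overflow to a persistent log, letting the consumer catch up later.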

Kafka

Note that Kafka and Kafka Streams operate within the same infrastructure framework, but they serve different purposes. Kafka is a distributed streaming platform that enables you to reliably stream messages from producers to consumers via an intermediary message broker in a highly fault-tolerant manner. Kafka Streams uses the same Kafka infrastructure, but provides a higher-level, more user-friendly client library that handles many technical details under the hood.

Other than the differences mentioned in the Overview, Kafka excels where gRPC Streaming falters — at very high scale where throughput is highly variable. Kafka’s message broker (message queues) stores messages for days, so if consumers fail, they can resume from their last committed offset once they’re back up.
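Kafka's consumer-side guarantee can be sketched with a toy partition log (all names here are invented; real Kafka clients expose this behavior through their poll/commit APIs): the consumer commits an offset only after processing a batch, so a crash before the commit means the same batch is redelivered on restart — at-least-once delivery rather than message loss:

```python
def consume(log, committed_offset: int, batch_size: int,
            crash_before_commit: bool = False):
    """Process one batch from a partition log, committing the offset after.

    Returns (processed_messages, new_committed_offset). If the consumer
    crashes before committing, the offset is unchanged and the same batch
    is redelivered on restart (at-least-once semantics).
    """
    batch = log[committed_offset:committed_offset + batch_size]
    processed = [msg.upper() for msg in batch]  # stand-in for real work
    if crash_before_commit:
        return processed, committed_offset      # work done, commit lost
    return processed, committed_offset + len(batch)

log = ["a", "b", "c", "d"]

# Crash after processing but before committing: the offset stays at 0...
_, offset = consume(log, committed_offset=0, batch_size=2,
                    crash_before_commit=True)
# ...so the restarted consumer re-reads and re-processes the same batch.
replayed, offset = consume(log, committed_offset=offset, batch_size=2)
print(replayed, offset)
```

This is why gRPC Streaming needs application-level delivery logic to match what Kafka's committed offsets provide out of the box.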

Conclusion

It’s unusual to introduce gRPC Streaming into a distributed system where Kafka (or another message broker) is already widely used, unless a use case arises where low latency is of utmost importance (such as video/audio chat).


Author

Sam Malayek

Sam Malayek works in Vancouver, using this space to fill in a few gaps. Opinions are his own.