gRPC vs Kafka
February 16, 2024
Overview
When comparing gRPC and Kafka, we're really comparing Kafka against gRPC's streaming features. gRPC's unary RPC follows a Request/Response architectural paradigm, whether invoked synchronously or asynchronously. Kafka and gRPC Streaming, by contrast, are both examples of Event-Driven Architecture. A Request/Response journey typically carries an expectation of an immediate response, whereas an Event-Driven journey is typically send-and-forget (with journey durations anywhere from milliseconds to days in distributed systems). Acknowledgements of a triggered Event-Driven journey are common, as are follow-up event responses.
The broad-strokes answer when deciding between Kafka and gRPC Streaming:
- If latency is the most important factor, choose gRPC Streaming: its streams run directly between client and server, with no intermediary like Kafka's message broker in between.
- If out-of-the-box reliability is the most important factor, choose Kafka: gRPC Streaming doesn't provide application-level guaranteed delivery. In contrast, Kafka not only guarantees message delivery at the application level, but also persists messages on its distributed message broker for a configurable retention period (7 days by default), which can be overridden per topic as sketched below.
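As a small illustration of that second point, here's a minimal sketch (using kafka-python's admin client) of creating a topic with a longer-than-default retention period. The broker address, topic name, partition count, and 30-day retention value are illustrative assumptions, not recommendations:

```python
# Hypothetical example: create an "orders" topic that keeps messages for
# 30 days instead of the broker's default retention.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(
        name="orders",
        num_partitions=6,
        replication_factor=3,
        topic_configs={"retention.ms": str(30 * 24 * 60 * 60 * 1000)},  # 30 days
    )
])
```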
For more detail, continue reading:
Table of Contents
- Comparison
- gRPC Streaming
- Kafka
- Conclusion
Comparison
|  | gRPC Streaming | Kafka |
|---|---|---|
| Use Case | Direct communication between services. Less resource overhead. Requires high technical aptitude to handle all edge cases. | Indirect communication between services. More resource overhead. Requires relatively less technical aptitude to handle edge cases. |
| Latency | Low latency due to direct communication without intermediary hops (single-digit milliseconds). | Generally low latency (single-digit milliseconds), but can increase if producers produce faster than consumers can consume. |
| Throughput | Entirely dependent on the surrounding infrastructure. | Extremely high. |
| Guaranteed Delivery | Application-level delivery guarantees must be implemented on top of TCP's transport-level guarantees. | Consumers commit the offset of the partition they're consuming after processing a batch, so unprocessed messages are re-read after a failure. |
| Scalability | The per-language gRPC client libraries provide client-side load balancing; server-side load balancing requires additional infrastructure (e.g. a load balancer). | Topic partitions make the message broker configurable, scalable, and persistent, allowing producers and consumers to scale independently under highly variable throughput. |
Table template created by ChatGPT of OpenAI.
Note: Learn more about Throughput vs Latency.
gRPC Streaming
gRPC is touched on in gRPC vs REST. When comparing it to REST, we compare its unary RPC feature. However, when comparing it to Kafka, we compare its streaming features. There are three different streaming modes:
- Server Streaming RPC: The server sends a stream of messages in response to a single client request.
- Client Streaming RPC: The client sends a stream of messages to the server, which replies with a single response.
- Bidirectional Streaming RPC: The client initiates a call that opens a two-way stream in which both sides can send messages independently.
Other than the differences mentioned in the Overview, an important takeaway is that more work is required to build a high-scale gRPC stream. If a use case's throughput is highly variable, building a solution that scales elastically and does not drop messages is a challenge. Kafka has this problem solved with its intermediary broker (message queues).
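To make the first of those modes concrete, here's a minimal server-streaming sketch in Python using grpcio. It assumes a hypothetical metrics.proto defining a MetricsFeed service with `rpc Subscribe(MetricRequest) returns (stream MetricSample)`, already compiled with grpcio-tools into metrics_pb2 / metrics_pb2_grpc; the names and values are illustrative only:

```python
import time
from concurrent import futures

import grpc
import metrics_pb2       # hypothetical generated message module
import metrics_pb2_grpc  # hypothetical generated service module


class MetricsFeedServicer(metrics_pb2_grpc.MetricsFeedServicer):
    def Subscribe(self, request, context):
        # Server streaming: one request in, a stream of messages out.
        while context.is_active():
            yield metrics_pb2.MetricSample(name=request.name, value=read_sensor())
            time.sleep(1.0)


def read_sensor() -> float:
    return 42.0  # placeholder for a real measurement


def serve() -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    metrics_pb2_grpc.add_MetricsFeedServicer_to_server(MetricsFeedServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


def consume() -> None:
    # Client side: messages stream directly from the server, no broker in between.
    channel = grpc.insecure_channel("localhost:50051")
    stub = metrics_pb2_grpc.MetricsFeedStub(channel)
    for sample in stub.Subscribe(metrics_pb2.MetricRequest(name="cpu")):
        print(sample.name, sample.value)


if __name__ == "__main__":
    serve()
```

Note that if the server goes down or the client can't keep up, nothing buffers these messages for later; that's the gap Kafka's broker fills.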
Kafka
Note that Kafka and Kafka Streams operate within the same infrastructure, but they serve different purposes. Kafka is a distributed streaming platform that reliably streams messages from producers to consumers via an intermediary message broker in a highly fault-tolerant manner. Kafka Streams uses the same Kafka infrastructure, but provides a higher-level, more user-friendly client library that handles many technical details under the hood.
Other than the differences mentioned in the Overview, Kafka excels where gRPC Streaming falters: at very high scale where throughput is highly variable. Kafka's message broker (its partitioned message queues) retains messages for days, so if consumers fall behind or go down, they can pick up where they left off once they're healthy again.
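As a minimal sketch of that flow, here's a producer and a consumer using kafka-python. The broker address, topic name ("orders"), consumer group ("billing"), and handler are illustrative assumptions; other Kafka client libraries follow the same pattern:

```python
from kafka import KafkaConsumer, KafkaProducer


def handle_order(payload: bytes) -> None:
    print("processing", payload)  # placeholder for real business logic


# Producer: send-and-forget from the application's point of view; flush()
# blocks until the broker has acknowledged the buffered messages.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"order_id": 123}')
producer.flush()

# Consumer: commit offsets only after processing, so a consumer that crashes
# resumes from the last committed offset instead of losing messages.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    enable_auto_commit=False,
    auto_offset_reset="earliest",
)
for record in consumer:
    handle_order(record.value)
    consumer.commit()
```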
Conclusion
It's unusual to introduce gRPC Streaming into a distributed system where Kafka (or another message broker) is already widely used, unless a use case arises where low latency is of utmost importance (such as video/audio chat). On the other hand, gRPC Streaming is the better option if most of the following are true:
- The technical aptitude exists.
- The feature's scale is not high.
- The feature's traffic is not highly variable.
- The overhead of introducing Kafka is too high.
Updated: 2024-06-16