Why you cannot just use gRPC in your production environment
In our Product Factory Team at itdesign, we love to experiment and learn something new every day. Recently, we started using gRPC in a Proof of Concept to communicate between two of our Meisterplan microservices.
Why gRPC
All of our Meisterplan developers love types and validation. We have been using Protocol Buffers to send messages for some time now and have learned to value the code generation and validation that come with it. For communication between services, we currently use mostly REST and some GraphQL APIs. With REST we run into multiple issues, since we have to manually document each usage to avoid breaking anything. GraphQL, on the other hand, provides validation but comes with some overhead. Additionally, because we use Apollo Federation for all of our services, it might expose endpoints that should not be visible from the outside. gRPC provides validation using the Protocol Buffers format we already know well. It allows us to verify not only that the exact endpoint/service exists, but also the communication format for that service.
gRPC is a remote procedure call framework. It uses HTTP/2 for transport, and its services are described with Protocol Buffers. gRPC was initially developed by Google, open-sourced in 2015, and is now part of the Cloud Native Computing Foundation. It has been adopted by many large organizations such as Netflix, Spotify, and Dropbox. The protocol and the binary transport format allow for high-performing applications: in particular, sending and parsing data is very efficient compared to the usual JSON serialization and parsing.
Try it out for yourself
We write our microservices in Kotlin using Spring Boot. There are two viable libraries, logNet/grpc-spring-boot-starter and yidongnan/grpc-spring-boot-starter, which provide good Spring Boot support and are regularly updated. Our preferred library is the latter, since it is well documented and provides nice support for writing tests. Both are built on the stable grpc-java library. There is also a grpc-kotlin library, which recently had its 1.0 release and adds coroutine support. Both starters will start up a wrapped Netty server; since gRPC uses its own transport, it needs an additional port to run on. An example implementation of a server and a client Spring Boot application can be found on GitHub, and a minimal sketch of the idea is shown below.
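As a rough sketch of what this looks like with the yidongnan starter: the classes GreeterGrpc, HelloRequest, and HelloReply are assumed to be generated from gRPC's canonical helloworld.proto example, and the port and client name are placeholder values, not anything prescribed by the library.

```kotlin
// Server side: exposes a gRPC service on the extra gRPC port
// (configured e.g. with grpc.server.port=9090 in application.properties).
import io.grpc.stub.StreamObserver
import net.devh.boot.grpc.server.service.GrpcService

@GrpcService
class GreeterService : GreeterGrpc.GreeterImplBase() {
    override fun sayHello(request: HelloRequest, responseObserver: StreamObserver<HelloReply>) {
        val reply = HelloReply.newBuilder()
            .setMessage("Hello ${request.name}")
            .build()
        responseObserver.onNext(reply)
        responseObserver.onCompleted()
    }
}
```

```kotlin
// Client side: injects a stub for the channel named "greeter", configured e.g. with
//   grpc.client.greeter.address=static://localhost:9090
//   grpc.client.greeter.negotiationType=plaintext   (for local testing without TLS)
import net.devh.boot.grpc.client.inject.GrpcClient
import org.springframework.stereotype.Service

@Service
class GreeterClient {
    @GrpcClient("greeter")
    private lateinit var greeterStub: GreeterGrpc.GreeterBlockingStub

    fun greet(name: String): String =
        greeterStub.sayHello(HelloRequest.newBuilder().setName(name).build()).message
}
```

Note that the client stub keeps a single long-lived channel open to the server, which is exactly the connection behavior that matters for the load balancing discussion in the next section.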
Infrastructure prerequisites
There is one important way in which gRPC differs from typical HTTP/1.1 APIs: because it uses HTTP/2, connections are sticky and long-lived. This removes a lot of overhead, since the handshake does not have to be repeated for every request. On the other hand, it makes load balancing very tricky: the default Kubernetes load balancing works per connection, so once a connection is established, horizontal scaling does not route traffic to the new servers. There are multiple blog posts that explain this in detail. If you rely on the default Kubernetes load balancing, you should consider using Linkerd as a proxy, since it balances individual requests rather than connections. Another idea is to have the clients reconnect periodically. Otherwise, if the clients are not under your control, a possibility is to forcefully disconnect them from the server side periodically, although this partially defeats the benefit of reusable connections; a minimal server-side sketch is shown below. Additional, more complex variants are described here.
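To illustrate the server-side variant, here is a minimal sketch assuming the yidongnan starter with its default shaded Netty transport; the 30-second age is an arbitrary example value, not a recommendation. Limiting the connection age makes clients reconnect regularly, and each reconnect gives the load balancer a chance to route them to a new pod:

```kotlin
import io.grpc.netty.shaded.io.grpc.netty.NettyServerBuilder
import net.devh.boot.grpc.server.serverfactory.GrpcServerConfigurer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import java.util.concurrent.TimeUnit

@Configuration
class GrpcConnectionAgeConfig {

    // Ask clients to re-establish their connections after a while (via HTTP/2 GOAWAY),
    // so that newly started server instances eventually receive traffic as well.
    @Bean
    fun maxConnectionAgeConfigurer() = GrpcServerConfigurer { builder ->
        if (builder is NettyServerBuilder) {
            builder
                .maxConnectionAge(30, TimeUnit.SECONDS)      // example value: force periodic reconnects
                .maxConnectionAgeGrace(10, TimeUnit.SECONDS) // let in-flight calls finish gracefully
        }
    }
}
```

A mesh proxy such as Linkerd, or periodically recreating the channel on the client, achieves a similar effect without touching the server configuration.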
Final thoughts
When the time comes that the performance of communication between our microservices is factored in more, we will surely be using gRPC for all server-to-server communication. Currently, we are not there yet, and it is mostly the infrastructure prerequisites that keep us from being able “to just use gRPC in our production environment”.