Over the last 5 years gRPC has become the default choice for microservice ecosystems. REST still carries the majority of public APIs. Which one fits which situation?
Here’s what I learned across 4 projects: gRPC on microservice work, REST on monoliths and public APIs.
## What gRPC is, how it differs from REST
REST: JSON body over HTTP/1.1 or HTTP/2, endpoint equals URL. Everyone knows it.
gRPC: HTTP/2 plus Protocol Buffers (protobuf) binary serialization plus RPC-style method calls. Built by Google, now a CNCF project.
Summary of differences:
| Feature | gRPC | REST |
|---|---|---|
| Transport | HTTP/2 | HTTP/1.1 or /2 |
| Payload | Binary (protobuf) | JSON/XML (text) |
| Contract | .proto file | OpenAPI (optional) |
| Streaming | Native (4 modes) | Limited (SSE / WebSocket separately) |
| Browser support | Needs gRPC-Web | Native |
## Where gRPC shines
Service-to-service communication. For backend-to-backend traffic, gRPC wins by a wide margin: smaller payloads, serialization/deserialization that is often 5 to 10x faster, and HTTP/2 multiplexing that cuts the number of open connections. In a microservice environment, that performance gap turns into a real infrastructure-bill difference.
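To see where the payload savings come from, here's a toy comparison. This is not real protobuf (that would need the `grpcio`/`protobuf` packages); a fixed-width `struct` pack is a rough stand-in for "binary, no field names in the message", and the event fields are invented for illustration.

```python
import json
import struct

# A hypothetical order event; field names are made up for illustration.
event = {"user_id": 91234, "item_id": 55210, "quantity": 3, "price_cents": 1999}

# REST-style: JSON text payload. Field names travel inside every message.
json_payload = json.dumps(event).encode("utf-8")

# gRPC-style: binary payload. Real protobuf uses varints and numeric field
# tags; a fixed-width pack approximates "binary, no repeated field names".
binary_payload = struct.pack(
    "<IIHI",
    event["user_id"], event["item_id"], event["quantity"], event["price_cents"],
)

print(len(json_payload), len(binary_payload))  # binary is several times smaller
```

The ratio grows with message volume: at millions of calls per day, the name-free binary encoding is where much of the bandwidth and CPU saving lives.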
Contract-driven development. The .proto file becomes the contract. You generate client and server code from the same .proto. Type safety, versioning, and breaking-change detection are built in. The .proto itself doubles as API documentation.
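A minimal sketch of what such a contract might look like; the service, package, and field names here are invented for illustration:

```protobuf
syntax = "proto3";

package orders.v1;  // illustrative package name

// The contract: client and server stubs are generated from this one file.
service OrderService {
  rpc GetOrder(GetOrderRequest) returns (Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id = 1;
  int64 total_cents = 2;
  repeated string item_ids = 3;
}
```

Running `protoc` with the language plugin of your choice turns this into typed client and server code, so the contract and the implementation can't silently drift apart.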
Streaming use cases. gRPC defines four RPC shapes, three of them streaming:
- Unary (standard request/response)
- Server streaming (server keeps pushing)
- Client streaming (client keeps pushing)
- Bidirectional streaming
Chat apps, real-time dashboards, telemetry ingestion, ML model serving, any stream-heavy use case feels right at home in gRPC.
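In a .proto, the four shapes differ only by where the `stream` keyword appears. A hedged sketch with invented service and message names:

```protobuf
// Hypothetical service showing all four RPC shapes; names are illustrative.
service Telemetry {
  // Unary: one request, one response.
  rpc GetStatus(StatusRequest) returns (StatusReply);

  // Server streaming: one request, a stream of responses.
  rpc WatchMetrics(WatchRequest) returns (stream MetricPoint);

  // Client streaming: a stream of requests, one response.
  rpc UploadReadings(stream MetricPoint) returns (UploadSummary);

  // Bidirectional streaming: both sides stream independently.
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
```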
Polyglot team. If you’re writing services in 5 different languages (Go, Python, Node, Java, Rust), the .proto plus protobuf compilers spit out matching client/server code in every language. No more hand-rolled SDKs.
## Where REST shines
Public APIs. The world speaks REST. SDKs come in REST form, Postman tests REST easily, you can call it from a browser, developer onboarding is fast.
Browser-native support. Browsers can’t speak gRPC natively (browser APIs don’t expose the HTTP/2 trailers gRPC uses for status reporting). gRPC-Web via a proxy is an option, but that’s another infra layer. For a simple SPA, calling REST with fetch() is frictionless.
Debugging ergonomics. You test a REST request with cURL, headers are human-readable, the body is human-readable. With gRPC you’re looking at binary payloads, testing with grpcurl, wrestling with certs and metadata.
CDNs and caching. HTTP caching is native for REST, you can park it behind a CDN, ETag and If-None-Match negotiation come for free. gRPC sits outside that infrastructure.
Library ecosystem. WordPress plugins, mobile SDKs, third-party tool integrations, analytics tools all expect REST. Expose gRPC and your integration partners will complain.
## My decision matrix
When I’m designing a new service I ask these questions:
- Who’s the consumer? Internal service, mobile app, or public developer?
  - Internal service: gRPC preferred
  - Mobile app: either works; REST is more common
  - Public API: REST (gRPC creates developer friction)
- Is performance critical?
  - Sub-millisecond latency required: gRPC
  - 50 ms latency is fine: REST is enough
- Do you need streaming?
  - Yes: think gRPC from the start
  - No: REST works
- What does the team look like?
  - Polyglot, service-oriented: gRPC
  - WordPress / PHP heavy, single language: REST
- Tooling maturity?
  - Kubernetes plus a service mesh (Istio/Linkerd) in place: gRPC slots in well with the mesh
  - Classic LAMP or single-server setup: REST is simpler
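The questions above can be folded into a toy decision function. The thresholds and inputs are illustrative, not a formula; real decisions weigh these factors against each other:

```python
def pick_protocol(consumer: str, needs_streaming: bool,
                  latency_budget_ms: float, polyglot: bool) -> str:
    """Toy encoding of the decision matrix; thresholds are illustrative."""
    if consumer == "public":
        return "REST"    # developer friction outweighs raw performance
    if needs_streaming or latency_budget_ms < 1 or polyglot:
        return "gRPC"
    return "REST"        # default to the simpler option

# Internal, stream-heavy service:
print(pick_protocol("internal", True, 50, False))   # -> gRPC
# Public API with a relaxed latency budget:
print(pick_protocol("public", False, 50, False))    # -> REST
```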
## The hybrid approach is winning
At most larger companies the pattern looks like this:
- Internal service-to-service: gRPC
- Edge (public API, client-facing): REST (or GraphQL)
- A gateway layer bridges the two (gRPC backend, REST edge)
This setup pulls the strengths from both worlds. Performance from gRPC, ecosystem from REST.
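One common way to build that gateway layer is grpc-gateway, which generates a REST reverse proxy from HTTP annotations in the .proto itself. A sketch, reusing the hypothetical order service names from earlier:

```protobuf
import "google/api/annotations.proto";

service OrderService {
  rpc GetOrder(GetOrderRequest) returns (Order) {
    // Maps this RPC onto a REST endpoint at the edge;
    // internal callers still speak plain gRPC to the same service.
    option (google.api.http) = {
      get: "/v1/orders/{order_id}"
    };
  }
}
```

One contract, two surfaces: internal consumers get gRPC performance, external consumers get a conventional REST path.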
## The surprise costs of moving to gRPC
If a team wants to migrate to gRPC, these are the things to plan for:
Load balancers. Connection-level (L4) balancers don’t distribute gRPC traffic well: HTTP/2 multiplexes many calls over one long-lived connection, so all of a client’s requests stick to a single backend. You need an L7, gRPC-aware balancer such as Envoy or Linkerd.
Deadline and retry semantics. In gRPC, deadlines have to propagate via context, and retries have to be idempotency-aware. It needs more care than REST.
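Conceptually, deadline propagation means every hop shares one shrinking time budget rather than setting its own fresh timeout. Real code uses the grpc library's context and deadlines; this stdlib-only sketch just models the budget:

```python
import time

class Deadline:
    """Conceptual stand-in for gRPC deadline propagation (real services
    carry this in the grpc context; only the time budget is modeled here)."""
    def __init__(self, budget_s: float):
        self.expires_at = time.monotonic() + budget_s

    def remaining(self) -> float:
        return self.expires_at - time.monotonic()

def call_downstream(deadline: Deadline) -> str:
    # Each hop receives the *same* deadline, not a fresh one, so the
    # total budget shrinks as the request crosses the service graph.
    if deadline.remaining() <= 0:
        raise TimeoutError("DEADLINE_EXCEEDED")  # mirrors gRPC's status name
    return "ok"

dl = Deadline(budget_s=0.250)   # caller sets a 250 ms end-to-end budget
result = call_downstream(dl)    # this hop still has budget left
```

Retries add a second constraint on top: a retried call must be idempotent, because the first attempt may have partially succeeded before the deadline fired.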
Observability. You need OpenTelemetry instrumentation for tracing, Prometheus metrics (with gRPC-specific labels) for monitoring. The “just grep HTTP access logs” reflex from REST doesn’t work.
Breaking change avoidance. Never change an existing field number in a .proto, and when you remove a field, mark its number (and ideally its name) as reserved so it can’t be reused. Otherwise wire-format compatibility breaks: old clients will decode a new field’s bytes as the old field.
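The `reserved` keyword is how you lock this in. A sketch, again with the hypothetical Order message:

```protobuf
message Order {
  reserved 4;              // number of a removed field: never reuse it
  reserved "coupon_code";  // reserving the old name too prevents accidents

  string order_id = 1;
  int64 total_cents = 2;
  repeated string item_ids = 3;
  // A replacement field takes a fresh number (5), never the reserved 4.
  string promo_id = 5;
}
```

With the reservation in place, `protoc` rejects any future attempt to reuse number 4 or the name `coupon_code`, turning a silent wire-format bug into a compile-time error.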
## Bottom line
gRPC vs REST stopped being an ideological debate a while ago; it’s a use-case call. Teams that buried REST too early find themselves isolated in the public API space. Teams that never try gRPC leave performance on the table in a microservice world.
Choose by consumer, by team, and by where the product is in its lifecycle. A dogmatic one-sided preference is often the decision you have to revisit 2 years later.