Microservice granularity: how small is small enough?

Go too big or too small on microservices and both bite you. Here are the criteria I use and real examples from production projects.

A startup was arguing architecture. The CTO wanted “a microservice per endpoint”. We were four engineers, and by the math that meant forty services. Four people, forty services. That was the first time I said out loud “this is insane”. Here’s what I’ve learned about microservice granularity on real projects.

The too-small trap (nano-services)

A service per individual function. “Email validation service”, “password hash service”, “phone format service”. Each with its own deploy, its own repo, its own tests, its own monitoring.

What goes wrong:
– Network calls explode. A single request bounces through 10 services and the latency compounds.
– Distributed tracing becomes mandatory, debugging gets harder.
– Most of the team is writing inter-service plumbing instead of product.
– Deploy coordination is impossible. Everything gets versioned APIs and you spend your life avoiding breaking changes.
– Shared code either gets duplicated or becomes a shared library, and the shared library drags every service with it on every change.
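The latency and reliability cost of long call chains is easy to underestimate. A back-of-envelope sketch (all numbers made up for illustration) shows how per-hop latency adds up and per-hop availability multiplies down:

```python
# Illustrative math: how latency and availability compound across a call chain.
# The per-hop numbers are invented for the example.
per_hop_latency_ms = 20       # median network + processing time per hop
per_hop_availability = 0.999  # each service succeeds 99.9% of the time

for hops in (1, 5, 10):
    total_latency_ms = hops * per_hop_latency_ms
    chain_availability = per_hop_availability ** hops
    print(f"{hops:2d} hops: ~{total_latency_ms} ms, {chain_availability:.2%} available")
```

Ten hops at 20 ms each is already ~200 ms before any real work happens, and ten 99.9%-available services in series drop the chain to roughly 99% availability.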

The too-big trap (monolith-ish microservice)

One service covering 10 different domains. “User service” holds auth, profile, billing, notification preferences, avatar upload. 30 endpoints, 15 tables.

What goes wrong:
– Every deploy touches a large surface.
– Multiple teams send PRs to the same service and conflict a lot.
– Scaling isn’t granular. Billing traffic spikes and you’re scaling the whole service.
– Test runs get long, CI gets slow.

Criteria for the right granularity

There’s no single rule. The criteria I use:

  1. Bounded context (DDD). Services split along domain boundaries. Authentication, billing, inventory, notification. Each with its own domain language. It’s safer to keep a domain together than to slice it.
  2. Team ownership. If two teams own the same service, something’s already wrong. “You build it, you run it.” The number of services can exceed the number of teams, but a team shouldn’t own more than 3 to 5 services.
  3. Deploy cadence. Parts that deploy at the same frequency can live in the same service. Payment flow deploys weekly, notification flow deploys daily. Splitting those makes sense.
  4. Failure domain. One part failing should not take down another. If checkout crashes, search should still work.
  5. Scaling profile. Image processing is CPU-heavy, a regular API is I/O-heavy. If you need to scale them separately, they’re separate services.
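These criteria can be treated as a rough checklist. A toy heuristic (the threshold of 3 is my arbitrary choice, not a formal rule):

```python
# Toy heuristic: count how many of the five split criteria apply to a
# candidate boundary. The threshold is arbitrary, not a formal method.
CRITERIA = [
    "distinct bounded context",
    "single team can own it",
    "different deploy cadence",
    "separate failure domain",
    "different scaling profile",
]

def should_split(answers: dict, threshold: int = 3) -> bool:
    """Recommend a split when at least `threshold` criteria hold."""
    return sum(bool(answers.get(c)) for c in CRITERIA) >= threshold

print(should_split({
    "distinct bounded context": True,
    "different deploy cadence": True,
    "different scaling profile": True,
}))  # three criteria hold -> True

print(should_split({"distinct bounded context": True}))  # one -> False
```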

Concrete example: e-commerce

For a 10 to 15 person team I’d ballpark 8 to 12 microservices:

  • Auth (login, registration, token, password reset)
  • User (profile, preferences, address book)
  • Catalog (product, category, search index)
  • Inventory (stock, warehouse, availability)
  • Cart (basket state)
  • Checkout (order creation, payment orchestration)
  • Payment (gateway integration, refund)
  • Fulfillment (shipment, tracking)
  • Notification (email, SMS, push)
  • Review (ratings, comments)
  • Recommendation (personalisation)
  • Analytics (events, reporting)

Even if each one isn’t owned by a separate team, they’re thematically separate. Auth and User look close, and the same team often owns both, but splitting them earns you independent scaling later.

The “modular monolith” alternative

Important caveat: microservices shouldn’t be the first move on every project. For most startups, a modular monolith is a better fit. Slice the code by domain, but ship it as one deployment. As you grow, you decide which module earns its own service.
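A minimal sketch of what “slice by domain, ship as one deployment” can look like. The module names are hypothetical; in a real repo each class below would be its own package, with other domains calling only its narrow public API:

```python
# One deployable app, sliced by domain. Each domain exposes a narrow
# interface; other domains call only that, never internal helpers.
# (Inlined as classes for brevity; in a real codebase these are packages.)

class BillingModule:
    def charge(self, user_id: str, amount_cents: int) -> str:
        # Internal billing logic stays hidden behind this method.
        return f"invoice-{user_id}-{amount_cents}"

class NotificationModule:
    def send_receipt(self, user_id: str, invoice_id: str) -> None:
        print(f"receipt for {invoice_id} sent to {user_id}")

class App:
    """The single deployment unit, wiring the domain modules together."""
    def __init__(self) -> None:
        self.billing = BillingModule()
        self.notifications = NotificationModule()

    def checkout(self, user_id: str, amount_cents: int) -> str:
        invoice_id = self.billing.charge(user_id, amount_cents)
        self.notifications.send_receipt(user_id, invoice_id)
        return invoice_id

print(App().checkout("u1", 4200))
```

Because the boundaries are already in the code, promoting one module to its own service later means swapping an in-process call for a network call, not untangling a ball of mud.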

Signals that it’s time to split:

  • One module scales at a very different rate from the others.
  • One module needs a different stack (Python ML vs Node.js web).
  • The team has grown and a subteam now owns one module alone.
  • Deploy cycles are starting to clash.

Migration strategy

Going from monolith to microservices via the strangler fig pattern works well. New features are born as microservices, existing modules peel off over time. Big-bang rewrites usually fail.
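The core of the strangler fig is a routing layer in front of everything. A toy sketch (paths and upstream URLs are hypothetical): migrated paths go to the new services, everything else still hits the monolith.

```python
# Toy strangler-fig router: requests for migrated path prefixes go to the
# new service; everything else falls through to the monolith.
# All paths and upstream URLs here are hypothetical.
MIGRATED_PREFIXES = {
    "/notifications": "http://notification-svc",
    "/search": "http://search-svc",
}
MONOLITH = "http://monolith"

def route(path: str) -> str:
    for prefix, upstream in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return upstream
    return MONOLITH

print(route("/search/products"))  # -> http://search-svc
print(route("/checkout"))         # -> http://monolith
```

As modules peel off, entries move into the routing table; the monolith shrinks without a big-bang cutover.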

Measure: what does each service cost?

Every microservice carries a cost:

  • Repo, CI/CD pipeline
  • Monitoring, logging, alerting
  • Deployment, rollback
  • Documentation
  • Runbook
  • On-call
  • Inter-service authentication

I put the yearly operational cost (dev time plus infra) of one service at around $15k to $25k. If the benefit doesn’t clear that, it shouldn’t be its own service.
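That rule of thumb turns into a one-line break-even check. The cost range is the article’s ballpark; the benefit figures below are invented for illustration:

```python
# Back-of-envelope: does a candidate service clear its yearly carrying cost?
# The $15k-$25k range is the article's ballpark; benefits here are made up.
YEARLY_COST_LOW_USD = 15_000
YEARLY_COST_HIGH_USD = 25_000

def worth_splitting(yearly_benefit_usd: float) -> str:
    if yearly_benefit_usd >= YEARLY_COST_HIGH_USD:
        return "split"
    if yearly_benefit_usd >= YEARLY_COST_LOW_USD:
        return "borderline"
    return "keep in existing service"

print(worth_splitting(40_000))  # split
print(worth_splitting(18_000))  # borderline
print(worth_splitting(5_000))   # keep in existing service
```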

Conway’s Law

Architecture mirrors your org’s communication structure. However your teams are organised, that’s how your services end up. When you’re drawing service boundaries, “can this team own this?” is a key question.

Takeaway

Microservices are a tool, not a religion. The right granularity is different in every project. Weigh team, business requirements, and scale together. Don’t treat “small is better” or “big is better” as a fixed belief; decide per context.
