You have a backend service, and both the web frontend and the mobile app consume it. Their needs differ: web wants richer data, while mobile cares about payload efficiency. You end up compromising, and neither side is happy.
The BFF (Backend for Frontend) pattern solves this. A dedicated backend layer per client. Here’s how I apply it in practice.
Problem: one API for all clients
Typical architecture:
[Mobile App] → [API] ← [Web Frontend]

One API for mobile and web. Endpoint responses are generic.
Mobile needs:
– Minimal payload (data volume matters)
– Specific fields (avatar thumbnail, not full size)
– Combined endpoints (fewer round trips)
– Offline-first considerations
Web needs:
– Rich data (dashboards, detail views)
– SEO considerations (SSR data)
– Complete information
– Multiple endpoints are fine
One-size-fits-all APIs force compromise. Usually mobile over-fetches and wastes bandwidth, or web under-fetches and pays for extra round trips.
BFF pattern
A separate backend layer per client:
[Mobile App] → [Mobile BFF] ↘
[Core Services]
[Web Frontend] → [Web BFF] ↗

The Mobile BFF is tuned for mobile needs, the Web BFF for web. Both consume the core services.
Pros:
– Optimal API per client
– Client teams can own their BFF
– Breaking changes stay in the relevant BFF
– Platform-specific optimization
Cons:
– Extra infrastructure (two BFFs to deploy, monitor, scale)
– Code duplication (same business logic in two places)
– Operational complexity
When BFF makes sense
Adds value:
- Multiple clients with significantly different needs. Mobile, web, TV, kiosk, smart watch.
- Strong platform teams. Mobile team ownership. Web team owns its side.
- Independent deployment cycles. Mobile weekly, web daily.
- Performance critical per client. Mobile data sensitivity vs web richness.
Doesn’t add value:
- Single client (web only or mobile only).
- Similar client needs.
- Small team (BFF maintenance overhead).
- Stable, slow-changing API.
Real example: e-commerce
I applied the BFF pattern on an e-commerce project.
Core services:
– ProductService
– OrderService
– UserService
– PaymentService
Mobile BFF endpoint: /mobile/products/:id
{
  "id": 123,
  "name": "iPhone 15",
  "price": 45000,
  "image": "https://cdn.../thumb-400.webp",
  "rating": 4.5,
  "stockStatus": "available"
}

Six fields. A ~200-byte payload.
Web BFF endpoint: /web/products/:id
{
  "id": 123,
  "name": "iPhone 15",
  "description": "Lorem ipsum...",
  "price": 45000,
  "oldPrice": 50000,
  "discount": 10,
  "images": [
    {"url": "...full-1920.webp", "alt": "..."},
    // 10+ images
  ],
  "specifications": { /* 50 fields */ },
  "relatedProducts": [ /* 10 items */ ],
  "reviews": [ /* 20 reviews */ ],
  "breadcrumbs": [ /* SEO */ ]
}

30+ fields. A 5KB+ payload.
Both BFFs consume ProductService, ReviewService, RecommendationService. The aggregation is what differs.
BFF architecture
How a BFF actually works:
- Client request lands at the BFF
- BFF fans out parallel calls to downstream services
- It aggregates the responses into a client-optimized shape
- Returns a single response to the client
Pseudocode:
// Mobile BFF
async function getProductDetails(productId) {
  // Fan out to the three downstream services in parallel
  const [product, stock, rating] = await Promise.all([
    productService.get(productId),
    inventoryService.getStock(productId),
    reviewService.getAverageRating(productId)
  ]);

  return {
    id: product.id,
    name: product.name,
    price: product.price,
    image: product.thumbnail, // small version only
    rating: rating.average,
    stockStatus: stock > 0 ? 'available' : 'out_of_stock'
  };
}

Three downstream calls in parallel. One optimized response.
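For contrast, the web BFF aggregates over the same kind of downstream services but returns the rich shape. This is a sketch; the service clients are stubbed for illustration, and the field names mirror the example responses above:

```javascript
// Web BFF — same fan-out pattern, richer aggregation (sketch).
// Real service clients are assumed; these stubs stand in for them.
const productService = {
  get: async (id) => ({ id, name: 'iPhone 15', description: '...', price: 45000, images: [] })
};
const reviewService = { getReviews: async (id) => [] };
const recommendationService = { getRelated: async (id) => [] };

async function getWebProductDetails(productId) {
  // Same parallel fan-out as the mobile BFF — only the output shape differs
  const [product, reviews, related] = await Promise.all([
    productService.get(productId),
    reviewService.getReviews(productId),
    recommendationService.getRelated(productId)
  ]);

  return {
    ...product,               // full product: description, images, etc.
    reviews,                  // complete review list for the detail page
    relatedProducts: related  // recommendations for cross-sell
  };
}
```

Same services, same parallelism; the aggregation logic is the only thing that changes per client.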
Tech stack choices
Which stack for the BFF?
Node.js/Express: Most common. Shared JavaScript with the frontend. Fast iteration.
GraphQL layer (Apollo): BFF-like functionality. Return only the fields the client queries for. Especially strong for web.
Go/Rust: Performance-critical BFFs. When latency is paramount.
Kotlin/Spring: for mobile BFFs, where the Android team tends to feel at home.
For most projects, Node.js is the pragmatic choice. Team skillset matters.
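The GraphQL option above gets per-client shaping from the query itself rather than from a dedicated endpoint. A sketch of the idea, with an illustrative schema (the type and field names are assumptions, not from the example project):

```graphql
type Product {
  id: ID!
  name: String!
  price: Int!
  image: String
  description: String
  reviews: [Review!]!
}

# A mobile client asks only for what it renders:
query MobileProduct {
  product(id: 123) {
    id
    name
    price
    image
  }
}
```

The server exposes one schema; each client's query selects its own subset, which is why a GraphQL layer can substitute for separate BFFs.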
BFF deployment
Option 1: alongside the clients. The mobile team owns the mobile BFF repo, the web team owns the web BFF repo. Ownership stays with each platform team.
Option 2: central infrastructure team. BFFs live in a separate team. Client teams only consume.
I prefer option 1. Keep ownership with the client team.
Deployment pattern:
– Mobile BFF behind an API gateway, multiple instances, horizontal scale
– Web BFF sits next to the web server (along with SSR), edge deployment (Cloudflare Workers) optional
Common antipatterns
1. Business logic inside the BFF. The BFF is an aggregation layer. Business rules belong in core services. If business logic leaks into the BFF, you get inconsistency across BFFs.
2. Database access from the BFF. Data access should go through core services. If the BFF talks to the DB directly, you bypass the core service and the invariants it enforces.
3. Coupling between BFFs. Mobile BFF calling Web BFF. The dependency graph turns into a mess.
4. Too many BFFs. Five clients, five BFFs. Maintenance load grows with every additional client.
BFF versioning
Mobile apps are versioned. So is the backend BFF.
/mobile/v1/products/:id
/mobile/v2/products/:id

Support v1 until it can be deprecated. Mobile app updates are slow, so older clients still hit v1.
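In practice this means both response shapes live side by side in the mobile BFF. A minimal sketch (the serializer fields are assumptions based on the example payloads above):

```javascript
// Versioned mobile BFF serializers living side by side (sketch).
// v1 clients keep getting the old shape until v1 can be retired.
const serializers = {
  v1: (p) => ({ id: p.id, name: p.name, price: p.price }),
  v2: (p) => ({ id: p.id, name: p.name, price: p.price, stockStatus: p.stockStatus })
};

function serializeProduct(version, product) {
  const serialize = serializers[version];
  if (!serialize) throw new Error(`unsupported API version: ${version}`);
  return serialize(product);
}
```

The `/mobile/v1/...` and `/mobile/v2/...` routes each call the same aggregation code and then pick their serializer, so deprecating v1 later means deleting one entry.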
Web usually stays on a single latest version. It updates with each web deploy.
Performance optimization
BFF performance matters. The latency the client sees is BFF plus downstream.
Tactics:
- Request parallelization. Whenever possible, fire downstream calls in parallel.
- Caching. Redis cache in the BFF, cache downstream responses.
- Connection pooling. Pool connections to downstream services.
- Circuit breaker. If downstream is slow, return a degraded response.
- Edge deployment. Web BFF at the CDN edge (Cloudflare Workers) for lower latency.
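The circuit-breaker tactic can be sketched in a few lines. This is a deliberately minimal version (consecutive-failure counting only; real breakers also track timeouts and half-open recovery):

```javascript
// Minimal circuit breaker for a downstream call (sketch).
// After `threshold` consecutive failures the circuit opens and the BFF
// immediately returns a degraded fallback instead of waiting on a slow service.
function circuitBreaker(fn, { threshold = 3, fallback }) {
  let failures = 0;
  return async (...args) => {
    if (failures >= threshold) return fallback; // circuit open: degrade
    try {
      const result = await fn(...args);
      failures = 0;                             // success resets the counter
      return result;
    } catch (err) {
      failures += 1;
      return fallback;                          // single failure: degrade too
    }
  };
}
```

Wrapped around, say, the reviews call, the product page still renders when the review service is down; it just omits the rating.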
Monitoring
What to monitor on BFFs:
- Request rate per client. Mobile vs web.
- Latency p50/p95/p99. Per endpoint.
- Downstream service response times. Which core service is slow?
- Error rate. Client-side vs server-side.
- Cache hit rate. Is caching actually helping?
Grafana dashboards per BFF. Automated anomaly detection.
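The latency percentiles above boil down to recording per-endpoint timings and reading them back sorted. A toy in-memory version (production BFFs would export these to Prometheus or similar instead):

```javascript
// Per-endpoint latency recording (sketch) — the raw data a p50/p95/p99
// dashboard panel is built from. In-memory only, for illustration.
const metrics = new Map(); // endpoint -> array of latencies in ms

function recordLatency(endpoint, ms) {
  if (!metrics.has(endpoint)) metrics.set(endpoint, []);
  metrics.get(endpoint).push(ms);
}

function percentile(endpoint, p) {
  const sorted = [...(metrics.get(endpoint) ?? [])].sort((a, b) => a - b);
  if (sorted.length === 0) return null;
  // nearest-rank percentile: smallest value covering p% of samples
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}
```

A middleware that calls `recordLatency(req.path, elapsedMs)` on every response is enough to feed per-endpoint p50/p95/p99 panels.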
Wrap-up
The BFF pattern improves both performance and developer experience for multi-client systems. Optimal API per client.
Cost: extra infrastructure, operational complexity. Overkill on a small team. Essential on a large team with multiple clients.
GraphQL is a modern variant of the BFF idea. Return only the fields the client queries for. Similar benefits without writing a dedicated BFF.
When deciding, look at client diversity, team structure, and performance requirements.