First, decide if you should migrate at all

Microservices are not a goal. They are a trade-off. They add operational complexity, network latency, distributed-systems failure modes, and organizational coordination cost. In exchange, they can unlock independent deployment, isolated scaling, technology heterogeneity, and team autonomy. Before drafting a single decomposition diagram, ask whether you actually need any of those benefits.

The honest test is simple. Look at the friction your monolith causes today. Are deployments getting slower? Are teams blocked by each other's release schedules? Are some parts of the system scaling differently than others? Do you have a part of the codebase that should evolve with a different stack? If the answer is no, the monolith is probably fine. A well-modularized monolith with clean internal boundaries can outperform a poorly designed distributed system for years.

If the answer is yes, microservices are one valid response, but they are not the only one. A modular monolith with strict module boundaries, separate deployment artifacts, and clear ownership often delivers most of the benefit at a fraction of the cost. We have moved teams to that intermediate step many times before deciding whether full decomposition was needed. It bought clarity without buying complexity.
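Strict module boundaries in a modular monolith can be enforced mechanically, not just by convention. A minimal sketch of such a check, with hypothetical module names and an invented `ALLOWED_DEPENDENCIES` rule table (real projects would parse the import graph from the codebase):

```python
# Hypothetical boundary rules for a modular monolith. Which module may
# depend on which is a design decision; these names are illustrative.
ALLOWED_DEPENDENCIES = {
    "billing": {"shipment"},                     # billing may read shipment
    "shipment": set(),                           # shipment depends on nothing internal
    "notifications": {"billing", "shipment"},    # notifications reacts to both
}

def check_import(importing_module: str, imported_module: str) -> bool:
    """Return True if the dependency is allowed by the boundary rules."""
    allowed = ALLOWED_DEPENDENCIES.get(importing_module, set())
    return imported_module in allowed

# A CI step could walk the real import graph and fail on violations.
observed_imports = {"shipment": {"billing"}}     # e.g. parsed from source
violations = [
    (src, dst)
    for src, deps in observed_imports.items()
    for dst in deps
    if not check_import(src, dst)
]
```

Running a check like this in CI turns module boundaries from a diagram into a guardrail, which is most of what a service boundary would buy, without the network in between.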

Find the bounded contexts before naming the services

The biggest mistake in microservices migrations is splitting by technical concern (the "user-service", the "auth-service", the "database-service") instead of by business concern. Services that share a domain bleed into each other forever. The right unit of decomposition is the bounded context from Domain-Driven Design: a section of the business that has its own language, its own rules, its own model.

For a logistics company, contexts may be "shipment", "billing", "warehouse", "route planning", and "customer notifications". For a retail platform, they may be "catalog", "pricing", "inventory", "checkout", and "fulfillment". The right names come from the business, not from the codebase. If two teams describe the same entity in slightly different ways, that is usually a sign of two contexts hiding inside one model.

A practical first exercise is event storming: gather the engineers and the people closest to the business in one room (virtual or physical) and walk through the lifecycle of a real customer event. List the domain events ("OrderPlaced", "PaymentAuthorized", "ShipmentDispatched"), then group them. The natural clusters are usually your future services.
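The output of that session can be captured as a simple mapping from event to owning context and then clustered. A sketch, with illustrative event and context names:

```python
from collections import defaultdict

# Illustrative result of an event-storming session: each domain event
# is attributed to the context that owns its business rule.
domain_events = {
    "OrderPlaced": "checkout",
    "PaymentAuthorized": "billing",
    "PaymentCaptured": "billing",
    "ShipmentDispatched": "fulfillment",
    "ShipmentDelivered": "fulfillment",
}

def cluster_by_context(events: dict[str, str]) -> dict[str, list[str]]:
    """Group event names by the context that owns them."""
    contexts = defaultdict(list)
    for event, context in events.items():
        contexts[context].append(event)
    return dict(contexts)
```

Each cluster is a candidate service; events that resist assignment to a single context are a sign the boundary is in the wrong place.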

Signals you found a real bounded context

  • It owns a clear set of business rules that the rest of the system does not need to know about.
  • It has its own vocabulary. The same word means slightly different things outside it.
  • It can publish events that other contexts react to, instead of being called synchronously every time.
  • A single team could own it end to end without coordinating daily with every other team.

Apply the strangler fig pattern instead of a big rewrite

The classic strangler fig pattern, named by Martin Fowler, is the safest way to migrate a monolith. The idea is simple: you do not rewrite the system. You place a routing layer in front of it, then peel off one capability at a time into a new service. Over time, the new system grows and the monolith shrinks, until the monolith is decommissioned.

This works because every step delivers value. You never carry a multi-quarter rewrite on your back. You never tell the business "wait twelve months and you will see something." Every increment is in production, observable, reversible.

Practically, the strangler fig usually starts with an API gateway, a reverse proxy, or a feature flag in front of the existing application. New routes are sent to the new service; existing routes continue to hit the monolith. As confidence grows, more routes shift. If a new service misbehaves, traffic can be rolled back to the monolith with a configuration change, not a release.
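The routing layer itself can be very small. A hedged sketch of the idea, with hypothetical upstream URLs and path prefixes (in practice this would be nginx, an API gateway, or a service mesh rule rather than application code):

```python
# Strangler-fig routing table: path prefixes already extracted go to
# new services; everything else falls through to the monolith.
# Upstream URLs and prefixes are hypothetical.
ROUTES = {
    "/notifications": "http://notifications-service",  # extracted first
    "/search": "http://search-service",                # extracted recently
}
MONOLITH = "http://monolith"

def route(path: str) -> str:
    """Pick the upstream for a request path; default to the monolith."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream
    return MONOLITH

# Rolling back a misbehaving service is removing its entry, not a release:
# del ROUTES["/search"]  ->  /search traffic returns to the monolith.
```

The important property is the default: any path not explicitly claimed by a new service still reaches the monolith, so extraction can proceed one prefix at a time.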

The first capability to extract is rarely the most obvious one. Resist the temptation to start with the "biggest mess". Start with something that has a clear bounded context, a low blast radius if it fails, and meaningful business value when extracted. Notification systems, search, recommendations, audit logs, and reporting are common first candidates.

The hardest part is the data

Code is the easy part of a decomposition. Data is where projects get stuck. A shared database is the most common anti-pattern: two services that supposedly own different domains but secretly join the same tables defeat the purpose of decomposition. They will deploy together. They will fail together. They will evolve together. They are not microservices.


The principle is that each service should own its data. Other services read it through APIs or events, not by reaching into the database. This is uncomfortable at first because the monolith probably has hundreds of joins across what should be different domains. Untangling them is the real work.

A useful pattern is the "expand, migrate, contract" approach. First, expand the data model by adding the new shape side by side with the old one. Then migrate consumers one at a time to the new shape. Finally, contract by deleting the old structure. At every step, the system keeps running. Nothing is rewritten in one shot.
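During the migrate phase, the writer typically populates both shapes so readers can move over independently. A sketch of that dual write, with hypothetical field names (splitting a free-text address into structured columns):

```python
def write_customer(record: dict) -> dict:
    """Expand phase: write the old shape and the new shape side by side.

    Field names are illustrative. Old readers keep using "address";
    migrated readers use the structured fields.
    """
    row = dict(record)
    # New shape, derived from the old single-column value.
    street, _, city = record["address"].partition(",")
    row["address_street"] = street.strip()
    row["address_city"] = city.strip()
    return row  # "address" is deleted only in the final contract step
```

Because both shapes coexist, each consumer migrates on its own schedule, and the contract step is a cheap cleanup rather than a risky cutover.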

For high-volume systems, event-carried state transfer is often the cleanest way to keep services independent without nightly batch jobs. The owning service publishes events whenever its data changes; downstream services keep their own local read model updated. This is asynchronous, scalable, and resilient to outages, but it requires careful thinking about ordering, idempotency, and eventual consistency.
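The idempotency requirement in particular is easy to state and easy to get wrong. A minimal sketch of an idempotent consumer maintaining a local read model; the event shape (`event_id`, `sku`, `price_cents`) is an assumption for illustration:

```python
class PriceReadModel:
    """Downstream read model fed by events from the owning service."""

    def __init__(self) -> None:
        self.prices: dict[str, int] = {}   # local copy of upstream data
        self.seen: set[str] = set()        # processed event ids (dedupe)

    def handle(self, event: dict) -> None:
        """Apply a price-changed event; redelivery is a no-op."""
        if event["event_id"] in self.seen:
            return  # at-least-once delivery makes duplicates inevitable
        self.seen.add(event["event_id"])
        self.prices[event["sku"]] = event["price_cents"]
```

A production version also needs an ordering strategy (for example, a per-entity version number so a stale event cannot overwrite a newer one) and persistence for the dedupe set; this sketch only shows the idempotency skeleton.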

Conway's law is undefeated

"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." This is Conway's law, and it is the most underestimated force in microservices migrations. If you split the system into ten services but keep one team owning the whole thing, you have a distributed monolith. Worse, you now own the operational complexity of microservices without any of the autonomy benefits.

A successful decomposition almost always pairs with an organizational change. Each bounded context should map to a team that can own it end to end: design, development, deployment, monitoring, and on-call. If the team is too small to own it sustainably, the context is probably too small. If the team is too large to coordinate quickly, the context is probably too large.

This is also where leadership matters. The migration cannot succeed without explicit decisions about ownership, accountability, and shared platform investment. Without those, every team will redo the same plumbing, every service will reinvent observability, and the cost will explode.

Metrics to track migration success

Microservices migrations should be tracked with engineering metrics, not service counts. The number of services is not a goal. The goal is faster delivery, better reliability, and clearer ownership. Useful indicators include:

  • Deployment frequency. Are independent services deploying multiple times per week, or are they still chained to a quarterly release?
  • Lead time for change. From code commit to production, how long does a small fix take?
  • Change failure rate. Of deployments, how many require rollback or hotfix?
  • Mean time to recovery. When something breaks, how fast can the team detect and resolve it?
  • Coupling indicators. How often does a change in one service require a coordinated deploy with another? This number should fall over time.

These metrics, popularized by the DORA (DevOps Research and Assessment) program and the Accelerate book, are reliable proxies for whether the decomposition is actually paying off. If they improve, the migration is healthy. If they do not, more services will not fix the problem.
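These indicators are simple to compute once deployments are logged. A sketch of one of them, change failure rate, over a hypothetical deploy log (the record format is an assumption):

```python
# Hypothetical deploy log; "failed" means the deploy needed a
# rollback or hotfix. Real data would come from the CI/CD system.
deploys = [
    {"service": "search", "failed": False},
    {"service": "search", "failed": True},
    {"service": "billing", "failed": False},
    {"service": "billing", "failed": False},
]

def change_failure_rate(log: list[dict]) -> float:
    """Fraction of deployments that required a rollback or hotfix."""
    failed = sum(1 for d in log if d["failed"])
    return failed / len(log)
```

The same log supports deployment frequency (deploys per service per week) and, with timestamps, lead time; the point is that the data source is mechanical, not a survey.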

Common mistakes we see in real migrations

Over years of helping teams modernize, the same handful of mistakes appear over and over. They are worth naming clearly so you can spot them in your own program.

Splitting by table

Carving services along database tables instead of business capabilities. You end up with anemic services that always call each other to do anything useful.

Shared database

Two services reading and writing the same tables. You did not decompose anything; you just added network hops.

Distributed monolith

Services that must always be deployed together. Worst of both worlds: complexity of distributed systems with the coupling of a monolith.

Big-bang rewrite

Building the new platform on the side for a year before going live. Risk concentrates at the cutover. Use strangler fig instead.

No platform investment

Every team builds their own logging, CI, secrets management, and tracing. The cost of operating ten services becomes unbearable.

No clear ownership

Services with no owning team turn into orphans, then incidents, then technical debt that nobody wants to touch.

Final takeaway

A good migration is not measured by how many services exist at the end. It is measured by how much faster the business can change. If your decomposition is making delivery slower, the architecture is wrong, regardless of how clean the diagrams look.

Planning a decomposition or microservices migration?

If you want help designing a realistic decomposition roadmap, identifying bounded contexts, and avoiding the most expensive mistakes, we would be glad to talk.

Talk to Soutello IT about your modernization