That's not at all what I learned from working with microservices. The worst thing you can do is make sync calls between services: you end up with high aggregate failure rates and/or large request latency from retries, plus low uptime due to dependency coupling.
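To make the aggregate-failure point concrete: if a request fans out synchronously through several services, its success probability is roughly the product of each dependency's availability (assuming independent failures, which is a simplification). A quick sketch:

```python
# Rough illustration: the success probability of a sync call chain is the
# product of each dependency's availability (assuming independent failures).

def chain_availability(availabilities):
    """Probability that every service in a sync call chain responds successfully."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Five services, each with "three nines" (99.9%) availability:
print(round(chain_availability([0.999] * 5), 5))  # 0.99501 -> ~0.5% of requests fail
```

So a modest chain of three-nines services already loses a nine; retries can recover some of that, at the cost of latency.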
One-way async dataflows work much better. That way each microservice ends up with all the (potentially delayed) data it needs to respond to requests.
If you have services with lots of dependencies on other services, then you've probably ended up with the worst of both worlds: a distributed monolith.
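A minimal sketch of the one-way pattern, with hypothetical names (an in-memory queue stands in for a real broker): each downstream service consumes upstream events into its own local store and answers requests from that store, never calling the producer synchronously.

```python
import queue

# Hypothetical one-way dataflow: an upstream service publishes events, and a
# downstream service keeps its own local copy of the data to serve reads.

events = queue.Queue()  # stands in for a real broker (Kafka, SQS, ...)

def publish_user_updated(user_id, email):
    events.put({"type": "user_updated", "user_id": user_id, "email": email})

class NotificationService:
    """Consumes user events; serves requests from a local (possibly stale) copy."""
    def __init__(self):
        self.users = {}  # local read model, updated asynchronously

    def drain(self):
        # In a real deployment this would be a long-running consumer loop.
        while not events.empty():
            evt = events.get()
            if evt["type"] == "user_updated":
                self.users[evt["user_id"]] = evt["email"]

    def email_for(self, user_id):
        # No sync call to the user service -- answer from local data.
        return self.users.get(user_id)

publish_user_updated(42, "a@example.com")
svc = NotificationService()
svc.drain()
print(svc.email_for(42))  # a@example.com
```

If the user service goes down, the notification service keeps serving from its local copy; the data is merely delayed, not unavailable.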
Typically one-way vs. two-way isn't a choice you get to make as an engineer: most processes in your business require two-way, because they are initiated by a user, and the user wants a definitive response.
The "potentially delayed" approach is very tempting for engineers, but it should be exception rather than the rule.
It
- dilutes responsibilities between services (who is responsible for delays? is service A producing too many messages or is service B too slow to process them?)
- makes your SLA vague (message was processed 3 days later, do we treat it as downtime or not?)
- requires more infrastructure & processes (every service has a queue, dead-letter queue, and a process to deal with dead letters)
- requires a ton of monitoring overhead (what delay is acceptable? how do we even measure delay? what if different messages have different SLAs? do we end up with a monitor per message type?)
- introduces a lot of unnecessary complexity and rules (how do you deal with TOCTOU? e.g. an admin deactivates a user, but by the time the message gets processed they're no longer an admin)
- ruins user experience (we received your payment information, but we won't immediately tell you that it's wrong).
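The TOCTOU point above can be sketched as a re-check at processing time (hypothetical names; whether stale commands should be dropped or honored is itself a product decision you now have to make):

```python
# Hypothetical TOCTOU guard: permissions are re-checked when the queued
# message is processed, not trusted from when it was enqueued.

admins = {"alice"}            # current admin set (may change while the message waits)
active_users = {"bob": True}

def handle_deactivate(msg):
    """Process a queued 'deactivate user' command."""
    # Time-of-check: the actor was an admin when the message was enqueued.
    # Time-of-use: verify the actor is STILL an admin now.
    if msg["actor"] not in admins:
        return "rejected: actor lost admin rights while message was queued"
    active_users[msg["target"]] = False
    return "deactivated"

msg = {"actor": "alice", "target": "bob"}
admins.discard("alice")        # alice is demoted before the queue is drained
print(handle_deactivate(msg))  # rejected: actor lost admin rights while message was queued
```

With sync calls this check happens once, inside one request; with queues, every handler has to decide how to treat commands whose preconditions have gone stale.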
Despite its downsides, the potentially-delayed approach can be a fine tradeoff when it saves you 7-8 digits per year. Most companies never reach this phase.
A lot of this depends on implementation and infrastructure, which is of course an additional detail. In an example I'm recalling, it was a communications app with services for content, users, groups, sending, and receiving. Sending a message would save content with an id, include user/group recipient ids, and write to the send service with them. Each service, if it accepts a request, completes it unless the service is actually down. The user/group service seems like it could be a sync service, but in practice the client caches a list of users/contacts or can search for them.
By abstracting content, the only things that needed to change for new types of content were the content service and the clients. Abstracting recipients, which can be users or groups, meant that the only service that needed to care about this detail was the one that replicates sent messages to inboxes in the receiving service. Because of the use of content ids and user/group ids, this is all small, idempotent/immutable metadata. The system was complex (yet became manageable over time), and onboarding onto each service was immediate.
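The idempotent/immutable-metadata point can be sketched like this (hypothetical names): because inbox writes are keyed by content id, redelivering the same message is harmless, which is exactly what makes at-least-once queues tolerable.

```python
# Hypothetical idempotent fan-out: inbox writes are keyed by content id,
# so a redelivered message is a no-op rather than a duplicate.

inboxes = {}  # recipient_id -> {content_id: metadata}

def deliver(recipient_id, content_id, sender_id):
    """Replicate a sent message into a recipient's inbox, idempotently."""
    inbox = inboxes.setdefault(recipient_id, {})
    if content_id in inbox:
        return False  # already delivered; safe to ack the duplicate
    inbox[content_id] = {"sender": sender_id}  # small immutable metadata
    return True

print(deliver("bob", "c-1", "alice"))  # True  (first delivery)
print(deliver("bob", "c-1", "alice"))  # False (redelivery is a no-op)
print(len(inboxes["bob"]))             # 1
```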
I think few have seen well-bounded microservice contexts, which leads to the idea that it's all just a bad distributed monolith. Also worth remembering that the advantages of microservices 'done right' are scaling to large numbers of developers and isolating failures.