Integration is where most Salesforce projects break. Not during discovery, not in UAT — they break in production, six months later, when volume has tripled and nobody remembers why that HTTP call is sitting there without a retry.
This is the first post in a three-part series. I start with patterns because getting this wrong is expensive: refactoring a synchronous integration to asynchronous after it's live is a special kind of pain.
The Four Main Patterns
Every Salesforce integration fits into one of these four patterns — or a combination of them.
Synchronous Request-Response
The user clicks, Salesforce calls the external system, waits for the response, and continues. Simple to understand, simple to debug.
The problem: the external system controls the response time. The default Apex callout timeout is 10 seconds (you can raise it to 120 via setTimeout), so a call that takes 8 seconds leaves you two seconds from a failure you didn't design for. And a silent timeout is worse than an explicit error.
Use when: the response is immediately required to continue the flow (CPF validation (Brazil's national tax ID), real-time inventory lookup).
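A minimal sketch of the defensive version, assuming a hypothetical Named Credential called CPF_Service; the endpoint, timeout value, and exception type are illustrative, not prescriptive:

```apex
public with sharing class CpfValidationClient {
    public class CpfServiceException extends Exception {}

    public static Boolean isValid(String cpf) {
        HttpRequest req = new HttpRequest();
        // 'CPF_Service' is a hypothetical Named Credential; it keeps
        // credentials out of the code.
        req.setEndpoint('callout:CPF_Service/validate?cpf=' + EncodingUtil.urlEncode(cpf, 'UTF-8'));
        req.setMethod('GET');
        // The default callout timeout is 10,000 ms. Set it explicitly so
        // the limit is a design decision, not an accident (max 120,000 ms).
        req.setTimeout(5000);
        try {
            HttpResponse res = new Http().send(req);
            return res.getStatusCode() == 200;
        } catch (System.CalloutException e) {
            // Timeouts surface here ('Read timed out'). Fail loudly
            // instead of letting the user wait on a silent timeout.
            throw new CpfServiceException('CPF validation unavailable: ' + e.getMessage());
        }
    }
}
```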
Asynchronous Queue (Platform Events / Pub-Sub)
Salesforce publishes an event; the consumer processes it when ready. The producer doesn't wait.
This is the pattern I recommend most for write integrations. High volume, isolated failure, natural retry. The trade-off is that you lose the immediate response — if the external system rejects the record, you'll find out later.
Use when: volume is high, latency doesn't need to be zero, and you can handle asynchronous failure.
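To make that trade-off concrete, here's a sketch of both sides. Order_Sync__e is a hypothetical platform event with a single Payload__c text field, and OrderSyncHandler stands in for your own processing class:

```apex
// Producer: publish and move on. The only synchronous feedback is
// whether the event was queued, not whether the consumer will succeed.
Database.SaveResult sr = EventBus.publish(
    new Order_Sync__e(Payload__c = JSON.serialize(orderData)) // orderData: your record
);
if (!sr.isSuccess()) {
    // Publishing itself failed; log it. This is the last you'll hear synchronously.
}
```

```apex
// Consumer: a platform event trigger with retry on transient failure.
trigger OrderSyncTrigger on Order_Sync__e (after insert) {
    try {
        OrderSyncHandler.process(Trigger.new); // hypothetical handler
    } catch (Exception e) {
        if (EventBus.TriggerContext.currentContext().retries < 4) {
            // Asks the platform to redeliver the batch. Throw this too
            // often and Salesforce suspends the trigger.
            throw new EventBus.RetryableException(e.getMessage());
        }
        // Out of retries: park the payload somewhere for manual reprocessing.
    }
}
```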
Scheduled Batch
Classic ETL. An Apex job or Data Loader runs on a fixed schedule and syncs data in bulk. Predictable, cheap, easy to monitor.
The real problem: the stale-data window. If the batch runs every hour, you have up to 59 minutes of divergence between systems. For most reference data that's acceptable. For dynamic inventory or pricing, it isn't.
Use when: data doesn't need to be in sync in real time and volume makes individual calls impractical.
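A skeleton of the pattern; the object, query, and batch size below are illustrative, so swap in your own sync logic:

```apex
global class ProductSyncBatch implements Database.Batchable<SObject>, Database.AllowsCallouts, Schedulable {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Name FROM Product2 WHERE LastModifiedDate = YESTERDAY'
        );
    }
    global void execute(Database.BatchableContext bc, List<Product2> scope) {
        // Push this chunk to the external system here.
    }
    global void finish(Database.BatchableContext bc) {
        // Batch jobs don't retry themselves (see the table below):
        // inspect AsyncApexJob here and alert or re-enqueue on failure.
    }
    global void execute(SchedulableContext sc) {
        Database.executeBatch(new ProductSyncBatch(), 200);
    }
}
```

Scheduling it hourly is then one line of anonymous Apex: System.schedule('Product sync', '0 0 * * * ?', new ProductSyncBatch());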
Change Data Capture (CDC)
Salesforce emits an event every time a record is created, updated, or deleted. The external consumer subscribes to the channel and reacts to the change.
Extremely powerful for keeping external systems in sync without polling. The limitation is direction: CDC only streams changes out of Salesforce, so if the external system needs to write back, you still need a separate inbound path.
Use when: you need an external system to react to Salesforce changes in near-real-time.
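If you just want to see the event shape, the cheapest way is an in-org Apex change event trigger; real external consumers usually subscribe via the Pub/Sub API or CometD instead. This sketch assumes CDC is enabled for Account:

```apex
trigger AccountChangeTrigger on AccountChangeEvent (after insert) {
    for (AccountChangeEvent event : Trigger.new) {
        EventBus.ChangeEventHeader header = event.ChangeEventHeader;
        // changeType is CREATE, UPDATE, DELETE, or UNDELETE;
        // recordIds lists the affected records.
        System.debug(header.changeType + ': ' + String.join(header.recordIds, ', '));
    }
}
```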
The One Question That Resolves 80% of Pattern Decisions
Who needs the response, and when?
If it's the user, right now, on the same screen: synchronous.
If it's a downstream process, timing flexible: async or batch.
If it's an external system that needs to mirror Salesforce: CDC.
Trade-offs the Documentation Leaves Out
| Pattern | Common blind spot |
|---|---|
| Synchronous | No native circuit breaker — an unstable API takes down the UX |
| Platform Events | Delivery order not guaranteed by default |
| Batch | Job failure doesn't auto-retry |
| CDC | 72-hour replay window — after that, the event is gone |
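The first row deserves a sketch, because it's the blind spot that hurts in production. A minimal circuit breaker can live in Platform Cache; this assumes an org cache partition named integration exists, and the key name and five-minute cool-off are illustrative:

```apex
public with sharing class CircuitBreaker {
    // 'local.integration' is an assumed org cache partition.
    private static final String KEY = 'local.integration.cpfServiceOpen';

    public static Boolean isOpen() {
        return Cache.Org.get(KEY) != null;
    }
    public static void recordFailure() {
        // Open the circuit for 5 minutes so callers fail fast
        // instead of stacking up 10-second timeouts.
        Cache.Org.put(KEY, true, 300);
    }
}
```

Wire it into the synchronous client from earlier: check isOpen() before the callout, and call recordFailure() in the CalloutException handler.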
What the Pattern Doesn't Solve
Picking the right pattern is necessary, but not sufficient. There are problems none of the four patterns solve on their own.
Eventual consistency is a business state, not just a technical one. If you choose async, product and business teams need to accept that there's a window where systems are out of sync. This seems obvious on paper and becomes a surprise in production when users see different data across systems.
No pattern replaces a well-defined API contract. Platform Events, CDC, or synchronous calls — if the external system changes its schema without notice, any implementation breaks. The pattern determines how the integration fails; the contract determines whether it fails.
Authentication and authorization sit outside the pattern. OAuth, certificates, Named Credentials — that's a separate layer that needs to be designed independently of whichever pattern you choose.
Historical reprocessing isn't free. With batch you can re-run. With CDC, after 72 hours the replay window closes, and platform events have the same 72-hour limit. If you need to reprocess a week's worth of data after a bug, the answer depends on the pattern, and sometimes there's no good answer.
Practical Takeaways
- If the response is required for the user to complete an action → synchronous, but design the timeout explicitly
- If volume is high or latency can be managed → Platform Events; invest in failure handling from the start
- If the data is reference data with slow change cadence → scheduled batch is sufficient and easier to operate
- If you need to keep an external system in sync with Salesforce → CDC, but verify the consumer supports streaming before committing to the architecture
In the next post I get into implementation: how to structure Apex code for each of these patterns without turning the org into a ticking time bomb.
Which pattern do you use most? Let me know on LinkedIn.