ForceTricks

Salesforce Integration: Implementing Without Creating Technical Debt

5 min read
Series: Salesforce Integration: From Basics to Advanced (Part 2 of 3)
  1. Salesforce Integration: Patterns That Actually Work
  2. Salesforce Integration: Implementing Without Creating Technical Debt
  3. Salesforce Integration: Monitoring and What I Learned Breaking Things

In the previous post I covered the four main patterns. Here I get into the code: not copy-paste recipes, but the structures that make the difference between an integration that lasts two years and one that becomes an incident in its third month.

The Structure I Use for HTTP Callouts

Every synchronous HTTP callout in Apex should have three layers:

// 1. Service — business logic, no HTTP
public with sharing class InventoryService {
    public static InventoryResponse getInventory(Id productId) {
        Product2 product = [
            SELECT Id, ExternalId__c
            FROM Product2
            WHERE Id = :productId
            WITH USER_MODE
        ];
        return InventoryClient.get(product.ExternalId__c);
    }
}

// 2. Client — pure HTTP, no business logic
public class InventoryException extends Exception {}

public without sharing class InventoryClient {
    public static InventoryResponse get(String externalId) {
        HttpRequest req = new HttpRequest();
        // Named Credential handles the base URL and authentication
        req.setEndpoint('callout:InventoryAPI/products/' + externalId);
        req.setMethod('GET');
        req.setTimeout(5000); // milliseconds; the platform default is 10s

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            throw new InventoryException('HTTP ' + res.getStatusCode());
        }
        return (InventoryResponse) JSON.deserialize(res.getBody(), InventoryResponse.class);
    }
}

// 3. DTO — response structure
public class InventoryResponse {
    public Integer quantity;
    public String unit;
    public Datetime updatedAt;
}

This separation seems obvious but rarely happens in practice. The real benefit: you can test InventoryService with a mock without touching HTTP, and you can swap out the InventoryClient implementation without touching business logic.
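To make the mock claim concrete, here's a minimal sketch of a test that stubs the HTTP layer with HttpCalloutMock. The mock class name and the JSON shape are my own, not from the post:

```apex
@isTest
private class InventoryServiceTest {
    // Hypothetical mock: always returns a well-formed 200 response
    private class InventoryMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody('{"quantity": 12, "unit": "EA", "updatedAt": "2024-01-15T10:00:00Z"}');
            return res;
        }
    }

    @isTest
    static void returnsParsedInventory() {
        Test.setMock(HttpCalloutMock.class, new InventoryMock());
        // ...insert a Product2 with ExternalId__c, then:
        // InventoryResponse resp = InventoryService.getInventory(productId);
        // System.assertEquals(12, resp.quantity);
    }
}
```

Because the mock intercepts at the Http boundary, the same test exercises the Service and Client layers together without ever leaving the org.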

Platform Events: What Nobody Talks About Regarding Order

Platform Events don't guarantee delivery order. If you publish three events for the same record in quick succession, the consumer may receive them out of order.

For most cases this doesn't matter. But if you're synchronizing state — especially status transitions — this breaks things.

The ordering problem with Platform Events doesn't show up in tests. It shows up in production, on a high-volume day, when two events arrive inverted and the record ends up in an impossible state.

The solution I use: include a Timestamp__c in the payload and ignore events older than the last processed one.

public with sharing class OrderEventHandler {
    public static void process(List<Order_Updated__e> events) {
        // One query for the sync records of every order in this batch
        Map<Id, OrderSync__c> lastProcessed = loadLastProcessed(events);

        for (Order_Updated__e evt : events) {
            OrderSync__c last = lastProcessed.get(evt.OrderId__c);
            if (last != null && evt.Timestamp__c <= last.LastTimestamp__c) {
                continue; // stale event, skip
            }
            processEvent(evt);
            // processEvent must also persist Timestamp__c as the new
            // LastTimestamp__c, or the next stale event slips through
        }
    }
}
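On the publishing side, the timestamp has to be set at publish time, not left for the consumer to infer. A minimal sketch (field names match the handler above; the surrounding context is my assumption):

```apex
// Publish an Order_Updated__e with the ordering timestamp set
Order_Updated__e evt = new Order_Updated__e(
    OrderId__c   = order.Id,
    Timestamp__c = System.now()  // consumer compares against LastTimestamp__c
);
Database.SaveResult sr = EventBus.publish(evt);
if (!sr.isSuccess()) {
    // publish failures are silent unless you check the SaveResult
    System.debug(LoggingLevel.ERROR, sr.getErrors());
}
```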

Retry: What Salesforce Does and What You Need to Do

Salesforce retains published events for 72 hours, and failed subscribers can be retried or resumed within that window. This sounds sufficient — but it isn't for every scenario.

The platform's retry doesn't distinguish between a transient failure (external API down for 2 minutes) and a permanent failure (invalid payload that will always fail). Without explicit handling, you can end up retrying an event until the window expires.

public with sharing class OrderEventHandler {
    private static final Integer MAX_RETRIES = 3;

    public static void process(List<Order_Updated__e> events) {
        List<IntegrationLog__c> logs = new List<IntegrationLog__c>();

        for (Order_Updated__e evt : events) {
            try {
                sendToExternalSystem(evt);
                logs.add(logSuccess(evt));
            } catch (InventoryException e) {
                // In a trigger context, getRetryCount can read
                // EventBus.TriggerContext.currentContext().retries
                Integer retries = getRetryCount(evt.OrderId__c);
                if (retries >= MAX_RETRIES) {
                    logs.add(logPermanentFailure(evt, e.getMessage()));
                    notifyTeam(evt, e);
                } else {
                    logs.add(logRetryAttempt(evt, retries + 1));
                    // An Apex trigger is only retried if it throws
                    // EventBus.RetryableException -- but throwing rolls back
                    // this transaction's DML, so persist the retry count
                    // somewhere that survives before rethrowing
                }
            }
        }
        if (!logs.isEmpty()) {
            insert logs;
        }
    }
}

Named Credentials and the Mistake That Always Happens at Go-Live

Every integration should run through Named Credentials in every org — not hardcoded endpoints in sandbox and Named Credentials only in production. The problem I see frequently: someone tests against a Named Credential configured in sandbox and forgets to set up the equivalent in production. The deploy goes out, the Named Credential isn't configured, and the integration explodes on first use.

Checklist before any integration go-live:

  • Named Credential created in production with the correct URL
  • Custom Setting or Custom Metadata with timeout parameters reviewed for production
  • Mock configured for tests (without this, @isTest + HTTP callout = error)
  • Callout permission granted to the integration user
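A smoke test from anonymous Apex right after deploy catches a missing or misconfigured Named Credential in seconds. A sketch — the /health path is illustrative, any cheap read-only endpoint works:

```apex
// Run in production anonymous Apex after deploy, before go-live
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:InventoryAPI/health');
req.setMethod('GET');
req.setTimeout(5000);

HttpResponse res = new Http().send(req);
// 200 = credential exists and auth works; 401/403 = credential misconfigured;
// a CalloutException here usually means the Named Credential doesn't exist
System.debug('Status: ' + res.getStatusCode());
```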

What Can Still Go Wrong

Even with the right structure in place, there are traps that surface after go-live.

Governor limits in unexpected contexts. The 100-callouts-per-transaction limit is the same everywhere, but what counts as a transaction changes: a Future method is one transaction, each Batch execute is its own, and a Batch class can't make callouts at all unless it declares Database.AllowsCallouts. If the same client gets reused across different execution contexts, the limit behavior changes, and the error that surfaces isn't obvious.
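A cheap defensive check in the client turns the limit failure into a readable error instead of an uncatchable LimitException mid-transaction (a sketch; Limits.getCallouts() and Limits.getLimitCallouts() are standard Apex, the exception class is from the earlier client):

```apex
// Fail fast with a clear message instead of 'Too many callouts'
if (Limits.getCallouts() >= Limits.getLimitCallouts()) {
    throw new InventoryException(
        'Callout limit reached in this transaction (' +
        Limits.getLimitCallouts() + '); move this work to a Queueable');
}
HttpResponse res = new Http().send(req);
```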

Mocks that hide real bugs. It's common to build a mock that always returns success to make tests pass. The problem: the mock doesn't test the client's behavior on failure, timeout, or malformed responses. Tests that only cover the happy path give false confidence.
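The fix is a mock that can also fail on demand, so each test decides how the "API" behaves. A minimal sketch (class and names are mine):

```apex
// Configurable mock: status and body injected per test
private class ConfigurableInventoryMock implements HttpCalloutMock {
    private Integer status;
    private String body;
    public ConfigurableInventoryMock(Integer status, String body) {
        this.status = status;
        this.body = body;
    }
    public HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setStatusCode(status);
        res.setBody(body);
        return res;
    }
}

// In a test: assert the client actually raises on a 500
Test.setMock(HttpCalloutMock.class,
    new ConfigurableInventoryMock(500, '{"error":"down"}'));
```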

Named Credential rotation without warning. If the external system's credentials rotate (which is good security practice), the Named Credential has to be updated manually. There are no native alerts for an expired credential; the integration simply starts failing with 401.

Silent deserialization of optional fields. JSON.deserialize in Apex doesn't throw for missing fields — they come back null. If a field you treat as required is absent in a new API version, the code won't break at the deserialize line; it breaks later, when you try to use the null value.
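A tiny illustration of the failure mode, using the InventoryResponse DTO from earlier:

```apex
// 'quantity' is missing from the payload; deserialize does NOT complain
String body = '{"unit": "EA", "updatedAt": "2024-01-15T10:00:00Z"}';
InventoryResponse resp =
    (InventoryResponse) JSON.deserialize(body, InventoryResponse.class);

System.assertEquals(null, resp.quantity);  // silently null
// The NullPointerException comes later, far from the deserialize call:
// Integer doubled = resp.quantity * 2;
```

If a field is genuinely required, validate it right after deserializing, where the API version mismatch is still visible in the stack trace.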

Practical Takeaways

  • If you're structuring a new integration → separate Service, Client, and DTO from the first commit; refactoring this later is more work than it looks
  • If you use Platform Events to synchronize state → implement ordering control with Timestamp__c before going to production
  • If Salesforce's automatic retry is your only resilience mechanism → add explicit retry counting to distinguish transient from permanent failures
  • If go-live is coming up → validate Named Credentials in production with a test callout before releasing to users

In the last post of the series I cover monitoring — because finding out your integration is failing via a customer phone call is not a strategy.


Have a different structure that works better for you? Let me know on LinkedIn.

Gabriel Cruz Ferreira


Salesforce Architect · 15x Certified · Road to CTA
