
API-First: How to Design Services Built for Integration

abemon | 11 min read

The API is the product

There’s a moment in every tech company’s life when someone says “we need an API.” Usually it happens when a client asks to integrate, a partner needs data, or the frontend team is fed up with the backend changing without notice. The API should have existed from day one.

API-first isn’t a trend. It’s an architecture decision that says: the API contract gets designed before the implementation. Not the other way around. Not as a byproduct of code. Not as a layer bolted on at the end. Contract first, implementation second.

Why does it matter? Because when the API is an afterthought, it reflects the system’s internal quirks. Field names that only make sense if you know the data model. Errors returning stack traces in production. Inconsistent pagination across endpoints. Versions breaking without warning. We’ve encountered all of these in real ERP integrations and third-party systems, and the cost in debugging hours and frustration is enormous.

Contract-first with OpenAPI

OpenAPI (formerly Swagger) is the de facto standard for describing REST APIs. An OpenAPI file is a formal contract defining endpoints, parameters, request/response schemas, error codes, and authentication. Writing it before implementing yields concrete benefits:

Work parallelization. With the contract defined, frontend and backend work simultaneously. Frontend generates mocks from the spec. Backend implements against the contract. QA writes tests against the contract. Nobody waits for anyone.

Code generation. Tools like openapi-generator produce client SDKs in 40+ languages, server stubs, and TypeScript types directly from the spec. The code isn’t perfect, but it eliminates manual writing of DTOs, validations, and HTTP clients.

Automatic validation. Middleware like express-openapi-validator (Node.js) or connexion (Python) validates requests and responses against the spec at runtime. If an endpoint returns a field not in the contract, it fails. This catches spec-code inconsistencies before they reach production.

Structure of a good OpenAPI spec

openapi: 3.1.0
info:
  title: Shipments API
  version: 2.0.0
  description: API for managing shipment lifecycle
servers:
  - url: https://api.example.com/v2
    description: Production
  - url: https://staging-api.example.com/v2
    description: Staging
paths:
  /shipments:
    get:
      operationId: listShipments
      summary: List shipments with pagination and filters
      parameters:
        - $ref: '#/components/parameters/PageCursor'
        - $ref: '#/components/parameters/PageSize'
        - name: status
          in: query
          schema:
            $ref: '#/components/schemas/ShipmentStatus'
      responses:
        '200':
          description: Paginated list of shipments
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ShipmentList'
        '401':
          $ref: '#/components/responses/Unauthorized'

Three things matter here:

  1. operationId on every endpoint. It’s what the code generator uses to name methods. Without it, generated names are unreadable.
  2. Reusable components ($ref). Pagination, common errors, shared schemas. Define once, use everywhere. Prevents the inconsistency that creeps in when you copy-paste schemas.
  3. Explicit servers. The spec should work in Swagger UI without modifications. An external developer should be able to point to staging and test immediately.
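
The `$ref` targets above live under `components`. A minimal sketch of what that section might look like (the names mirror the spec above, but the exact schemas are assumptions for illustration):

```yaml
components:
  parameters:
    PageCursor:
      name: cursor
      in: query
      description: Opaque cursor for the next page (omit for the first page)
      schema:
        type: string
    PageSize:
      name: limit
      in: query
      description: Items per page
      schema:
        type: integer
        minimum: 1
        maximum: 100
        default: 20
  schemas:
    ShipmentStatus:
      type: string
      enum: [pending, in_transit, delivered, cancelled]
  responses:
    Unauthorized:
      description: Missing or invalid credentials
      content:
        application/problem+json:
          schema:
            type: object
            properties:
              type: { type: string }
              title: { type: string }
              status: { type: integer }
              detail: { type: string }
```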

Versioning: the decision you’ll pay for years

API versioning is one of those decisions that seems minor at first and becomes a monumental headache later. Three main strategies:

URL versioning (/v1/shipments, /v2/shipments): The most explicit, and what we recommend for public APIs. The consumer knows exactly which version they’re using. Routing is trivial. The downside: you maintain multiple deployed versions.

Header versioning (Accept: application/vnd.api+json;version=2): Conceptually cleaner, but harder to discover and test with simple tools. Works well for internal APIs with controlled consumers.

No explicit versioning (compatible evolution): You only make backwards-compatible changes. Never break. Works if you have iron discipline and few consumers. Becomes impractical when you need structural changes.

Our recommendation for most teams: URL versioning with a clear deprecation policy.

Deprecation policy

A deprecation policy is a social contract with your consumers. Ours is straightforward:

  • Minimum 6 months between deprecation announcement and version removal.
  • Announcement via Sunset header (RFC 8594) on all responses from the deprecated version.
  • Announcement via email to registered consumers.
  • Dashboard with usage metrics per version to know when removal is safe.

Without a deprecation policy, you end up maintaining phantom versions that nobody uses (or that one critical consumer uses without your knowledge).
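
The Sunset announcement can be automated instead of remembered. A framework-agnostic sketch (the version map and its dates are invented for illustration; RFC 8594 requires the header value to be an HTTP-date):

```typescript
// Map of deprecated API versions to their removal dates (illustrative values).
const deprecatedVersions: Record<string, Date> = {
  v1: new Date("2025-12-31T00:00:00Z"),
};

// Returns the headers to attach to every response from a deprecated version.
function deprecationHeaders(version: string): Record<string, string> {
  const sunset = deprecatedVersions[version];
  if (!sunset) return {}; // version is not deprecated: no extra headers
  return {
    // RFC 8594 Sunset header: the date after which the version may disappear
    Sunset: sunset.toUTCString(),
    // Point integrators at the migration guide (hypothetical URL)
    Link: '<https://api.example.com/docs/migration>; rel="sunset"',
  };
}
```

Attaching this in a single middleware guarantees no deprecated endpoint is ever missing its announcement.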

Errors: the forgotten design

Error handling is where most APIs fail. Not technically (they return a 500 and call it a day), but in the usefulness of information they provide to consumers.

Standard error structure

Adopting a standard error format like RFC 7807 (Problem Details for HTTP APIs, since revised as RFC 9457 with the same structure) saves headaches:

{
  "type": "https://api.example.com/errors/validation-failed",
  "title": "Validation Failed",
  "status": 422,
  "detail": "The shipment weight exceeds the maximum for the selected carrier.",
  "instance": "/shipments/abc-123",
  "errors": [
    {
      "field": "weight",
      "message": "Must be less than 30000 grams for carrier DHL Express",
      "value": 35000
    }
  ]
}

Key points:

  • type is a URI that uniquely identifies the error type. It lets consumers program type-specific logic without parsing text messages.
  • detail is human-readable. What the developer will see in their console when something fails.
  • errors (extension) provides field-level detail for validation errors. The frontend can show the error next to the correct field without guessing.
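
In TypeScript, the shape can be pinned down with a type so every endpoint returns the same structure. The field names follow RFC 7807; the `errors` extension and the helper function are our own convention, not part of the standard:

```typescript
// Field-level detail for validation errors (custom extension).
interface FieldError {
  field: string;
  message: string;
  value?: unknown;
}

// RFC 7807 Problem Details, plus the `errors` extension.
interface ProblemDetails {
  type: string;      // URI identifying the error class
  title: string;     // short, human-readable summary
  status: number;    // HTTP status code, duplicated in the body
  detail?: string;   // instance-specific explanation
  instance?: string; // URI of the resource that caused the error
  errors?: FieldError[];
}

// Builds a 422 validation problem from a list of field errors.
function validationProblem(instance: string, errors: FieldError[]): ProblemDetails {
  return {
    type: "https://api.example.com/errors/validation-failed",
    title: "Validation Failed",
    status: 422,
    detail: errors.map((e) => `${e.field}: ${e.message}`).join("; "),
    instance,
    errors,
  };
}
```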

HTTP codes: using them correctly

It sounds basic, but 30-40% of the APIs we integrate with misuse HTTP status codes:

| Code | Actual meaning | Common mistake |
|------|----------------|----------------|
| 400 | Malformed request (invalid JSON, missing field) | Using it for business logic |
| 401 | Not authenticated | Confusing it with 403 |
| 403 | Authenticated but not authorized | Confusing it with 401 |
| 404 | Resource doesn’t exist | Returning it for logic errors |
| 409 | Conflict (resource already exists, invalid state) | Not using it |
| 422 | Semantic validation failed (data is valid JSON but violates business rules) | Using 400 for everything |
| 429 | Rate limit exceeded | Not implementing rate limiting |
| 500 | Internal server error | Returning it with stack traces |

If you take away one thing from this section: 400 means “I can’t parse your request” and 422 means “I can parse it but it doesn’t make business sense.” The distinction matters because consumers can retry a 500 but not a 422.

Pagination

Three patterns, one clear winner:

Offset-based (?page=3&per_page=20): Simple, intuitive, and broken. If records are inserted or deleted while the consumer paginates, results get skipped or duplicated. For frequently changing data (orders, events), it’s unacceptable.

Cursor-based (?cursor=eyJpZCI6MTIzfQ&limit=20): Consistent regardless of inserts/deletes. The cursor is an opaque token pointing to a position in the dataset. Each page includes the cursor for the next. This is the pattern Stripe, Shopify, and every serious API uses.

Keyset-based (?after_id=123&limit=20): Similar to cursor but with a visible key. Useful when consumers want to start from a known point.

Our recommendation: cursor-based for public APIs, with a standard response envelope:

{
  "data": [...],
  "pagination": {
    "next_cursor": "eyJpZCI6MTIzfQ",
    "has_more": true
  }
}
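
An opaque cursor is often nothing more exotic than a base64-encoded pointer. A minimal sketch (the `{ id }` payload is an assumption; clients must treat the token as opaque, and production cursors should be signed so they can’t be forged):

```typescript
// Encode a position (here, the last id seen) as an opaque base64url token.
function encodeCursor(position: { id: number }): string {
  return Buffer.from(JSON.stringify(position)).toString("base64url");
}

// Decode a token back into a position; returns null for malformed input.
function decodeCursor(token: string): { id: number } | null {
  try {
    const parsed = JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
    return typeof parsed.id === "number" ? parsed : null;
  } catch {
    return null;
  }
}
```

The query then becomes `WHERE id > :cursor_id ORDER BY id LIMIT :limit`, which stays consistent no matter how many rows are inserted or deleted mid-pagination.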

Authentication and rate limiting

API keys for server-to-server. Simple, functional. The key goes in the Authorization: Bearer sk_live_... header. Never in the URL (it gets logged by proxies and CDNs). Prefixing keys with the environment (sk_live_, sk_test_) prevents the classic mistake of using production in development.

OAuth 2.0 for delegated access. When a user authorizes an app to access on their behalf. More complex, but necessary for integrations where the end user matters.

Rate limiting explicit in headers. Always return X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset. Consumers can adapt their behavior without trial-and-error. When the limit is exceeded, return 429 with a Retry-After header.
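
Server-side, a fixed-window counter is enough to produce those headers. A toy sketch (in-memory and single-process; a real deployment would back this with Redis or similar shared state):

```typescript
// Fixed-window rate limiter that also reports the standard X-RateLimit-* headers.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; used: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Checks one request for `key`; returns the verdict plus header values.
  check(key: string, now: number = Date.now()) {
    const entry = this.counts.get(key);
    // Reuse the current window if it hasn't expired, otherwise start a new one.
    const windowStart =
      entry && now - entry.windowStart < this.windowMs ? entry.windowStart : now;
    const used = entry && windowStart === entry.windowStart ? entry.used : 0;
    const allowed = used < this.limit;
    this.counts.set(key, { windowStart, used: allowed ? used + 1 : used });
    return {
      allowed, // if false, respond 429 with Retry-After
      headers: {
        "X-RateLimit-Limit": String(this.limit),
        "X-RateLimit-Remaining": String(Math.max(0, this.limit - used - (allowed ? 1 : 0))),
        "X-RateLimit-Reset": String(Math.ceil((windowStart + this.windowMs) / 1000)),
      },
    };
  }
}
```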

Documentation as code

API documentation that lives outside the code gets stale. Always. Without exception. The only reliable documentation is generated from or validated against the OpenAPI spec.

Tools that work

  • Redocly: Generates static documentation from OpenAPI. Clean, customizable, CI/CD-friendly.
  • Scalar: Modern alternative to Swagger UI. Interactive API reference with executable examples.
  • Mintlify/Fern: Full documentation (guides + API reference) generated from OpenAPI + markdown. Ideal for public developer portals.

The minimum viable documentation

Each endpoint needs:

  1. Description of what it does (not how it’s implemented).
  2. Complete request example with realistic data (not "string" or "example").
  3. Response examples for the success case and for each error type.
  4. Code example in at least one language (curl + the most common language among your consumers).

What separates good documentation from mediocre documentation isn’t quantity. It’s the quality of examples. An example with realistic data ("tracking_number": "1Z999AA10123456784") is worth more than ten pages of abstract description.

Contract testing

The OpenAPI contract is useless if it’s not validated. Two levels:

CI validation: Tools like Spectral lint the spec against style rules. It’s ESLint for APIs: camelCase names, required descriptions, reused schemas. Configuring Spectral with a corporate ruleset standardizes design across teams.
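
A corporate ruleset is just a `.spectral.yaml` that extends the built-in OpenAPI rules. A hedged sketch (the first three rule names are standard Spectral core rules; the custom rule is our own illustration, and exact syntax may vary across Spectral versions):

```yaml
extends: ["spectral:oas"]
rules:
  operation-operationId: error     # every operation must have an operationId
  operation-description: warn      # push teams toward documented endpoints
  oas3-unused-component: error     # dead schemas rot; delete or use them
  # Custom rule (illustrative): path segments must be kebab-case
  paths-kebab-case:
    description: Path segments should be kebab-case
    given: $.paths[*]~
    then:
      function: pattern
      functionOptions:
        match: "^(\\/[a-z0-9-]+|\\/\\{[a-zA-Z]+\\})+$"
```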

Contract testing: Tools like Prism (from Stoplight) or Schemathesis generate random requests from the spec and verify the server responds according to the contract. Pact is the alternative for consumer-driven contracts between services.

Integrating both in your CI pipeline means no contract-breaking change reaches production. That’s what separates a professionally designed API from one that “works on my machine.”

API-first in practice: workflow

  1. Spec design in an OpenAPI file. Reviewed in a pull request like code. Consumers participate in the review.
  2. Mock generation with Prism or Mockoon. Frontend and external consumers start integrating against mocks.
  3. Server implementation against the spec. Automatic runtime validation.
  4. Contract testing in CI. Spectral for linting, Schemathesis for fuzzing.
  5. Documentation generated automatically. Published on every merge to main.
  6. Monitoring of real usage. Per-endpoint metrics (latency, error rate, volume) to detect what’s used, what breaks, and what to deprecate.
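
Steps 1 through 5 can be wired into CI. A hedged GitHub Actions sketch (job names and file paths are assumptions, and exact CLI flags vary by tool version):

```yaml
name: api-contract
on: [pull_request]
jobs:
  contract:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Lint the spec against the corporate ruleset
      - run: npx @stoplight/spectral-cli lint openapi.yaml
      # Start the implementation, then fuzz it against the contract
      - run: npm ci && npm run start:test &
      - run: pipx run schemathesis run openapi.yaml --base-url http://localhost:3000
      # Build the docs that get published on merge to main
      - run: npx @redocly/cli build-docs openapi.yaml
```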

The workflow feels like more effort upfront. It is. But every hour invested in designing the contract well saves ten hours of debugging, alignment meetings, and integrator support. In an ecosystem with 5+ consumers, the difference is dramatic.

Final thought

API-first is a long-term bet. It costs more upfront. It demands discipline. It forces you to think about consumers before writing the first line of code. But organizations that practice it build systems that scale not just in traffic, but in the number of teams and partners that can integrate without everything falling apart. And in a world where integration is the norm, that capability is worth more than any individual feature.

About the author


abemon engineering

Engineering team

Multidisciplinary engineering, data and AI team headquartered in the Canary Islands. We build, deploy and operate custom software solutions for companies at any scale.