Idempotency

This guide explains how to implement idempotent operations with FLUID Network's API to prevent duplicate transactions and ensure reliable processing in distributed systems.

What is Idempotency?

Idempotency ensures that making the same request multiple times has the same effect as making it once. This is critical for payment processing where:

  • Network failures may cause retries
  • Timeouts can lead to duplicate requests
  • Distributed systems may process the same message twice
  • User behavior (double-clicking, multiple taps) can trigger duplicates

Without Idempotency: Multiple requests → Multiple transactions → Duplicate charges

With Idempotency: Multiple requests → Single transaction → Safe retries

Idempotency Keys

partner_reference Field

FLUID Network uses the partner_reference field as the idempotency key for all transaction operations:

json
{
  "amount": 5000,
  "currency": "GHS",
  "phone_number": "+233200123456",
  "partner_reference": "ORDER-2024-001", // Idempotency key
  "description": "Payment for Order #2024-001",
  "callback_url": "https://your-domain.com/webhooks/fluid"
}

Uniqueness Constraint

The partner_reference is unique per payment partner:

Unique Index: (payment_partner_id, partner_reference)

This means:

  • ✅ Same reference by different partners: Allowed
  • ❌ Same reference by same partner: Rejected with error code 3003

Example: Duplicate Prevention
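
For illustration, here is a minimal sketch of the duplicate-prevention flow from the client side. The base URL, endpoint path, and auth scheme are placeholders (not taken from this guide); only the request body and the expected status codes follow the sections above.

typescript
// Placeholders: substitute your real endpoint and credentials.
const BASE_URL = "https://api.example.com";          // hypothetical base URL
const TOKEN = process.env.FLUID_TOKEN ?? "";

async function initiateDebit(body: object): Promise<Response> {
  return fetch(`${BASE_URL}/debits`, {               // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${TOKEN}` },
    body: JSON.stringify(body),
  });
}

const payment = {
  amount: 5000,
  currency: "GHS",
  phone_number: "+233200123456",
  partner_reference: "ORDER-2024-001",               // same idempotency key both times
  description: "Payment for Order #2024-001",
};

const first = await initiateDebit(payment);          // expected: 200 OK, new transaction
const second = await initiateDebit(payment);         // expected: 409 Conflict, error code 3003
console.log(first.status, second.status);            // 200 409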

Generating Idempotency Keys

Best Practices

  1. Use Meaningful Identifiers: Base keys on business entities (order IDs, invoice numbers)
  2. Ensure Uniqueness: Keys must be unique within your system
  3. Make Them Deterministic: Same operation should generate same key
  4. Include Context: Add relevant context to avoid collisions

Pattern 1: Order-Based Reference (RECOMMENDED)

  • Format: ORDER-{orderId}
  • Use case: E-commerce order payments
  • Example: ORDER-2024-001
  • Benefits: Clear traceability, business context preserved

Pattern 2: Invoice-Based Reference

  • Format: INV-{invoiceNumber}
  • Use case: Invoice payments, billing systems
  • Example: INV-2024-Q1-12345
  • Benefits: Links to existing invoice records

Pattern 3: Subscription Billing Reference

  • Format: SUB-{subscriptionId}-{billingPeriod}
  • Use case: Recurring subscription charges
  • Example: SUB-123-2024-01
  • Benefits: Prevents duplicate billing for same period

Pattern 4: UUID-Based Reference

  • Format: {prefix}-{timestamp}-{uuid}
  • Use case: One-off payments without business entity
  • Example: PAYMENT-1672531200000-a1b2c3d4
  • Benefits: Guaranteed uniqueness, good for ad-hoc transactions

Pattern 5: Composite Key Reference

  • Format: {prefix}-{customerId}-{sessionId}
  • Use case: Checkout flows, multi-step processes
  • Example: CHECKOUT-CUST456-SESSION789
  • Benefits: Links customer and session context
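
One way to implement these patterns is a small set of reference builders. The helper names below are illustrative only; they are not part of any FLUID SDK.

typescript
import { randomUUID } from "node:crypto";

// Pattern 1: order-based (recommended)
const orderReference = (orderId: string) => `ORDER-${orderId}`;

// Pattern 2: invoice-based
const invoiceReference = (invoiceNumber: string) => `INV-${invoiceNumber}`;

// Pattern 3: subscription billing (period-scoped)
const subscriptionReference = (subscriptionId: string, billingPeriod: string) =>
  `SUB-${subscriptionId}-${billingPeriod}`;

// Pattern 4: UUID-based, for one-off payments with no business entity.
// Not deterministic: generate it once, persist it, and reuse the stored value on retries.
const adHocReference = (prefix = "PAYMENT") =>
  `${prefix}-${Date.now()}-${randomUUID().slice(0, 8)}`;

// Pattern 5: composite key for checkout flows
const checkoutReference = (customerId: string, sessionId: string) =>
  `CHECKOUT-${customerId}-${sessionId}`;

console.log(orderReference("2024-001"));              // ORDER-2024-001
console.log(subscriptionReference("123", "2024-01")); // SUB-123-2024-01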

Anti-Patterns

Don't use these approaches:

Random Values - Not Idempotent

  • Problem: Generates different value each time
  • Example: Math.random().toString(36).substring(7)
  • Why bad: Retries will create duplicate transactions

Timestamp Alone - Not Unique

  • Problem: Multiple requests at same millisecond
  • Example: Date.now().toString()
  • Why bad: Race conditions create duplicates

Sequential Counters - Race Conditions

  • Problem: Distributed systems increment simultaneously
  • Example: counter++ or TXN-${counter}
  • Why bad: Multiple servers create same reference

User-Provided Values - Security Risk

  • Problem: Users can manipulate references
  • Example: Using req.body.user_reference directly
  • Why bad: Enables replay attacks, reference collisions

Handling Duplicate Requests

Error Response

When you send a duplicate partner_reference, you'll receive a 409 Conflict response:

json
{
  "error": {
    "code": 3003,
    "message": "Duplicate transaction reference",
    "category": "transactions",
    "details": {
      "partner_reference": "ORDER-2024-001",
      "existing_transaction_id": "txn_abc123",
      "existing_status": "completed"
    }
  }
}

Handling the Error

Error Handling Strategy:

  1. Wrap transaction initiation in try-catch block
  2. Check error response status code (409) and error code (3003)
  3. Extract existing_transaction_id from error details
  4. Fetch the existing transaction using transaction ID
  5. Return the existing transaction as if it were just created
  6. Log duplicate detection for monitoring
  7. For other errors, rethrow to be handled by outer error handler

Implementation Logic:

  • On successful creation: Return new transaction with created: true flag
  • On duplicate error (3003): Fetch and return existing transaction with created: false flag
  • On other errors: Propagate error to caller

Key Points:

  • Treat duplicate as success scenario, not failure
  • Always fetch latest state of existing transaction
  • Preserve transaction ID for reconciliation
  • Include metadata to distinguish new vs existing transaction
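
A minimal sketch of this strategy. The FluidClient methods and the error shape are assumptions modeled on the 409 response above, not an official SDK.

typescript
// Assumed shapes; adapt to your HTTP client or SDK.
type Transaction = { id: string; status: string };
type FluidClient = {
  initiateDebit(body: object): Promise<Transaction>;
  getTransaction(id: string): Promise<Transaction>;
};
type FluidError = { status?: number; code?: number; details?: { existing_transaction_id?: string } };

async function createTransactionIdempotent(
  fluid: FluidClient,
  body: object
): Promise<{ transaction: Transaction; created: boolean }> {
  try {
    // Happy path: a brand-new transaction.
    return { transaction: await fluid.initiateDebit(body), created: true };
  } catch (err) {
    const e = err as FluidError;
    // 409 + code 3003: duplicate reference. Treat as success and return the existing transaction.
    if (e.status === 409 && e.code === 3003 && e.details?.existing_transaction_id) {
      console.warn("Duplicate reference detected:", e.details.existing_transaction_id);
      const transaction = await fluid.getTransaction(e.details.existing_transaction_id);
      return { transaction, created: false };
    }
    throw err; // all other errors: propagate to the outer handler
  }
}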

Idempotent Retry Logic

Implementing Safe Retries

Combine idempotency with retry logic for resilient transaction processing:

Service Architecture:

  • Maximum retry attempts limit (e.g., 3 retries)
  • Exponential backoff delay calculation: 2^attempt × 1000ms
  • Retryable error detection logic
  • Duplicate error handling (3003)

Retry Flow:

  1. Attempt transaction initiation
  2. On success: Return result with created: true
  3. On duplicate error (3003): Fetch existing transaction, return with created: false
  4. On retryable error: Calculate exponential backoff delay
  5. Wait for delay duration (1s, 2s, 4s progression)
  6. Retry up to max attempts
  7. On non-retryable error or max retries: Throw error

Retryable Error Codes:

  • 1500 - Internal server error
  • 1503 - Service unavailable
  • 2001, 2002 - Connector errors
  • 2408, 2500, 2502 - Upstream provider errors
  • Network failures (no status code)

Non-Retryable Errors:

  • Validation errors (4xx except 409)
  • Authentication errors (401, 403)
  • Business logic errors (3xxx except 3003)

Benefits:

  • Handles transient failures automatically
  • Respects idempotency on duplicates
  • Progressive backoff reduces system load
  • Clear success/duplicate distinction in response
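
A sketch of this retry flow, reusing the same assumed client and error shapes as the previous example:

typescript
// Same assumed shapes as the previous sketch.
type Transaction = { id: string; status: string };
type FluidClient = {
  initiateDebit(body: object): Promise<Transaction>;
  getTransaction(id: string): Promise<Transaction>;
};
type FluidError = { status?: number; code?: number; details?: { existing_transaction_id?: string } };

const RETRYABLE_CODES = new Set([1500, 1503, 2001, 2002, 2408, 2500, 2502]);
const MAX_ATTEMPTS = 3;
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function initiateWithRetry(
  fluid: FluidClient,
  body: object
): Promise<{ transaction: Transaction; created: boolean }> {
  for (let attempt = 0; ; attempt++) {
    try {
      return { transaction: await fluid.initiateDebit(body), created: true };
    } catch (err) {
      const e = err as FluidError;
      // Duplicate (3003): an earlier attempt already went through; fetch and return it.
      if (e.code === 3003 && e.details?.existing_transaction_id) {
        const transaction = await fluid.getTransaction(e.details.existing_transaction_id);
        return { transaction, created: false };
      }
      // Retry only transient failures (network errors or retryable codes), up to the attempt limit.
      const retryable = e.status === undefined || RETRYABLE_CODES.has(e.code ?? -1);
      if (!retryable || attempt >= MAX_ATTEMPTS - 1) throw err;
      await sleep(2 ** attempt * 1000); // exponential backoff: 1s, 2s, 4s
    }
  }
}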

Webhook Idempotency

Deduplicating Webhook Events

Webhooks may be delivered multiple times. Use the event_id to deduplicate:

Implementation

Webhook Handler Logic:

  1. Extract X-Fluid-Signature header from request
  2. Verify HMAC signature using webhook secret
  3. Parse webhook payload to extract event_id
  4. Check if event_id exists in processed events store
  5. If exists: Return 200 OK immediately (duplicate, skip processing)
  6. If new: Add event_id to processed events store
  7. Process webhook event based on event_type
  8. On processing error: Remove event_id from store (allow retry)
  9. On success: Keep event_id in store permanently
  10. Return 200 OK to acknowledge receipt

Storage Options:

  • In-Memory Set/Map: Fast, but lost on restart (dev/testing only)
  • Redis: Persistent, fast lookups, TTL support (recommended for production)
  • Database: Most reliable, supports queries, use unique constraint on event_id

Error Handling:

  • Signature verification failure: Return 401 Unauthorized (no processing)
  • Duplicate event: Return 200 OK with duplicate: true flag
  • Processing error: Return 500 Internal Server Error, remove event_id (enables retry)
  • Success: Return 200 OK with received: true
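
A sketch of this handler using Express and Redis (ioredis). The X-Fluid-Signature header and event_id field follow this guide; the exact signature encoding (hex HMAC-SHA256 of the raw body) and the webhook secret variable are assumptions to check against your webhook configuration.

typescript
import crypto from "node:crypto";
import express from "express";
import Redis from "ioredis";

const app = express();
const redis = new Redis();                                     // processed-events store
const WEBHOOK_SECRET = process.env.FLUID_WEBHOOK_SECRET ?? ""; // assumed env var

// express.raw() preserves the exact bytes so the HMAC matches what was signed.
app.post("/webhooks/fluid", express.raw({ type: "application/json" }), async (req, res) => {
  // Steps 1-2: verify the signature before doing anything else.
  const signature = Buffer.from(req.header("X-Fluid-Signature") ?? "");
  const expected = Buffer.from(
    crypto.createHmac("sha256", WEBHOOK_SECRET).update(req.body).digest("hex")
  );
  if (signature.length !== expected.length || !crypto.timingSafeEqual(signature, expected)) {
    return res.status(401).json({ error: "invalid signature" });
  }

  // Steps 3-5: deduplicate on event_id. SET ... NX returns null if the key already exists.
  // The 7-day TTL is a retention choice; omit it to keep event_ids permanently.
  const event = JSON.parse(req.body.toString());
  const isNew = await redis.set(`webhook:${event.event_id}`, "1", "EX", 7 * 24 * 3600, "NX");
  if (!isNew) return res.status(200).json({ received: true, duplicate: true });

  try {
    await handleEvent(event);                         // steps 6-7: dispatch on event_type here
    return res.status(200).json({ received: true });
  } catch {
    await redis.del(`webhook:${event.event_id}`);     // step 8: allow the provider's retry
    return res.status(500).json({ error: "processing failed" });
  }
});

async function handleEvent(event: { event_type: string }): Promise<void> {
  // Application-specific processing (update orders, notify users, etc.).
}

app.listen(3000);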

Database-Backed Deduplication

For production systems, use database constraints for reliable deduplication:

Database Schema:

sql
-- Create webhook_events table with unique constraint
CREATE TABLE webhook_events (
  id SERIAL PRIMARY KEY,
  event_id VARCHAR(255) NOT NULL UNIQUE,
  event_type VARCHAR(100) NOT NULL,
  transaction_id VARCHAR(255),
  payload JSONB NOT NULL,
  processed_at TIMESTAMP NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

-- event_id is already indexed by its UNIQUE constraint;
-- add an index for looking up events by transaction
CREATE INDEX idx_webhook_events_transaction_id ON webhook_events(transaction_id);

Implementation Strategy:

  1. Attempt to insert event into webhook_events table
  2. Unique constraint on event_id prevents duplicate inserts
  3. On successful insert: Event is new, proceed with processing
  4. On unique constraint violation (PostgreSQL error 23505): Event already processed, skip
  5. Store full event payload in JSONB column for audit trail
  6. The unique constraint on event_id also serves as the index for fast duplicate checks
  7. Index transaction_id for querying events by transaction

Error Handling:

  • Success: New event inserted, process event logic
  • Duplicate (error code 23505): Event already exists, return success without processing
  • Other errors: Database or processing failure, propagate error

Benefits:

  • Atomic duplicate detection (no race conditions)
  • Survives application restarts
  • Provides audit trail of all events
  • Fast lookups with indexes
  • JSONB column allows flexible querying
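
A sketch of the insert-first approach with node-postgres (pg). Table and column names match the schema above; connection settings are assumed to come from the standard PG* environment variables.

typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings from PG* environment variables

// Returns true if the event is new (caller should process it), false if it was a duplicate.
async function recordWebhookEvent(event: {
  event_id: string;
  event_type: string;
  transaction_id?: string;
  payload: unknown;
}): Promise<boolean> {
  try {
    await pool.query(
      `INSERT INTO webhook_events (event_id, event_type, transaction_id, payload, processed_at)
       VALUES ($1, $2, $3, $4, NOW())`,
      [event.event_id, event.event_type, event.transaction_id ?? null, JSON.stringify(event.payload)]
    );
    return true; // insert succeeded: this event_id has not been seen before
  } catch (err) {
    // 23505 = unique_violation on event_id: already processed, skip without error.
    if ((err as { code?: string }).code === "23505") return false;
    throw err; // anything else is a real database failure
  }
}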

Idempotency in Distributed Systems

Handling Race Conditions

When multiple servers process the same request simultaneously:

Solution: Database Constraints

Rely on database-level unique constraints to prevent race conditions:

Implementation Strategy:

  1. Define unique constraint at database level: (payment_partner_id, partner_reference)
  2. Attempt to insert transaction with partner reference
  3. Database atomically checks uniqueness before insert
  4. On success: Transaction created, return with created: true
  5. On unique constraint violation (PostgreSQL error 23505): Race condition detected
  6. Fetch existing transaction using partner_id and partner_reference
  7. Return existing transaction with created: false

Key Points:

  • Database enforces uniqueness atomically (no application-level race conditions)
  • Constraint prevents duplicate inserts even under concurrent load
  • Application handles constraint violation gracefully
  • Fetch-after-violation pattern ensures correct transaction returned

Benefits:

  • Atomic uniqueness check (no time-of-check-time-of-use race)
  • Works across distributed application servers
  • No application-level locking needed
  • Database handles all concurrency control
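
The same insert-then-catch pattern applied to transaction creation. This is a hedged sketch against a hypothetical local transactions table that mirrors the (payment_partner_id, partner_reference) unique index described above.

typescript
import { Pool } from "pg";

const pool = new Pool();

// Assumes: CREATE UNIQUE INDEX ... ON transactions (payment_partner_id, partner_reference)
async function recordTransaction(partnerId: string, partnerReference: string, amount: number) {
  try {
    const { rows } = await pool.query(
      `INSERT INTO transactions (payment_partner_id, partner_reference, amount)
       VALUES ($1, $2, $3) RETURNING *`,
      [partnerId, partnerReference, amount]
    );
    return { transaction: rows[0], created: true };   // we won the race
  } catch (err) {
    if ((err as { code?: string }).code !== "23505") throw err;
    // Another request inserted the same reference first: fetch and return that row.
    const { rows } = await pool.query(
      `SELECT * FROM transactions
        WHERE payment_partner_id = $1 AND partner_reference = $2`,
      [partnerId, partnerReference]
    );
    return { transaction: rows[0], created: false };
  }
}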

Pessimistic Locking

For critical sections requiring guaranteed uniqueness, use database row locks:

Locking Strategy:

  1. Start a database transaction (choose an isolation level appropriate to your workload)
  2. Lock the related record (e.g., order) using SELECT FOR UPDATE
  3. Check if payment already initiated on locked record
  4. If already initiated: Return existing transaction_id
  5. If not initiated: Call FLUID API to initiate payment
  6. Update locked record with transaction_id and payment flag
  7. Commit transaction (releases lock)

Lock Types:

  • FOR UPDATE: Exclusive lock, blocks other transactions
  • FOR UPDATE NOWAIT: Fail immediately if locked (don't wait)
  • FOR UPDATE SKIP LOCKED: Skip locked rows (useful for queues)

Benefits:

  • Prevents duplicate payment initiation for same order
  • Serializes concurrent attempts on same record
  • Ensures consistency between order state and API calls
  • Rollback on API failure maintains data integrity

Trade-offs:

  • Reduced concurrency (locks serialize access)
  • Potential for deadlocks with multiple locks
  • Longer transaction duration
  • Use only for critical operations requiring strict serialization
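
A sketch of this locking flow with node-postgres. The orders table columns and the debit-initiation helper are assumptions; only the SELECT ... FOR UPDATE mechanics are the point here.

typescript
import { Pool } from "pg";

const pool = new Pool();

async function payOrderOnce(orderId: string): Promise<string> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Lock the order row; concurrent attempts on the same order block here until COMMIT.
    const { rows } = await client.query(
      "SELECT id, payment_initiated, transaction_id FROM orders WHERE id = $1 FOR UPDATE",
      [orderId]
    );
    const order = rows[0];
    if (!order) throw new Error(`Order ${orderId} not found`);

    if (order.payment_initiated) {
      await client.query("COMMIT");
      return order.transaction_id;                   // already initiated: reuse the existing id
    }

    const transactionId = await initiateDebitForOrder(order); // assumed FLUID API helper
    await client.query(
      "UPDATE orders SET payment_initiated = true, transaction_id = $2 WHERE id = $1",
      [orderId, transactionId]
    );
    await client.query("COMMIT");                    // releases the lock
    return transactionId;
  } catch (err) {
    await client.query("ROLLBACK");                  // API or DB failure: leave the order untouched
    throw err;
  } finally {
    client.release();
  }
}

async function initiateDebitForOrder(order: { id: string }): Promise<string> {
  return "txn_abc123"; // placeholder for the real debit initiation call
}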

Testing Idempotency

Unit Tests

Test Cases for Idempotent Transaction Creation:

Test 1: Return Existing Transaction on Duplicate Reference

  • Create transaction with specific partner_reference
  • Verify first request creates new transaction (created: true)
  • Send second request with same partner_reference
  • Verify second request returns existing transaction (created: false)
  • Confirm both responses have same transaction_id

Test 2: Handle Concurrent Requests with Same Reference

  • Create transaction data with unique partner_reference
  • Send 3 simultaneous requests with same reference using Promise.all
  • Verify all 3 requests succeed (one creates, others return existing)
  • Confirm all 3 responses have identical transaction_id
  • Verify no duplicate transactions created in database

Webhook Deduplication Tests:

Test 3: Ignore Duplicate Webhook Events

  • Create webhook event with specific event_id
  • Mock event processing handler
  • Send first webhook delivery
  • Verify handler called once
  • Send second webhook delivery with same event_id
  • Verify handler still called only once (not twice)
  • Confirm duplicate logged appropriately
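
Test 1 could look roughly like this with Node's built-in test runner, exercising the createTransactionIdempotent helper sketched under "Handling the Error" (the module path is hypothetical) against a fake client that throws the duplicate error on the second call.

typescript
import test from "node:test";
import assert from "node:assert/strict";
import { createTransactionIdempotent } from "./idempotent-create"; // hypothetical module path

// Fake client: the first call creates a transaction, later calls throw the 3003 duplicate error.
function fakeFluidClient() {
  let created = false;
  return {
    async initiateDebit(_body: object) {
      if (created) {
        throw Object.assign(new Error("duplicate"), {
          status: 409,
          code: 3003,
          details: { existing_transaction_id: "txn_1" },
        });
      }
      created = true;
      return { id: "txn_1", status: "pending" };
    },
    async getTransaction(id: string) {
      return { id, status: "pending" };
    },
  };
}

test("returns existing transaction on duplicate reference", async () => {
  const fluid = fakeFluidClient();
  const body = { partner_reference: "ORDER-2024-001", amount: 5000 };

  const first = await createTransactionIdempotent(fluid, body);
  const second = await createTransactionIdempotent(fluid, body);

  assert.equal(first.created, true);
  assert.equal(second.created, false);
  assert.equal(first.transaction.id, second.transaction.id); // same transaction both times
});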

Integration Tests

Test idempotency with real API calls in sandbox environment:

Test Setup:

  • Use sandbox API endpoint
  • Generate unique partner_reference with timestamp
  • Include valid authentication token
  • Use test phone number from sandbox

Test Flow:

  1. First Request:

    • Send debit initiation with unique partner_reference
    • Verify response status is 200 OK
    • Extract and store transaction_id from response
  2. Second Request (Duplicate):

    • Send identical request with same partner_reference
    • Verify response status is 409 Conflict
    • Verify error code is 3003 (Duplicate Reference)
    • Verify error details include existing_transaction_id
    • Confirm existing_transaction_id matches first request's ID

Assertions:

  • First request succeeds with new transaction
  • Second request fails with duplicate error
  • Existing transaction ID returned in error matches original
  • No duplicate transaction created in system

Test Data:

  • Amount: 1000 (test amount in pesewas)
  • Currency: GHS
  • Phone: +233200000001 (sandbox test number)
  • Reference: TEST-IDEMPOTENCY-{timestamp}
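
A sandbox check of the two-request flow described above. The base URL, endpoint path, and the shape of the success response (a top-level transaction_id) are assumptions to replace with your sandbox details; the 409 error shape follows the example earlier in this guide.

typescript
import test from "node:test";
import assert from "node:assert/strict";

const SANDBOX_URL = process.env.FLUID_SANDBOX_URL ?? ""; // placeholder
const TOKEN = process.env.FLUID_SANDBOX_TOKEN ?? "";     // placeholder

test("duplicate partner_reference is rejected with code 3003", async () => {
  const body = {
    amount: 1000,                                          // test amount in pesewas
    currency: "GHS",
    phone_number: "+233200000001",                         // sandbox test number
    partner_reference: `TEST-IDEMPOTENCY-${Date.now()}`,
  };
  const send = () =>
    fetch(`${SANDBOX_URL}/debits`, {                       // hypothetical path
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${TOKEN}` },
      body: JSON.stringify(body),
    });

  const first = await send();
  assert.equal(first.status, 200);
  const created = await first.json();                      // assumed to include transaction_id

  const second = await send();
  assert.equal(second.status, 409);
  const { error } = await second.json();
  assert.equal(error.code, 3003);
  assert.equal(error.details.existing_transaction_id, created.transaction_id);
});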

Best Practices

✅ Do

  1. Use Meaningful References: Base on business entities (orders, invoices)
  2. Store References Early: Record partner_reference in your database before calling API
  3. Handle 409 Gracefully: Treat duplicate errors as success (fetch existing transaction)
  4. Deduplicate Webhooks: Use event_id to prevent duplicate processing
  5. Use Database Constraints: Enforce uniqueness at database level
  6. Test Concurrency: Verify behavior under race conditions
  7. Log Idempotency Events: Track when duplicates are detected

❌ Don't

  1. Don't Use Random Values: Not idempotent across retries
  2. Don't Ignore Duplicate Errors: Always handle error code 3003
  3. Don't Skip Signature Verification: Even for duplicate webhooks
  4. Don't Rely on Application Locks Only: Use database constraints
  5. Don't Process Webhooks Twice: Check event_id before processing
  6. Don't Use Sequential Counters: Race conditions in distributed systems
  7. Don't Expose Raw IDs: Hash or encrypt if necessary

Common Patterns

E-Commerce Order Payment

Pattern Overview: Generate deterministic reference from order ID to ensure idempotent order payments.

Implementation Steps:

  1. Generate Reference: Create reference from order ID: ORDER-{orderId}
  2. Store Reference Early: Update order with payment reference before API call
  3. Set Initial Status: Mark order as payment_status: 'initiating'
  4. Initiate Transaction: Call FLUID API with order details and partner reference
  5. Handle Success: Update order with transaction_id and status pending
  6. Handle Duplicate (3003):
    • Extract existing transaction ID from error details
    • Fetch existing transaction from API
    • Update order with correct transaction ID and status
    • Return transaction (treat as success)
  7. Handle Other Errors: Propagate error to caller

Key Benefits:

  • Order ID provides natural idempotency key
  • Reference stored before API call prevents loss
  • Duplicate handling ensures order consistency
  • Transaction ID linked to order for tracking

Critical Points:

  • Always store reference in database first
  • Handle duplicate error gracefully (not as failure)
  • Fetch latest transaction state on duplicate
  • Update order record with transaction details
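
A condensed sketch of this pattern. The orders data-access helper and the client shapes are assumptions; the flow follows the steps above.

typescript
// Assumed shapes; adapt to your data layer and HTTP client.
type Transaction = { id: string; status: string };
type FluidClient = {
  initiateDebit(body: object): Promise<Transaction>;
  getTransaction(id: string): Promise<Transaction>;
};
type FluidError = { code?: number; details?: { existing_transaction_id?: string } };
declare const orders: { update(id: string, fields: Record<string, unknown>): Promise<void> };

async function payOrder(fluid: FluidClient, order: { id: string; total: number; phone: string }) {
  const reference = `ORDER-${order.id}`;                   // step 1: deterministic reference

  // Steps 2-3: persist the reference and mark the order before calling the API.
  await orders.update(order.id, { payment_reference: reference, payment_status: "initiating" });

  try {
    // Step 4: initiate the transaction.
    const txn = await fluid.initiateDebit({
      amount: order.total,
      currency: "GHS",
      phone_number: order.phone,
      partner_reference: reference,
      description: `Payment for Order #${order.id}`,
    });
    // Step 5: link the transaction to the order.
    await orders.update(order.id, { transaction_id: txn.id, payment_status: "pending" });
    return txn;
  } catch (err) {
    const e = err as FluidError;
    // Step 6: duplicate reference means a previous attempt succeeded; recover and reconcile.
    if (e.code === 3003 && e.details?.existing_transaction_id) {
      const txn = await fluid.getTransaction(e.details.existing_transaction_id);
      await orders.update(order.id, { transaction_id: txn.id, payment_status: txn.status });
      return txn;
    }
    throw err;                                             // step 7: everything else propagates
  }
}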

Subscription Billing

Pattern Overview: Generate period-specific reference to prevent duplicate billing for same subscription period.

Implementation Steps:

  1. Generate Period Reference: Create reference: SUB-{subscriptionId}-{billingPeriod}
    • Example: SUB-123-2024-01 for January 2024 billing
  2. Check Existing Charge: Query database for charge with same subscription + period
  3. If Already Charged: Return existing charge record (skip API call)
  4. Create Charge Record: Insert charge record with reference as idempotency guard
  5. Set Initial Status: Mark charge as initiating
  6. Initiate Transaction: Call FLUID API with subscription details
  7. Handle Success: Update charge with transaction_id and status pending
  8. Handle Duplicate (3003):
    • Extract existing transaction ID from error
    • Update charge record with transaction ID
    • Return charge record
  9. Handle Other Errors:
    • Delete charge record (allow retry)
    • Propagate error to caller

Key Benefits:

  • Prevents double-billing for same period
  • Subscription ID + period creates unique reference
  • Database check prevents unnecessary API calls
  • Charge record acts as idempotency guard

Critical Points:

  • Always check for existing charge first
  • Create charge record before API call
  • Include billing period in reference
  • Delete charge record on non-duplicate errors
  • Use unique constraint on (subscription_id, billing_period)
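
A sketch of the subscription pattern, with a hypothetical charges data-access layer acting as the idempotency guard described above.

typescript
// Assumed shapes; adapt to your data layer and HTTP client.
type Charge = { id: string; transaction_id?: string; status: string };
type Transaction = { id: string; status: string };
type FluidClient = { initiateDebit(body: object): Promise<Transaction> };
type FluidError = { code?: number; details?: { existing_transaction_id?: string } };
declare const charges: {
  findByPeriod(subscriptionId: string, period: string): Promise<Charge | null>;
  create(fields: Record<string, unknown>): Promise<Charge>;
  update(id: string, fields: Record<string, unknown>): Promise<Charge>;
  remove(id: string): Promise<void>;
};

async function chargeSubscription(
  fluid: FluidClient,
  sub: { id: string; amount: number; phone: string },
  period: string // e.g. "2024-01"
): Promise<Charge> {
  const reference = `SUB-${sub.id}-${period}`;             // step 1: period-scoped reference

  // Steps 2-3: skip the API call entirely if this period was already charged.
  const existing = await charges.findByPeriod(sub.id, period);
  if (existing) return existing;

  // Steps 4-5: the charge record is the idempotency guard.
  const charge = await charges.create({
    subscription_id: sub.id, billing_period: period, reference, status: "initiating",
  });

  try {
    // Step 6: initiate the transaction.
    const txn = await fluid.initiateDebit({
      amount: sub.amount, currency: "GHS", phone_number: sub.phone, partner_reference: reference,
    });
    return charges.update(charge.id, { transaction_id: txn.id, status: "pending" }); // step 7
  } catch (err) {
    const e = err as FluidError;
    if (e.code === 3003 && e.details?.existing_transaction_id) {
      // Step 8: duplicate means a previous run initiated it; reconcile the charge record.
      return charges.update(charge.id, { transaction_id: e.details.existing_transaction_id, status: "pending" });
    }
    await charges.remove(charge.id);                       // step 9: allow a clean retry
    throw err;
  }
}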

Quick Reference

Idempotency Checklist

  • ✅ Use meaningful partner_reference values
  • ✅ Generate references deterministically
  • ✅ Store references in your database
  • ✅ Handle error code 3003 gracefully
  • ✅ Deduplicate webhooks using event_id
  • ✅ Use database constraints for uniqueness
  • ✅ Test concurrent requests
  • ✅ Log duplicate detections
  • ✅ Implement retry logic with idempotency
  • ✅ Verify behavior under network failures

Error Code Reference

| Code | Error               | Meaning                        | Action                     |
| ---- | ------------------- | ------------------------------ | -------------------------- |
| 3003 | Duplicate Reference | partner_reference already used | Fetch existing transaction |

Support

Questions about idempotency?