Franchise Banking Operations Platform

A multi-tenant SaaS platform serving franchise banking operations — built independently on a serverless AWS architecture, processing $18M+ in transactions with sub-second incident detection and automated alerting that cut average response time by a full day.

Java · Spring Boot · Astro · React · TypeScript · AWS Lambda · API Gateway · S3 · RDS · PostgreSQL

The franchise banking model has a particular operational complexity: each franchise location operates with some autonomy, but the parent institution needs consolidated visibility, compliance reporting, and incident management across all of them. Most off-the-shelf tools either serve the individual location or the holding company — not both, coherently.

I was brought in to design and build a platform from scratch. The brief was concrete: replace a manual monitoring workflow with automated detection, consolidate reporting that had previously required manual spreadsheet assembly, and do both in a way that scales as new franchise locations are added.

The architecture decision: serverless-first

The operational pattern of franchise banking creates a specific load profile: irregular, event-driven activity rather than sustained throughput. Transactions cluster around business hours; compliance reports run on regulatory schedules; incident alerts need to fire immediately when triggered but may be idle for hours between triggers.

This is a good fit for a serverless architecture. Each bounded function — transaction processing, anomaly detection, report generation, notification dispatch — runs as an independent Lambda, invoked by events and scaled to zero when idle. The operational cost of that idle time is zero; there is no fleet of EC2 instances to right-size.

// Transaction processing Lambda — stateless, idempotent
public class TransactionHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // transactionStore, processor, and eventBus are injected collaborators;
    // deserialize and response are small helpers, omitted for brevity.

    @Override
    public APIGatewayProxyResponseEvent handleRequest(
        APIGatewayProxyRequestEvent input,
        Context context
    ) {
        TransactionRequest request = deserialize(input.getBody());

        // Idempotency key prevents double-processing on retry
        if (transactionStore.exists(request.getIdempotencyKey())) {
            return response(200, "Already processed");
        }

        ProcessingResult result = processor.process(request);
        transactionStore.record(result);
        eventBus.publish(new TransactionProcessedEvent(result));

        return response(200, result);
    }
}

API Gateway sits in front of the Lambda layer and handles routing, authentication (JWT verification against the platform’s identity provider), and rate limiting per tenant. S3 stores documents, compliance artefacts, and generated reports. RDS (PostgreSQL) holds the relational data: transaction records, account structures, incident history, audit logs.
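Because the gateway verifies the JWT before a request ever reaches a Lambda, downstream code only needs to read the tenant claim out of the already-verified token. A minimal sketch of that claim extraction, assuming a `tenant_id` claim (the claim name, class, and naive string-based lookup are illustrative, not the platform's actual contract):

```java
import java.util.Base64;

public class TenantClaim {
    // Extract the tenant id from an already-verified JWT. Signature checking
    // happens at API Gateway, so only the payload segment needs decoding here.
    public static String tenantIdFrom(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
        // Naive claim lookup — a real implementation would use a JSON parser.
        String key = "\"tenant_id\":\"";
        int start = payload.indexOf(key);
        if (start < 0) throw new IllegalArgumentException("no tenant_id claim");
        start += key.length();
        return payload.substring(start, payload.indexOf('"', start));
    }

    public static void main(String[] args) {
        String payload = Base64.getUrlEncoder().withoutPadding()
            .encodeToString("{\"sub\":\"user-1\",\"tenant_id\":\"franchise-042\"}".getBytes());
        System.out.println(tenantIdFrom("eyJhbGciOiJSUzI1NiJ9." + payload + ".sig"));
        // prints franchise-042
    }
}
```

Keeping verification at the gateway means the Lambdas stay stateless and never handle key material.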

Multi-tenancy

Every franchise location is a tenant with its own data partition, its own user base, and its own set of notification endpoints. The platform enforces tenant isolation at the data layer — a query that could leak data across tenant boundaries is a critical bug, not a performance issue.

The tenant model also drives the consolidation view available to the parent institution: authorised users can see aggregate metrics, compliance status, and incident history across all locations in a single view, while location-level users see only their own data.
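The two access paths can be sketched with an in-memory stand-in for the data layer. In the real platform the same rule is a mandatory `tenant_id` predicate in SQL; the names and record shapes here are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TenantScopedStore {
    record Txn(String tenantId, String id, long amountCents) {}

    final List<Txn> rows;
    TenantScopedStore(List<Txn> rows) { this.rows = rows; }

    // Location-level view: every read is scoped to one tenant
    // before any other filter is applied.
    List<Txn> findForTenant(String tenantId) {
        return rows.stream().filter(t -> t.tenantId().equals(tenantId)).toList();
    }

    // Parent-institution view: aggregates across tenants,
    // never raw cross-tenant rows.
    Map<String, Long> totalsByTenant() {
        return rows.stream().collect(
            Collectors.groupingBy(Txn::tenantId, Collectors.summingLong(Txn::amountCents)));
    }
}
```

The point of the split is that the consolidated view exposes only aggregates; there is no code path that returns another tenant's row-level data.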

The incident detection layer

The feature that cut response time by a day is the anomaly detection pipeline. Each transaction passes through a set of configurable rules — thresholds, velocity checks, pattern matching against historical baselines — as it’s processed. Rules that fire create an incident record and immediately invoke a notification Lambda that dispatches to the configured channels: email, SMS, webhook.
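In outline, the rule pass is a predicate sweep over each transaction, and anything that fires becomes an incident. A sketch under assumed names (`Rule`, `Txn`, and both thresholds are illustrative, not the platform's configured values):

```java
import java.util.List;
import java.util.function.Predicate;

public class RulePipeline {
    record Txn(String id, long amountCents, int countLastHour) {}
    record Rule(String name, Predicate<Txn> fires) {}

    // Each firing rule yields an incident record; in the platform, each of
    // these would also invoke the notification Lambda.
    static List<String> evaluate(Txn txn, List<Rule> rules) {
        return rules.stream()
            .filter(r -> r.fires().test(txn))
            .map(r -> "incident:" + r.name() + ":" + txn.id())
            .toList();
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("threshold", t -> t.amountCents() > 1_000_000), // > $10,000
            new Rule("velocity",  t -> t.countLastHour() > 20));     // burst of activity
        System.out.println(evaluate(new Txn("t-1", 1_500_000, 3), rules));
        // prints [incident:threshold:t-1]
    }
}
```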

Before the platform, detecting a problematic transaction pattern required someone to notice it in a report the following morning. With real-time detection, the alert fires within seconds of the triggering event.

The rules are configurable per tenant, which matters because a “suspicious” transaction for a high-volume urban branch looks different from the same transaction at a low-volume rural location. The platform stores a rolling baseline per tenant and adjusts thresholds accordingly.
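One way to maintain that per-tenant rolling baseline is an exponentially weighted mean and variance with a sigma-based cut-off. The smoothing factor, warm-up count, and 3-sigma threshold below are illustrative assumptions, not the platform's tuned values:

```java
public class RollingBaseline {
    private double mean, var;
    private int n;
    private static final double ALPHA = 0.05; // smoothing factor (assumed)
    private static final int WARMUP = 10;     // observations before flagging (assumed)

    // Returns true when the amount sits more than 3 standard deviations from
    // the tenant's rolling mean, then folds the observation into the baseline.
    boolean isAnomalous(double amount) {
        boolean anomalous = false;
        if (n >= WARMUP) {
            double sd = Math.sqrt(var);
            anomalous = sd > 0 && Math.abs(amount - mean) > 3 * sd;
        }
        if (n == 0) {
            mean = amount;
        } else {
            double d = amount - mean;
            mean += ALPHA * d;
            var = (1 - ALPHA) * (var + ALPHA * d * d);
        }
        n++;
        return anomalous;
    }
}
```

Because the baseline is per tenant, the same dollar amount can be routine at a high-volume urban branch and three sigma out at a rural one, which is exactly the behaviour the configurable rules need.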

The frontend

The operational interface is built on Astro with React component islands for the interactive surfaces (dashboards, incident management, transaction search). Astro’s partial hydration model was a deliberate choice: most of the pages in the platform are read-heavy — reports, dashboards, audit logs — and don’t need a full SPA runtime. Interactive components load their JavaScript; static views ship as HTML.

This kept First Contentful Paint under 800ms for the most-visited pages, which matters for an application that franchise staff use constantly throughout their working day.

Scale and compliance

The platform has processed $18M+ in transactions across the franchise network since launch. Audit logs are append-only, cryptographically signed, and retained per the applicable regulatory schedule. The compliance reporting surface generates the required regulatory reports directly from the transaction ledger rather than from aggregated summaries — meaning the numbers in the reports can always be traced to the underlying records.
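The append-only, signed property can be sketched as a hash chain: each entry's HMAC covers the previous entry's signature, so editing any historical record invalidates everything after it. The entry format and key handling here are simplified assumptions:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

public class AuditLog {
    private final byte[] key;
    final List<String> entries = new ArrayList<>();
    private String lastSig = "genesis";

    AuditLog(byte[] key) { this.key = key; }

    // Append-only: each signature chains to the previous one.
    void append(String event) {
        String sig = hmac(lastSig + "|" + event);
        entries.add(event + "|" + sig);
        lastSig = sig;
    }

    // Recompute the chain from the start; any in-place edit breaks it.
    boolean verify() {
        String prev = "genesis";
        for (String entry : entries) {
            int cut = entry.lastIndexOf('|');
            String event = entry.substring(0, cut);
            String sig = entry.substring(cut + 1);
            if (!hmac(prev + "|" + event).equals(sig)) return false;
            prev = sig;
        }
        return true;
    }

    private String hmac(String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return HexFormat.of().formatHex(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) { throw new IllegalStateException(e); }
    }
}
```

The chaining is what makes the trail hold up to examination: a verifier only needs the key and the entries, not trust in whoever operates the store.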

Building financial infrastructure independently and from scratch carries a different kind of accountability from building internal tooling. The correctness of the transaction ledger is not negotiable. The audit trail has to hold up to examination. Every deployment is preceded by a full regression suite against a production-mirrored environment.