
Internal and External API Isolation with Dual Load Balancers

smplkit product microservices serve two audiences from the same application process: customer SDKs calling public API endpoints, and internal smplkit services calling private coordination endpoints. The challenge is keeping internal endpoints unreachable from the internet without requiring the application to make security decisions about network boundaries.

The Problem

Each product service (Smpl Config, Smpl Flags, etc.) exposes two categories of API:

External APIs (/api/v1/*) — called by customer applications and SDKs. Authenticated via API keys. Serve the product’s core runtime functionality.

Internal APIs (/internal/v1/*) — called by other smplkit services within the VPC. Handle provisioning, introspection, and cross-service coordination. Must never be reachable from the public internet.

Both sets of endpoints are served by the same FastAPI application running in ECS. The question is how to enforce the boundary.

Application-Level Enforcement Is Fragile

The obvious approach is middleware that checks the request origin. If the request came from the internet, block /internal/* paths. If it came from inside the VPC, allow everything.

This is fragile because it depends on every endpoint being correctly decorated. A missing middleware, a misconfigured route, or a decorator that gets accidentally removed during a refactor silently exposes internal endpoints to the internet. The security boundary depends on the application being bug-free — which is not a realistic assumption.
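To make the fragility concrete, here is a minimal sketch of what such an origin check might look like. The VPC CIDR and the helper name are placeholders, not smplkit's actual code — the point is that every route's safety hinges on this function being called, and called with a trustworthy client address:

```python
from ipaddress import ip_address, ip_network

# Hypothetical VPC CIDR; in practice this would come from configuration.
VPC_CIDR = ip_network("10.0.0.0/16")

def is_request_allowed(path: str, client_ip: str) -> bool:
    """Application-level check: block /internal/* paths unless the
    caller's address is inside the VPC. Allow everything else."""
    if path.startswith("/internal/"):
        return ip_address(client_ip) in VPC_CIDR
    return True

# Failure modes: a route that forgets to call this, a refactor that
# drops the middleware, or code that trusts a spoofable header such as
# X-Forwarded-For instead of the connection source — each one silently
# exposes internal endpoints.
```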

Dual Load Balancers

Instead, each product service is fronted by two Application Load Balancers routing to the same ECS tasks:

Public ALB — internet-facing, attached to public subnets. Listener rules forward only /api/v1/*. All other paths get a 404. This is what customer SDKs reach via the product’s public domain (e.g., config.smplkit.com).

Internal ALB — not internet-facing, attached to private subnets. No public DNS. Forwards all paths including /internal/v1/*. Accessible only from within the VPC via Route 53 private hosted zones (e.g., config.internal.smplkit.local).

Both ALBs route to the same ECS tasks. (Each ALB needs its own target group, since AWS allows a target group to be attached to only one load balancer, but an ECS service can register with multiple target groups, so a single service backs both.) The application runs once and serves all routes. The ALBs control which routes are reachable from which network.
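The listener behavior can be simulated in a few lines. This is a sketch, not ALB code: rules are modeled as (path pattern, action) pairs, and `fnmatchcase` approximates ALB's path-pattern matching, where `*` matches zero or more characters including slashes. Any path that matches no rule falls through to the default action, a fixed 404 response:

```python
from fnmatch import fnmatchcase

# Listener rules as (path pattern, action) pairs, mirroring the two
# ALB configurations described above.
PUBLIC_ALB_RULES = [("/api/v1/*", "forward")]
INTERNAL_ALB_RULES = [("/*", "forward")]

def route(rules: list[tuple[str, str]], path: str) -> str:
    """Return the action an ALB would take for a request path:
    the first matching rule's action, else the fixed 404 default."""
    for pattern, action in rules:
        if fnmatchcase(path, pattern):
            return action
    return "fixed-response: 404"
```

The public ALB forwards `/api/v1/config` but answers `/internal/v1/provision` with the 404 default; the internal ALB forwards both.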

Why This Works

The public ALB cannot forward a request to /internal/v1/*. There is no application code to bypass — the listener rules simply don't match those paths. An attacker who discovers an internal endpoint path gets a 404 from the ALB before the request ever reaches the application.

The internal ALB has no public IP address. It’s not internet-facing. DNS resolution for config.internal.smplkit.local only works inside the VPC. From outside the VPC, the ALB doesn’t exist.

This means a bug in the application — a missing auth check, a misconfigured route — cannot expose internal endpoints to the internet. The network topology prevents it.

No Compute Duplication

A common alternative is running separate ECS services for internal and external traffic. This provides the strongest isolation — separate processes, separate scaling, separate deployment — but doubles the compute cost and operational complexity for every product service.

The dual-ALB approach gets most of the isolation benefit with zero compute duplication. Both ALBs route to the same ECS tasks. One ECS service, one deployment, two entry points with different network visibility.

Service Discovery via Private Hosted Zones

Internal services find each other using Route 53 private hosted zones. The app service calls Smpl Config's internal API at config.internal.smplkit.local, which resolves to the internal ALB's private IP addresses — but only from within the VPC.

This gives internal services stable, human-readable DNS names without a service registry, hardcoded IPs, or environment variables pointing to ALB DNS names. Adding a new product service means adding a private hosted zone record, and every other service can reach it immediately.
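For illustration, here is a sketch of the Route 53 change that registers a service's internal name. It builds the ChangeBatch payload that would be passed to boto3's `route53.change_resource_record_sets`; the hosted-zone IDs and ALB DNS name in the usage below are placeholders, not smplkit's real values:

```python
def internal_alias_change(service: str, alb_dns: str, alb_zone_id: str) -> dict:
    """Build a Route 53 ChangeBatch that upserts an alias A record
    in the private hosted zone, pointing at the internal ALB."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{service}.internal.smplkit.local",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,   # the ALB's own hosted zone ID
                    "DNSName": alb_dns,            # internal ALB DNS name
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    }
```

Adding a new product service is one such upsert; every other service in the VPC can resolve the name immediately.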

Developer Access

Internal ALBs aren’t reachable from the internet, but developers still need access for troubleshooting. We solve this with SSM Session Manager port forwarding through an existing bastion instance. A developer can forward a local port to the internal ALB and access FastAPI’s Swagger UI at /docs — without the internal ALB ever being exposed to the internet.
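A sketch of that port-forwarding invocation, built here as an argv list for `subprocess`. The bastion instance ID and ports are placeholders; the SSM document name is the standard AWS-managed one for remote-host port forwarding:

```python
def ssm_port_forward_args(bastion_instance_id: str, internal_host: str,
                          remote_port: int = 443,
                          local_port: int = 8443) -> list[str]:
    """Build the `aws ssm start-session` command that forwards a local
    port through the bastion to an internal host (e.g. an internal ALB)."""
    params = (f'{{"host":["{internal_host}"],'
              f'"portNumber":["{remote_port}"],'
              f'"localPortNumber":["{local_port}"]}}')
    return [
        "aws", "ssm", "start-session",
        "--target", bastion_instance_id,
        "--document-name", "AWS-StartPortForwardingSessionToRemoteHost",
        "--parameters", params,
    ]
```

With the session running, https://localhost:8443/docs reaches the internal ALB's Swagger UI without the ALB ever being internet-facing.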

The Pattern

Two ALBs per service. Public for customers, internal for coordination. Network topology enforces the boundary. The application doesn’t need to know which ALB a request arrived through.

This is the kind of infrastructure decision that’s invisible when it’s working. You notice it only when it prevents the class of bugs it was designed to prevent — which, ideally, is never.

When to Use This Pattern

The dual-ALB approach makes sense when your service serves both external and internal traffic and you want network-level isolation without running separate deployments. If your internal traffic is negligible (a single health check from a scheduler, for example), a simpler approach — like an IP-restricted security group rule — might suffice.

As the number of product services grows, the ALB count grows too (two per service). AWS ALB pricing is modest — roughly $16/month base cost per ALB plus traffic charges — but it’s a cost that scales linearly with the number of services. For smplkit’s current service count, this is well within budget. If we ever reach dozens of services, we’d evaluate alternatives like a shared internal ALB with path-based routing to multiple target groups.

For now, the simplicity of one-to-one service-to-ALB mapping is worth the marginal cost. Each service’s infrastructure is self-contained and independently manageable.
