Adtech
StationOps designed and delivered a multi-region AWS infrastructure platform for Auth.inc’s adtech stack — ad server, web application, real-time analytics, and event streaming — from initial requirements to production in 12 weeks.
Engagement: 12 weeks, infrastructure delivery
Team: Cloud architect, platform engineer, DevOps lead
Focus areas: Multi-region infrastructure, IaC, data pipelines, CI/CD
- 3 AWS regions serving ad traffic via geolocation routing
- 6 core services delivered production-ready on AWS
- 100% of infrastructure codified as CloudFormation IaC
- 12 weeks from requirements to full production deployment
Situation
Auth.inc is building a programmatic advertising platform that connects publishers with demand-side buyers through real-time auction infrastructure. As the publisher network grew, the team needed production-grade AWS infrastructure across multiple geographies — but lacked the platform engineering capacity to design, build, and operate it in-house.
| Component | Stack | Specification |
|---|---|---|
| Ad Server | Node.js / Fastify | ≥ 2 × t3.xlarge per region, low-latency geo-routing, ALB with HTTPS, redundant instances. |
| Web App | Next.js + Express | Managed PostgreSQL with daily backups and 14-day retention, ALB with HTTPS termination. |
| Analytics | Apache Pinot + Zookeeper | Full cluster redundancy, restorable state, zero data loss tolerance. |
| Platform | — | Multi-region (Europe & Latin America), monorepo CI/CD, 100% Infrastructure as Code. |
What we did
01 Designed the cloud architecture
Mapped each workload to the right AWS service: ECS (EC2 where SSH access was needed, Fargate where it wasn’t) for the ad server and web app, EKS with StatefulSets and EBS volumes for Pinot and Zookeeper across three AZs, Aurora PostgreSQL with 14-day backup retention, and ALBs with NAT gateways securing the network layer. As business needs evolved, the architecture expanded from one region to three — eu-west-1 (Ireland), eu-west-2 (London), and sa-east-1 (São Paulo) — with Route 53 geolocation routing via a single hostname (platform.auth.inc).
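The geolocation routing above maps each continent to its regional ALB through Route 53 record sets on the shared hostname. A minimal sketch of one such record, as it might be applied with `aws route53 change-resource-record-sets` (the hosted-zone ID and ALB DNS name are placeholder assumptions, not the delivered configuration):

```json
{
  "Comment": "Illustrative geolocation record for platform.auth.inc",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "platform.auth.inc",
      "Type": "A",
      "SetIdentifier": "sa-east-1-latam",
      "GeoLocation": { "ContinentCode": "SA" },
      "AliasTarget": {
        "HostedZoneId": "ZEXAMPLE123",
        "DNSName": "adserver-alb.sa-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
```

Analogous records with `ContinentCode: EU` (and a default record with no geolocation match) would steer European and unmatched traffic to the other regions.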
02 Built the multi-region platform
Deployed AdServer (EC2) and Prebid (Fargate) clusters in London and São Paulo behind ALBs in private subnets. In Ireland: WebApp UI, WebApp API, Admin UI, and Admin API as separate ECS services on EC2, backed by Aurora PostgreSQL. Kafka on AWS MSK with public access for cross-region connectivity, IAM auth for AdServer and SASL for Pinot. Auction and event data flows through Kafka into Pinot on EKS (StatefulSet, EBS storage, S3 deep storage, daily Zookeeper snapshots via Lambda), exposed through an internal ALB.
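The daily Zookeeper snapshots mentioned above were driven by a scheduled Lambda. A simplified Python sketch of that pattern; the bucket name, key prefix, and local snapshot path are illustrative assumptions, not the delivered function:

```python
# Illustrative sketch of a daily Zookeeper snapshot Lambda.
# Bucket, prefix, and file paths are assumptions for this example.
import datetime


def snapshot_key(prefix: str, now: datetime.datetime) -> str:
    """Build a date-stamped S3 key, e.g. zk-snapshots/2024-05-01.tar.gz."""
    return f"{prefix}/{now:%Y-%m-%d}.tar.gz"


def handler(event, context):
    # In the real function, the latest Zookeeper snapshot is exported and
    # uploaded to S3; boto3 is available in the Lambda runtime.
    import boto3

    s3 = boto3.client("s3")
    key = snapshot_key("zk-snapshots", datetime.datetime.utcnow())
    s3.upload_file("/tmp/zookeeper-snapshot.tar.gz", "example-zk-backups", key)
    return {"uploaded": key}
```

An EventBridge schedule rule (e.g. `rate(1 day)`) would invoke the handler once per day.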
03 Codified everything as IaC with CD
Every resource — VPCs, subnets, security groups, ECS clusters, ALBs, NAT gateways, Aurora, MSK — codified as CloudFormation templates via the StationOps platform (exceptions: Pinot via Helm, Route 53 geolocation records). For CD, built an Azure Pipelines workflow: developers merge to pipeline_to_ecr, parallel builds push images to ECR, and AWS deployments roll out automatically across all three regions.
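The merge-to-`pipeline_to_ecr` flow described here could look roughly like the following Azure Pipelines fragment; the service path, registry variable, and region are assumptions, not the delivered pipeline:

```yaml
# Illustrative Azure Pipelines sketch: merge to pipeline_to_ecr -> build -> push to ECR.
trigger:
  branches:
    include:
      - pipeline_to_ecr

jobs:
  - job: build_and_push
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: |
          aws ecr get-login-password --region eu-west-1 \
            | docker login --username AWS --password-stdin "$ECR_REGISTRY"
          docker build -t "$ECR_REGISTRY/adserver:$(Build.SourceVersion)" services/adserver
          docker push "$ECR_REGISTRY/adserver:$(Build.SourceVersion)"
        displayName: Build and push AdServer image to ECR
```

In the monorepo, one such job per service runs in parallel, and downstream stages roll the pushed images out across the three regions.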
Auth.inc — delivery summary
The engagement covered architecture through CD: ECS (EC2 and Fargate), EKS for Pinot/Zookeeper, Aurora PostgreSQL, MSK Kafka, Route 53 geolocation, CloudFormation across three regions, and Azure Pipelines to ECR with automated multi-region rollout.
The sections below cover the deliverables, technology stack, week-by-week timeline, ROI comparison, outcomes, and customer quote.
Deliverables
- Multi-region ad-serving platform — AdServer (EC2) and Prebid (Fargate) on ECS across London and São Paulo with Route 53 geolocation routing.
- WebApp & Admin stack — Next.js frontend, Express API, and Admin UI/API on ECS in Ireland with Aurora PostgreSQL (daily backups, 14-day retention).
- Real-time analytics cluster — Apache Pinot on EKS with StatefulSets, EBS volumes, S3 deep storage, and Zookeeper, exposed via internal ALB.
- Event streaming pipeline — Kafka on AWS MSK in Ireland with IAM and SASL authentication and one-day topic retention.
- Infrastructure as Code — CloudFormation templates covering VPCs across three regions, subnets, ALBs, NAT gateways, and all compute and data services.
- CD pipeline — Azure Pipelines to ECR to AWS automated deployment across the monorepo.
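The analytics deliverable above pairs Pinot StatefulSets with EBS-backed persistent volumes. A minimal Kubernetes sketch of that pattern; replica count, storage size, and the Zookeeper address are illustrative assumptions:

```yaml
# Illustrative StatefulSet for Pinot servers with per-pod EBS volumes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pinot-server
spec:
  serviceName: pinot-server
  replicas: 3
  selector:
    matchLabels:
      app: pinot-server
  template:
    metadata:
      labels:
        app: pinot-server
    spec:
      containers:
        - name: server
          image: apachepinot/pinot:latest
          args: ["StartServer", "-zkAddress", "zookeeper:2181"]
          volumeMounts:
            - name: data
              mountPath: /var/pinot/server/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3
        resources:
          requests:
            storage: 100Gi
```

Each replica gets its own volume claim, so pods keep their segment data across rescheduling, with S3 deep storage as the durable copy.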
Technologies & architecture (at a glance)
AWS ECS on EC2 and Fargate, EKS with StatefulSets and EBS, Aurora PostgreSQL, Application Load Balancers, NAT gateways, AWS MSK (Kafka), Route 53 geolocation routing, S3 deep storage for Pinot, AWS Lambda for Zookeeper snapshots, IAM and SASL auth patterns, CloudFormation (plus Helm for Pinot), Azure Pipelines for CI/CD to Amazon ECR and automated AWS deployments.
Timeline
Twelve weeks, requirements to production across three AWS regions. Scope expanded mid-engagement — a second ad-serving region, Prebid clusters, and Kafka — without extending the timeline.
Weeks 1–3 Requirements, architecture & networking
Gathered detailed application requirements from the Auth.inc team, produced the initial infrastructure proposal and architecture diagram, established VPC design (public and private subnets, NAT gateways, jump hosts) across three regions, and provisioned the foundational networking layer.
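The networking pattern established in these weeks, public and private subnets fronted by NAT gateways, can be sketched in CloudFormation roughly as below; CIDR ranges and logical names are assumptions, not the delivered templates:

```yaml
# Illustrative CloudFormation fragment: VPC with public/private subnets and a NAT gateway.
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.1.0/24
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      SubnetId: !Ref PublicSubnet
      AllocationId: !GetAtt NatEip.AllocationId
```

A private route table pointing 0.0.0.0/0 at the NAT gateway (omitted here for brevity) gives private workloads outbound access without public exposure.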
Weeks 4–6 Core platform build
Deployed ECS clusters for AdServer and WebApp services, stood up Aurora PostgreSQL with backup policies, provisioned EKS for the Pinot and Zookeeper StatefulSets, and configured Application Load Balancers and security groups across all regions.
Weeks 7–9 Data layer & multi-region expansion
Deployed Kafka on MSK with cross-region connectivity, configured IAM and SASL authentication, stood up Prebid (Fargate) clusters in London and São Paulo, wired Route 53 geolocation routing, and integrated the auction data pipeline from AdServer through Kafka into Pinot.
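For the IAM-authenticated MSK connection, a Kafka client is typically configured with the aws-msk-iam-auth library settings shown below; the bootstrap address is a placeholder (port 9198 is MSK's public IAM-auth listener):

```properties
# Illustrative Kafka client properties for MSK with IAM auth.
bootstrap.servers=b-1.example-msk.xxxxxx.kafka.eu-west-1.amazonaws.com:9198
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

Credentials then come from the instance or task role rather than stored secrets; the Pinot side used SASL/SCRAM credentials instead, as noted above.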
Weeks 10–12 IaC codification, CD pipeline & handoff
Codified the full estate as CloudFormation templates, built the Azure Pipelines CD pipeline for zero-touch deployments, ran end-to-end testing across all regions, performed security hardening, and handed off operational documentation to the Auth.inc team.
Impact & ROI
Auth.inc went from application requirements and zero cloud infrastructure to a fully operational, multi-region AWS platform in twelve weeks. Building this internally would have meant hiring or reassigning senior platform engineers for the better part of a year — and delaying the product launch that depended on it.
The comparison below shows the typical internal path versus the StationOps engagement.
| Dimension | Typical internal, manual build | With StationOps engagement |
|---|---|---|
| Timeline | 6–9 months to design, build, and harden a multi-region platform with IaC and CD. | 12 weeks — requirements to production across 3 AWS regions, scope expanded mid-flight. |
| Engineering effort | 5–10 person-months (≈ 800–1,600 hours) of senior/platform engineers pulled off product work. | Auth.inc’s engineering team stayed on product — zero roadmap disruption. |
| Fully-loaded cost | Roughly €80k–€200k in engineering time, before opportunity cost of delayed product launch. | Engagement delivered a production platform faster and cheaper than the internal alternative. |
Figures shown are typical ranges for comparable work and will vary by baseline maturity, constraints, and team size.
- Multi-region ad serving live — publisher traffic routed to the nearest cluster via geolocation; Europe and Latin America served from dedicated infrastructure.
- Zero-touch deployments — developers ship to production across all three regions by merging a single branch — no manual steps, no region-by-region coordination.
- Fully reproducible infrastructure — every resource codified in CloudFormation: replicate in a new region, audit through version control, onboard engineers without tribal knowledge.
- Real-time analytics operational — auction and event data flows from AdServers through Kafka into Pinot, queryable by the WebApp API in near real time.
“StationOps took our requirements and delivered a production-ready multi-region platform we couldn’t have built this fast ourselves. The infrastructure just works — and we can manage it going forward.”
Need infrastructure that scales?
We design, build, and codify production-grade AWS platforms so your team can focus on product — not infrastructure plumbing.
Related case studies
Assiduous
How StationOps delivered a six-account Control Tower Landing Zone, SLO-based operations, and ongoing managed AWS for an AI-enabled corporate finance platform — in weeks instead of months.
DigiPro
How StationOps helped DigiPro cut incidents, speed up safe releases, and reclaim engineering time — with SLOs, observability, CI/CD guardrails, and cost visibility in twelve weeks.
Flexiwage
How StationOps improved payroll pipeline availability, automated compliance evidence, cut MTTR and cloud spend, and doubled safe deploy frequency for Flexiwage in fourteen weeks.
SimpleCGT
How SimpleCGT reached 99.9% uptime through filing season, cut P1/P2 incidents and infra cost, and embedded observability, SLOs, and governance in four weeks.