Lee Wynne
AWS at Scale: Why Enterprise AWS Is a Completely Different Discipline

I’m starting this series because there’s a gap. A big one. There’s plenty of content out there on how to use AWS: tutorials on spinning up an EC2 instance, deploying a Lambda function, wiring up an API Gateway. That content is valuable, and it helps people get started. But it stops well short of the thing I’ve spent almost my entire career doing: designing and building AWS at scale inside large corporate enterprises, where the problems aren’t technical in isolation. They’re organisational, political, architectural and deeply human.
This is AWS at Scale. It’s a place to share strategies, patterns and hard-won lessons on how to build your AWS career, how to make an impact in a large enterprise, and how to design cloud platforms that actually work when hundreds of people are building on them simultaneously.
A Bit About Me
I’m Lee. I’ve spent my career working as an architect designing at scale in some of the world’s largest corporations. I’m currently at Informa, where we operate at the heart of the knowledge and information economy, connecting people through intelligence, academic publishing, knowledge and events.
I help shape the technical architecture for a £4bn portfolio that includes The AI Summit, Black Hat, London Tech Week, Aviation Week, and Game Developers Conference (plus many other Informa brands), with 20+ years of experience delivering large-scale enterprise programmes.
If you want to learn more about what I do, you can find me on LinkedIn.
What Does AWS at Scale Actually Mean?
This is the question that matters, and most people get it wrong because they think scale is about the number of EC2 instances you’re running or how many requests per second your API handles. It’s not. Or rather, it’s not just that.
For me, AWS at Scale means consistent governance, compliance, FinOps practices and a common developer and DevOps experience, all delivered through automation and vending. It means the ability to provide all of this as a service to your project teams, your workload teams, your builders, through an architectural engagement and ITSM process that covers all the things your community of builders may not have considered because it’s not part of their domain of responsibility.
Your consumers may not see the bigger picture at play. They just need to focus on building. And that’s exactly the point. A well-designed platform at scale means that developers and DevOps teams can start building within minutes of receiving their environment, with a consistent experience from product to product.
The hard part isn’t the technology. The hard part is designing the organisational model, the engagement process, the vending pipelines, the governance automation, and the cultural shift that makes all of this work without creating bottlenecks or shadow IT. That’s what this series is about.
What I’ll Be Writing About
This is the first post of many, and the topics I’ll be covering are the things I’ve either built, broken, rebuilt, or argued about in boardrooms over the years. Here’s what’s coming.
The Provider to Consumer Model. This is the foundational pattern. A shared responsibility framework that gives the platform team clear ownership of foundational infrastructure while giving product teams a fast, governed path to build and deploy. I’ll cover reusable Infrastructure as Code modules, reusable CI/CD pipelines, and reusable design patterns and reference architectures.
Core Foundations, Landing Zone and Control Plane. An SDLC mindset applied at every layer. Development landing zones, consumer landing zones, account-level segregation across dev, test and production, sandbox environments, and the network architecture that ties it all together. East-west inspection VPCs, centralised egress inspection, and why you need to rip out VPC peering before it becomes a liability.
Mandatory Tagging from Sources of Truth. Instead of asking consumers to tag their resources, the Provider delivers mandatory tags as part of the account vending pipeline. I’ll cover how to flow mandatory tags down to resources, how to provide recommended resource-level tagging guidance, and how to tag storage resources with the appropriate data classification.
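To give a flavour of what that vending-time tag flow looks like, here’s a minimal sketch. The tag keys, the CMDB-style source of truth, and the override rule are all illustrative assumptions, not a prescribed standard; the full pattern gets its own post.

```python
# Sketch: a vending pipeline merges provider-owned mandatory tags
# (captured from a source of truth at account creation) with the
# tags a workload team supplies. Tag keys here are examples only.

MANDATORY_TAG_KEYS = {"CostCentre", "Owner", "Environment", "DataClassification"}

def merge_tags(mandatory: dict, team_supplied: dict) -> dict:
    """Combine mandatory tags with team-supplied tags; teams cannot override."""
    overlap = MANDATORY_TAG_KEYS & team_supplied.keys()
    if overlap:
        raise ValueError(f"Teams may not override mandatory tags: {sorted(overlap)}")
    # Mandatory tags win by being applied last.
    return {**team_supplied, **mandatory}

# Values captured during the engagement process (hypothetical).
mandatory = {
    "CostCentre": "CC-1042",
    "Owner": "platform-team",
    "Environment": "dev",
    "DataClassification": "internal",
}
tags = merge_tags(mandatory, {"Service": "checkout-api"})
```

The point of the pattern is that the consumer never types a cost centre or a data classification by hand; the pipeline stamps it on everything they vend.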
Privileged Access Models. Request and approval for time-bound AWS console and CLI access. Building roles for break-fix, view-only, read-only, Session Manager and Secrets Manager access. Multi-stage approval MFA-based processes for root IAM account access, and removing local IAM accounts entirely.
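As a taste of the time-bound piece, here’s a hedged sketch of a role trust policy generated at approval time, using the real IAM condition keys aws:CurrentTime and aws:MultiFactorAuthPresent. The principal ARN and window are placeholders; in practice the approval workflow would generate and attach this via your IaC pipeline.

```python
# Sketch: build a trust policy that only allows role assumption until an
# approved expiry, and only with MFA. Generated per access request.
import json
from datetime import datetime, timedelta, timezone

def time_bound_trust_policy(principal_arn: str, hours: int = 4) -> str:
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": principal_arn},
            "Action": "sts:AssumeRole",
            "Condition": {
                # Assumption is refused once the approved window lapses.
                "DateLessThan": {"aws:CurrentTime": expires.strftime("%Y-%m-%dT%H:%M:%SZ")},
                "Bool": {"aws:MultiFactorAuthPresent": "true"},
            },
        }],
    }
    return json.dumps(policy)
```

The existing sessions still need revoking at expiry (session duration limits and revocation policies handle that), but this closes the front door automatically.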
FinOps at Scale. Discovering and setting estimated AWS account-level budgets during the engagement process. Automated spend dashboards and trend identification. Visualising and approving budget changes when commits are made to the infrastructure as code pipeline. Automated enrolment into AWS Private Marketplace with a request and approval process. Ensuring all non-prod compute runs on spot and all non-prod storage is tiered appropriately.
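The budget-change approval flow can be sketched as a simple gate in the pipeline. The 10% threshold and the three outcomes are illustrative assumptions; the real version compares the cost estimate of a plan against the budget agreed during engagement.

```python
# Sketch: a FinOps gate run on each infrastructure-as-code commit,
# comparing estimated monthly cost against the account-level budget
# set during the engagement process. Threshold values are examples.

def budget_gate(estimated_monthly_cost: float, account_budget: float,
                approval_threshold: float = 0.10) -> str:
    """Return the pipeline decision for a proposed infrastructure change."""
    if estimated_monthly_cost <= account_budget:
        return "auto-approve"
    overshoot = (estimated_monthly_cost - account_budget) / account_budget
    if overshoot <= approval_threshold:
        return "needs-approval"   # small overshoot: route to the budget owner
    return "blocked"              # large overshoot: rework or re-baseline
```

The value isn’t the arithmetic; it’s that budget conversations happen at commit time, in the pipeline, instead of in a month-end spreadsheet.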
Reference VPC Architecture. Industrialised for Provider management at scale, designed for automated vending, standardisation and reusability, and fit for the future of microservices and serverless.
The Architectural Engagement Process. Data gathering for functional and non-functional requirements. Outputs including early stakeholder comms, schematics and enablement guides. Why it’s important to be opinionated, decisive and confident. And how to reduce snowflakes by building cattle, not pets.
AI on AWS at Scale. Every enterprise is racing to adopt AI, and most are doing it without the platform thinking that made their core workloads manageable. I’ll cover how to build AI applications on AWS through the same Provider to Consumer lens, including Bedrock and SageMaker governance, model access guardrails via Private Marketplace, SCPs and IAM, prompt logging and observability, cost controls for inference spend that can spiral without warning, and how to vend AI-ready account patterns that give data science teams what they need without bypassing the security, compliance and architectural standards you’ve spent years building.
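One of those guardrails can be previewed here: an SCP that denies Bedrock model invocation for anything outside an approved list. The actions are real IAM actions; the ARNs and the deny-by-NotResource shape are an illustrative sketch, with the full pattern to come in its own post.

```python
# Sketch: an SCP fragment denying invocation of any Bedrock model that
# isn't on the governance-approved list. Model ARNs are placeholders;
# the approved list would come from your Private Marketplace process.
import json

def bedrock_guardrail_scp(approved_model_arns: list) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnapprovedBedrockModels",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Deny everything NOT on the approved list.
            "NotResource": approved_model_arns,
        }],
    }
    return json.dumps(policy, indent=2)
```

Attached at the organisational unit that holds your AI-ready accounts, this gives data science teams self-service access to approved models while keeping unvetted ones out of reach.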
Career and Culture. Making the right career choices. Tips on getting hired as an AWS professional in a large corporate enterprise. Interview techniques, what the hiring manager is actually looking for, and how to stop perceiving AWS as a data centre.
Why This Matters
There’s a reason enterprises struggle with cloud at scale and it’s not because the technology is hard. It’s because the organisational design is hard. The governance model is hard. Getting hundreds of developers to work consistently without slowing them down is hard. And most of the content out there doesn’t touch this because most of the people writing it haven’t lived it.
I have. For years. Across multiple large enterprises. And the patterns I’ll share in this series aren’t theoretical. They’re battle-tested in environments where getting it wrong means production incidents at 2am, compliance findings that land on the CTO’s desk, and FinOps reports that make the CFO ask uncomfortable questions.
If you’re already on your AWS journey, this series is about levelling up. If you’re an architect, a platform engineer, a DevOps lead, or anyone who’s been asked to “make AWS work” for an organisation that’s outgrown its initial cloud adoption, then this is for you.
AWS is an incredible community to be part of. The skills are beyond portable: every business is consuming AWS either directly or indirectly. The career opportunities are real, the salary is good (and in some cases very good), and there’s a never-ending supply of new things to learn.
The juice is worth the squeeze. Let’s get into it.