Capital One is one of the largest banks in the United States, known for its technology-forward approach. They were early adopters of cloud computing, running much of their infrastructure on Amazon Web Services. In many ways, they were a model for how traditional financial institutions could modernize.
Then, in July 2019, they announced that a hacker had stolen personal information from over 100 million customers and credit card applicants. Social Security numbers, bank account numbers, credit scores, transaction data—sensitive financial information that people trusted a bank to protect.
The breach cost Capital One over $270 million in settlements, fines, and remediation. But the technical details of how it happened are what make it truly instructive.
The Attacker
Paige Thompson was a former Amazon Web Services employee who had worked on the very infrastructure Capital One was using. She knew AWS systems intimately—how they were configured, where the vulnerabilities might be, what security controls could be bypassed.
Thompson didn't use any sophisticated malware or zero-day exploits. She used her knowledge of cloud infrastructure to find and exploit misconfigurations that Capital One's security team had overlooked.
And then she did something that would ultimately lead to her arrest: she bragged about it. On Slack channels and Twitter, Thompson discussed the hack openly. A security researcher noticed the posts and reported them to Capital One. The FBI arrested Thompson at her home in Seattle two days after Capital One received the tip.
The Technical Breakdown
The breach exploited a vulnerability called Server-Side Request Forgery (SSRF). Here's how it worked:
Capital One had a web application firewall (WAF) running on their AWS infrastructure. This firewall was supposed to protect their applications from attacks. But it was misconfigured in a way that allowed an attacker to make requests from the firewall to other AWS services.
AWS instances expose a metadata service at a special link-local IP address (169.254.169.254). This service provides information about the instance, including temporary security credentials for the IAM role attached to it. The service is only reachable from the instance itself, and under the original version (IMDSv1) requests need no authentication. Capital One's misconfigured firewall would relay outside requests to it, turning an external attacker into an "inside" caller.
Thompson exploited this by:
- Sending specially crafted requests to Capital One's WAF
- Using the WAF to query the AWS metadata service
- Obtaining temporary security credentials from the metadata service
- Using those credentials to access Capital One's S3 storage buckets
- Downloading over 700 folders containing customer data
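The steps above can be sketched in miniature. The metadata paths below follow real AWS IMDSv1 conventions, but the relay behavior, the `url` handling, and the role name `example-waf-role` are illustrative assumptions, not Capital One's actual configuration:

```python
# Hypothetical sketch of the SSRF pattern. The /latest/meta-data paths
# mirror real IMDSv1 endpoints; everything else is a stand-in.

METADATA_BASE = "http://169.254.169.254/latest/meta-data"
CREDS_PATH = "/iam/security-credentials/"

def naive_waf_relay(requested_url, fetch):
    """A misconfigured relay: fetches whatever URL the client supplies,
    with no allow-list. This is the core SSRF flaw."""
    return fetch(requested_url)

def mock_metadata_fetch(url):
    """Stand-in for the instance metadata service (IMDSv1: no auth)."""
    if url == METADATA_BASE + CREDS_PATH:
        # Listing the path returns the role name attached to the instance.
        return "example-waf-role"
    if url == METADATA_BASE + CREDS_PATH + "example-waf-role":
        # Requesting the role returns temporary credentials as JSON.
        return '{"AccessKeyId": "ASIA...", "SecretAccessKey": "...", "Token": "..."}'
    return "404"

# Step 1: use the relay to discover the IAM role name.
role = naive_waf_relay(METADATA_BASE + CREDS_PATH, mock_metadata_fetch)
# Step 2: use the relay again to pull that role's temporary credentials.
creds = naive_waf_relay(METADATA_BASE + CREDS_PATH + role, mock_metadata_fetch)
print(role)                    # example-waf-role
print("AccessKeyId" in creds)  # True
```

Two plain GET requests are enough: one to learn the role name, one to collect usable AWS credentials. That is why an unauthenticated metadata service behind an open relay is so dangerous.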
The entire attack chain relied on Capital One's infrastructure being configured in a way that made this possible. The WAF shouldn't have had access to the metadata service. The IAM roles shouldn't have had such broad permissions. The S3 buckets shouldn't have been accessible with those credentials.
What Was Stolen
Thompson downloaded approximately 30 gigabytes of compressed data. When expanded, this included:
- Names, addresses, zip codes, phone numbers, and email addresses for 100 million US and 6 million Canadian customers
- Self-reported income, credit scores, credit limits, and payment history
- 140,000 Social Security numbers
- 80,000 linked bank account numbers
- 1 million Canadian Social Insurance Numbers
- Transaction data from 23 days in 2016, 2017, and 2018
For a bank, this is about as bad as it gets. Financial data, identity information, and account details—exactly the information that enables identity theft and financial fraud.
The Cost
The financial impact was substantial:
- $80 million fine from the Office of the Comptroller of the Currency
- $190 million class-action settlement
- Ongoing costs for customer notification, credit monitoring, and security improvements
- Total costs exceeding $270 million
Beyond the direct costs, Capital One faced intense scrutiny of their cloud security practices. They had marketed themselves as a technology leader; the breach called that image into question.
Cloud Security Lessons
The Capital One breach became a defining case study for cloud security. Here are the key lessons:
Cloud Misconfiguration Is the Number One Risk
Studies consistently show that the vast majority of cloud security incidents stem from misconfigurations—not sophisticated attacks, not zero-days, just settings that were wrong. AWS, Azure, Google Cloud—they all provide the tools for secure configuration. The responsibility for using them correctly falls on the customer.
Capital One's WAF shouldn't have had the access it did. Their IAM roles were too permissive. Their S3 buckets were accessible from places they shouldn't have been. Each of these was a configuration decision.
The Metadata Service Is a Common Target
The AWS metadata service at 169.254.169.254 is a frequent target for cloud attacks. It provides temporary credentials that can be used to access other AWS resources. Protecting access to this service is critical.
AWS now offers IMDSv2, which requires a session token, obtained via a PUT request with a special header, before the metadata service will answer any GET. Because most SSRF relays can only issue simple GETs without custom headers, this makes SSRF attacks against the metadata service much harder. If you're running on AWS, require IMDSv2 on your instances.
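A minimal mock makes the difference concrete. The PUT-then-GET token flow and the `X-aws-ec2-metadata-token` header convention match the real IMDSv2 protocol; the in-memory service itself is an illustrative assumption:

```python
# Mock contrasting IMDSv1-style open access with IMDSv2's token requirement.
import secrets
from typing import Optional

class MockIMDS:
    """In-memory stand-in for the instance metadata service under IMDSv2 rules."""

    def __init__(self):
        self._tokens = set()

    def put_token(self) -> str:
        # Real IMDSv2: PUT /latest/api/token with an
        # X-aws-ec2-metadata-token-ttl-seconds header returns a session token.
        token = secrets.token_hex(16)
        self._tokens.add(token)
        return token

    def get(self, path: str, token: Optional[str] = None) -> int:
        # Every GET must carry a valid X-aws-ec2-metadata-token; otherwise 401.
        return 200 if token in self._tokens else 401

imds = MockIMDS()
# A typical SSRF relay can only issue GETs with no custom headers,
# so it never obtains a token and every request is rejected.
print(imds.get("/latest/meta-data/"))               # 401
# A legitimate client on the instance does the PUT-then-GET dance.
token = imds.put_token()
print(imds.get("/latest/meta-data/", token=token))  # 200
```

Against an IMDSv2-only instance, the attack chain described earlier stalls at step two: the relayed GET for credentials comes back empty-handed.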
Least Privilege Isn't Optional
The credentials Thompson obtained had access to over 700 S3 buckets. That's an enormous blast radius from a single compromised credential. IAM roles should have the minimum permissions necessary—nothing more.
Review your IAM policies regularly. Ask: does this role really need access to all these resources? Can we restrict it further? What's the damage potential if this credential is compromised?
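One practical way to run that review is to scan policies for wildcard grants. The policy structure below follows the real AWS policy JSON format, but the audit helper and the sample policies are illustrative assumptions:

```python
# Assumed-shape audit helper: flag Allow statements that grant
# wildcard actions or a wildcard resource.

def overly_broad_statements(policy: dict) -> list:
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # AWS allows either a single string or a list; normalize to lists.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}
scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-app-config/*"],  # one bucket, read-only
    }],
}
print(len(overly_broad_statements(too_broad)))  # 1
print(len(overly_broad_statements(scoped)))     # 0
```

The scoped policy confines a compromised credential to read-only access on one bucket, which is roughly the difference between a contained incident and 700 buckets of exposure.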
Web Application Firewalls Need Their Own Security
Ironically, the security tool (the WAF) became the attack vector. The firewall was protecting applications but wasn't properly locked down itself. Security tools are part of your attack surface. They need to be configured securely too.
Monitor for Unusual Activity
Thompson accessed 700+ folders of data. This level of data exfiltration should trigger alerts. Cloud providers offer tools like AWS CloudTrail and GuardDuty that can detect unusual access patterns. Use them.
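The kind of signal that should have fired is simple to sketch: count object reads per principal and alert on outliers. The event shape and threshold below are illustrative assumptions, standing in for the data that CloudTrail data events would feed into GuardDuty or a SIEM:

```python
# Sketch of a threshold alert on object-access volume per principal.
from collections import Counter

def flag_bulk_readers(events, threshold=100):
    """Return principals whose GetObject count exceeds the threshold."""
    reads = Counter(
        e["principal"] for e in events if e["action"] == "GetObject"
    )
    return {p for p, n in reads.items() if n > threshold}

# Normal traffic: a handful of reads per service. One credential pulling
# hundreds of objects stands out immediately.
events = [{"principal": "app-server", "action": "GetObject"}] * 20
events += [{"principal": "waf-role", "action": "GetObject"}] * 700
print(flag_bulk_readers(events))  # {'waf-role'}
```

Real detection logic would baseline per-role behavior rather than use a fixed threshold, but even this crude version catches a WAF role suddenly reading 700 objects.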
Applying This to Your Business
Even if you're not running a bank's worth of cloud infrastructure, the principles apply:
Audit your configurations. Whether it's AWS, Azure, a WordPress installation, or any other system—review the security settings. Are permissions tighter than they need to be? Are there access paths that shouldn't exist?
Follow least privilege. Every user, every application, every service account should have only the access it actually needs. When in doubt, restrict first and expand later if necessary.
Enable monitoring. You can't respond to what you can't see. Log access, watch for anomalies, set up alerts for suspicious activity.
Keep security tools secure. Your firewall, your monitoring systems, your admin panels—these are attack surfaces too. Protect them accordingly.
The Bottom Line
Capital One was considered a technology leader. They had dedicated security teams, substantial budgets, and cutting-edge infrastructure. And they still got breached because of configuration mistakes.
The cloud doesn't make you less secure, but it doesn't automatically make you more secure either. It gives you powerful tools—and the responsibility to use them correctly. Misconfigurations are the new unpatched vulnerabilities, and they're just as dangerous.
Security in the cloud is a shared responsibility. The provider secures the cloud; you secure what you put in it.



