The Compliance Misconception That Slows Teams Down
The most expensive mistake regulated-industry teams make: treating compliance as a constraint on engineering rather than a design requirement. This leads to building first and auditing later, discovering gaps at the worst possible time — during a customer security review, an audit, or an incident.
The engineers who move fast in regulated environments have internalized a different model: compliance controls are engineering requirements, and you build them in from the start, the same way you build in observability or reliability.
What the Major Frameworks Actually Require
This is where compliance theater begins: teams imagine compliance requirements are more specific than they are.
SOC2 doesn’t tell you what technology to use. It’s about demonstrating that you have controls around security, availability, processing integrity, confidentiality, and privacy. “We use AWS with appropriate IAM policies, encryption at rest and in transit, and we have a process for reviewing access” is SOC2-compliant. The framework requires evidence of controls, not specific implementations.
HIPAA requires technical safeguards (access controls, audit controls, integrity controls, transmission security) but deliberately avoids mandating specific technologies, because the law was designed to be durable across technological change. “We don’t allow PHI in logs” is a HIPAA requirement. “You must use this specific logging solution” is not.
GDPR is about data rights and minimization. The requirements that matter most from an engineering standpoint: data subjects have rights of access, erasure, and portability; you must have legal basis for processing; data should be minimized (collect only what you need, retain only as long as necessary); and breaches must be reported within 72 hours.
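Storage limitation is the GDPR requirement teams most often leave unenforced. A minimal sketch of what "retain only as long as necessary" looks like in code — the categories and durations below are hypothetical, since GDPR mandates that a documented, enforced policy exists rather than any specific schedule:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention schedule -- GDPR requires a documented,
# enforced policy, not these particular categories or durations.
RETENTION_PERIODS = {
    "access_logs": timedelta(days=90),
    "support_tickets": timedelta(days=365),
    "marketing_contacts": timedelta(days=730),
}

def is_past_retention(category: str, created_at: datetime,
                      now: Optional[datetime] = None) -> bool:
    """True once a record has outlived its documented retention period
    and becomes a candidate for deletion."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > RETENTION_PERIODS[category]
```

A scheduled job that calls this check and deletes (or anonymizes) expired records turns "we have a retention policy" from a document into an operating control.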
Understanding that these frameworks require outcomes, not implementations, gives you engineering flexibility while maintaining compliance.
The Compliance Theater Trap
Compliance theater: doing things that look compliant but don’t reduce actual risk.
Examples:
- A penetration test report that gets filed and forgotten rather than driving remediation
- A password rotation policy that requires employees to change passwords every 90 days (a practice NIST now advises against, because forced rotation encourages weak, predictable patterns)
- An extensive security questionnaire process for vendors where answers aren’t actually verified
- Encrypting data at rest but logging decrypted values in application logs
The antidote is to ask of every control: "What risk does this reduce, and how would we detect if it failed?" If you can't answer both questions, the control is theater.
Building Compliance Into Engineering Processes
Privacy by design: data minimization decisions should happen in design review, not at audit time. When you’re designing a new feature, the question “what data does this require and how long do we retain it?” should be standard. Add it to your design doc template.
Security in CI/CD: static analysis (Semgrep, Snyk) for common security issues, dependency vulnerability scanning, secrets detection (git-secrets, TruffleHog) — these catch issues before they reach production. The compliance value is twofold: you fix problems earlier and you have evidence of controls operating.
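To make the idea concrete, here is a toy version of what a secrets scanner does. Real tools like TruffleHog and git-secrets ship hundreds of detectors plus entropy analysis and credential verification; this sketch uses a few illustrative patterns only:

```python
import re

# A few high-signal patterns of the kind real scanners ship with.
# Illustrative only -- production tools are far more thorough.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)(?:api_key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Running a check like this as a pre-commit hook or CI step is what produces the twofold value described above: the issue never lands, and the CI log is itself evidence that the control operates.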
Automated evidence collection: Drata, Vanta, and Secureframe connect to your cloud providers and services, continuously collect evidence (access reviews, encryption status, configuration checks), and produce audit-ready reports. The manual evidence collection process that used to take weeks of engineering time for an audit is largely automated.
Audit trails by default: for anything touching sensitive data, structured logging of who accessed what and when, with logs shipped to immutable storage. This is an engineering habit, not a feature.
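A minimal sketch of that habit — the field names here are illustrative, not a standard, and in production the emitted records would be shipped to write-once storage (for example, object storage with a retention lock):

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")

def audit_event(actor: str, action: str, resource: str, **context) -> dict:
    """Emit one structured audit record: who did what to what, and when.

    Field names are illustrative. The record should be shipped to
    immutable storage so it can serve as audit evidence.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # who
        "action": action,        # did what
        "resource": resource,    # to what
        **context,               # request id, justification, etc.
    }
    audit_logger.info(json.dumps(record, sort_keys=True))
    return record
```

Called at every sensitive-data access point — `audit_event("alice@example.com", "export", "patient/123", request_id="r-1")` — this gives you the "who accessed what and when" trail by default rather than as a retrofit.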
What Moving Fast Actually Looks Like in Regulated Environments
Stripe handles payment card data under PCI-DSS and moves fast. Plaid is regulated as a financial data aggregator and iterates quickly. Oscar Health operates under HIPAA and ships continuously.
The pattern: they’ve invested in the infrastructure of compliance — automated controls, continuous monitoring, clear data handling policies — so that individual engineers don’t have to navigate compliance from scratch on every feature.
Fast-moving teams in regulated environments typically have:
- A clear data classification policy (what’s sensitive, how it’s handled)
- Paved paths for common compliance requirements (libraries for encrypting specific data types, standard patterns for audit logging)
- A compliance team that’s a partner in design review, not an auditor at the end
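A data classification policy only pays off when it is enforced in code. One hedged sketch of a paved path, with a hypothetical field registry — a real policy would live alongside the schema and be enforced in serializers, loggers, and export paths:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    SENSITIVE = "sensitive"   # PII/PHI: encrypt, audit-log, never log raw

# Hypothetical registry mapping fields to classification levels.
FIELD_CLASSIFICATION = {
    "user_id": Classification.INTERNAL,
    "email": Classification.SENSITIVE,
    "diagnosis_code": Classification.SENSITIVE,
    "plan_name": Classification.PUBLIC,
}

def redact_for_logging(record: dict) -> dict:
    """Return a copy safe to log: sensitive fields are replaced and
    unclassified fields are dropped (fail closed rather than leak)."""
    safe = {}
    for key, value in record.items():
        level = FIELD_CLASSIFICATION.get(key)
        if level is None:
            continue  # unclassified: exclude rather than risk leaking it
        safe[key] = "[REDACTED]" if level is Classification.SENSITIVE else value
    return safe
```

The fail-closed choice is the point: an engineer who adds a new field without classifying it gets a missing field in their logs, not a compliance incident.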
Common Engineering Shortcuts That Create Compliance Risk
- Logging PII in error messages: `logger.error(f"Failed to process user {user_email}")` — this is the most common GDPR/HIPAA violation in codebases
- Overly broad data retention: keeping everything “in case we need it” — GDPR specifically requires retention policies; “we don’t know” is not a policy
- Missing audit trails: especially for administrative actions, data exports, and anything touching PHI
- Hardcoded credentials and API keys: trivial to prevent (secrets management), frequently not done
- Test data populated with real PII: production data used in dev/test environments is a common compliance gap
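The first shortcut on the list — PII in log messages — can be partially backstopped at the logging layer. A sketch using the standard library's `logging.Filter`; the class name and regexes are illustrative, and pattern-based scrubbing is a safety net, not a substitute for keeping PII out of log calls in the first place:

```python
import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class PIIScrubFilter(logging.Filter):
    """Rewrite log records so obvious PII never reaches a handler.

    A backstop only: the durable fix is not emitting PII at all.
    """
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()   # render args into the message first
        message = EMAIL_RE.sub("[EMAIL]", message)
        message = SSN_RE.sub("[SSN]", message)
        record.msg, record.args = message, None
        return True
```

Attached once to the root logger (`logging.getLogger().addFilter(PIIScrubFilter())`), this catches the `logger.error(f"Failed to process user {user_email}")` class of mistake across the whole codebase.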
The Organizational Reality
The compliance team that’s adversarial to engineering creates the worst outcomes: engineers route around them, controls get bolted on at audit time, and nothing improves.
The compliance team that’s a technical partner — embedded in design reviews, maintaining clear policies, building shared tooling — enables the fast-moving regulated team. Invest in that relationship early. The compliance team wants to help you build a defensible product; help them help you.