Platform Teams Are Owning Security in 2026—But Should Developers Still Learn AppSec?

I’ve been watching something fascinating unfold at our fintech startup over the past year: our platform engineering team has quietly taken ownership of almost everything security-related. Hardened container templates? Platform team. API gateway policies? Platform team. Identity and access controls? Also platform team.

And honestly? It’s been amazing for velocity. Our product teams can spin up new services without thinking about TLS configuration, secret management, or network policies. It’s all baked into the golden path.

But here’s the question that keeps me up at night: If security is increasingly baked into the platform layer, do individual developers still need deep application security expertise?

What We’re Seeing in 2026

The data is pretty clear on where the industry is heading. Platform engineering teams are becoming the central owners of security capabilities, with security delivered as a platform-level service rather than distributed across individual teams.

The playbook looks like this:

  • Secure-by-default infrastructure templates that auto-update with patches
  • Pre-configured guardrails that enforce least-privilege access patterns
  • Golden paths where the secure option is the easy option
  • Automated policy enforcement at the CI/CD and runtime layers

The traditional DevSecOps model tried to shift security left by giving every developer more security tools and responsibility. But let’s be honest—that approach produced wildly uneven results. You can’t ask developers to master identity management and compliance controls AND still ship features quickly.

The Product Manager’s Dilemma

From a business perspective, I love what centralized platform security enables:

The upside:

  • Faster time-to-market (teams don’t reinvent security wheels)
  • Consistent security posture across all services
  • Economies of scale for security expertise
  • Easier compliance and audit trails

But the risks I worry about:

  • Are we creating a generation of developers who don’t understand the security implications of their code?
  • What happens when platform guardrails don’t cover an edge case?
  • Do we lose the “defense in depth” that comes from security-aware developers?
  • Are we trading short-term velocity for long-term security debt?

The Ownership Question

Here’s the framework I’m wrestling with:

If platform teams own:

  • Infrastructure hardening
  • Network security
  • Identity and access management
  • Secret management
  • Compliance hooks

Then what should developers own?

  • Input validation and data sanitization?
  • Business logic authorization?
  • Secure coding practices?
  • Understanding the platform’s security model?
  • Nothing beyond writing features?

What I Want to Know

For the engineering leaders and platform teams here:

  1. Where do you draw the ownership line? What security responsibilities stay with developers vs move to platform teams?

  2. How do you prevent developer security skills from atrophying? If the platform handles everything, do developers lose critical security intuition?

  3. What happens at the interface? Security incidents often occur where application logic meets platform services. Who owns that boundary?

  4. Is “security literacy” enough? Should developers understand security fundamentals even if they don’t implement controls directly?

I’m genuinely curious whether this platform-centric security model is the future we should all be building toward, or if we’re inadvertently creating a dangerous skills gap.

What’s your experience been?

David, this is a great question and I see this tension daily as we scale our engineering org. But I’d challenge the either/or framing here.

It’s both/and, not either/or.

Here’s how I think about the ownership model after leading cloud migrations and platform buildouts at two companies:

Platform Teams Own the Infrastructure Layer

Our platform team absolutely owns:

  • Infrastructure security posture (hardened images, network policies, TLS)
  • Identity and access management frameworks
  • Secrets management infrastructure
  • Compliance audit hooks and logging
  • Security guardrails in CI/CD pipelines

This makes total sense. Platform teams have deep expertise in these domains, they can enforce consistency across all services, and they benefit from economies of scale.

But Developers Must Still Own Application Security

What platform teams cannot own:

  • Business logic authorization (who can do what in your domain model)
  • Input validation specific to your application’s data model
  • SQL injection prevention in custom queries
  • Cross-site scripting in rendered content
  • Race conditions in concurrent workflows
  • Sensitive data handling specific to your business rules

Real example from our cloud migration: Our platform team secured all the infrastructure—perfect network segmentation, least-privilege IAM roles, encrypted everything. Bulletproof.

But we still had a critical security incident. A developer wrote an API endpoint that checked if a user was authenticated, but didn’t verify if they were authorized to access that specific resource. Classic IDOR vulnerability.

The platform couldn’t prevent that. The framework didn’t know our business rules. Only the application developer understood that authorization context.
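A minimal sketch of that gap (handler, store, and names are all hypothetical, not from any real codebase): the broken version checks who you are but never whether this particular resource is yours.

```python
# Illustrative IDOR pattern: authentication checked, authorization on
# the specific resource forgotten. Data and names are invented.

INVOICES = {
    "inv-1": {"owner": "alice", "amount": 120},
    "inv-2": {"owner": "bob", "amount": 75},
}

def get_invoice_broken(current_user, invoice_id):
    # Authenticated? Yes. Authorized for THIS invoice? Never checked.
    if current_user is None:
        raise PermissionError("not authenticated")
    return INVOICES[invoice_id]  # any logged-in user can read any invoice

def get_invoice_fixed(current_user, invoice_id):
    if current_user is None:
        raise PermissionError("not authenticated")
    invoice = INVOICES.get(invoice_id)
    # Resource-level authorization: only the owner may read it.
    if invoice is None or invoice["owner"] != current_user:
        raise PermissionError("not authorized")
    return invoice
```

No gateway or platform control can write that ownership check, because the owner relationship lives in the application's domain model.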

The Skills Developers Actually Need

The question isn’t whether developers need security expertise—it’s what kind of security expertise.

Developers don’t need to be experts in:

  • Certificate management
  • Network security protocols
  • Cloud IAM policy syntax
  • Compliance frameworks

But they absolutely need to understand:

  • OWASP Top 10 vulnerabilities and how they manifest in code
  • Authentication vs authorization (seriously, this matters)
  • Principle of least privilege in application logic
  • Secure handling of sensitive data
  • How their code interacts with the platform’s security model

The Most Dangerous Zone

The interface between application and platform is where incidents happen. When developers don’t understand what the platform is doing for them, they make dangerous assumptions:

  • “The API gateway handles auth, so I don’t need to check permissions” (wrong)
  • “The platform encrypts data at rest, so I can log credit cards” (wrong)
  • “The WAF blocks SQL injection, so I don’t need parameterized queries” (wrong)

Platform security is a foundation, not a replacement for secure application development.
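The WAF assumption is the easiest of the three to demonstrate. A small sketch using Python’s bundled sqlite3, with a toy schema and the classic payload (both invented for the example):

```python
import sqlite3

# Toy in-memory table, assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String interpolation: crafted input rewrites the query, WAF or no WAF.
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data and can never become SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The same `' OR '1'='1` payload returns every row through the interpolated query and nothing through the parameterized one—the control that matters is in the application code, not in front of it.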

To answer your specific questions:

  1. Ownership line: Platform owns infrastructure/horizontal concerns. Developers own application logic and business-specific security.

  2. Preventing atrophy: We require annual OWASP training and run internal CTFs. Security reviews are part of our promotion criteria.

  3. Interface ownership: Shared responsibility with clear documentation of what the platform provides vs what apps must implement.

  4. Security literacy: Not enough on its own. Developers need practical skills to write secure code, not just awareness.

What’s your take? Are you seeing specific security gaps emerge as your platform team takes on more responsibility?

Michelle nailed it with the both/and perspective. I’ll add the practical reality from managing 40+ engineers in financial services.

Platform teams scale security expertise, but they can’t scale context.

The Financial Services Reality Check

In our world, regulatory compliance isn’t optional. We deal with PCI-DSS, SOC 2, GDPR, and a dozen other acronyms. Here’s what I’ve learned:

Our platform team provides incredible value:

  • Hardened deployment pipelines that enforce security controls
  • Centralized audit logging that satisfies compliance requirements
  • Secret rotation mechanisms that prevent credential leakage
  • Network policies that segment environments properly

But when auditors ask “How do you ensure customer financial data is handled securely in your application logic?”—that question lands on developers, not the platform team.

The Training Approach That Works

We don’t expect developers to become security experts. But we do require security fundamentals:

Core skills every developer needs:

  1. Understanding the OWASP Top 10 (not memorizing, but recognizing in code reviews)
  2. Writing authorization checks in application code
  3. Proper input validation and sanitization
  4. Secure session management
  5. Understanding what the platform provides vs what they must implement

How we build these skills:

  • Quarterly security workshops focused on real vulnerabilities from our codebase
  • Required code review checklist that includes security considerations
  • Threat modeling sessions when designing sensitive features
  • Pair programming with security champions for high-risk changes

The goal isn’t deep security expertise—it’s security literacy with practical application.

Where Developers and Platform Must Collaborate

Here’s where the ownership model gets nuanced. Some security concerns require both teams:

Example: Handling PII (Personally Identifiable Information)

  • Platform provides: Encryption at rest, encrypted transit, audit logging infrastructure
  • Developers must: Identify what data is PII, minimize collection, implement proper retention policies, ensure authorized access only

Example: API Security

  • Platform provides: Rate limiting, DDoS protection, TLS termination, API gateway authentication
  • Developers must: Implement resource-level authorization, validate request payloads, prevent enumeration attacks, handle sensitive data in responses
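The developer side of that API split can be sketched in a few lines (a hypothetical handler; the ID scheme and store are invented for illustration). Note the anti-enumeration choice: “doesn’t exist” and “not yours” look identical to the caller.

```python
# Sketch of developer-owned API duties: validate the payload before
# touching storage, check resource-level access, and avoid leaking
# which IDs exist. All names and data are illustrative.

DOCUMENTS = {"doc-1": {"owner": "alice", "body": "q3 forecast"}}

class NotFound(Exception):
    """Maps to an HTTP 404 in a real service."""

def get_document(current_user, doc_id):
    # Payload validation: reject malformed IDs up front.
    if not isinstance(doc_id, str) or not doc_id.startswith("doc-"):
        raise ValueError("invalid document id")
    doc = DOCUMENTS.get(doc_id)
    # Anti-enumeration: missing and forbidden documents are
    # indistinguishable, so callers can't probe for valid IDs.
    if doc is None or doc["owner"] != current_user:
        raise NotFound(doc_id)
    return doc["body"]
```

The gateway’s rate limiting slows enumeration down; only this kind of application logic makes it uninformative.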

The HashiCorp research on dev-security team gaps shows that miscommunication between these teams is a leading cause of vulnerabilities.

The Dangerous Assumption

The biggest risk I see: developers assuming the platform makes their code secure by default.

I’ve seen brilliant engineers write code that passes all tests and deploys cleanly through hardened pipelines—but contains authorization bugs because they assumed “if the platform let it deploy, it must be secure.”

The platform can prevent infrastructure vulnerabilities. It cannot prevent logic vulnerabilities.

What We Actually Measure

To prevent security skills from atrophying, we track:

  • Security issues found in code review (we want to see this)
  • Vulnerabilities discovered post-deployment (we want to minimize this)
  • Security training completion and knowledge retention
  • Developer participation in security design reviews

The key insight: We treat security skills like any other engineering skill—you maintain them through practice, mentoring, and continuous learning.

Answering David’s Questions

  1. Ownership line: Infrastructure/platform concerns → platform team. Application logic and business rules → developers. But document the boundaries clearly.

  2. Preventing atrophy: Integrate security into regular engineering practice. Don’t make it a separate thing. Code reviews, design reviews, incident retrospectives—security should be a dimension of all these.

  3. Interface ownership: This requires collaboration and clear contracts. We maintain a “Security Responsibilities Matrix” that maps each layer to its owner.

  4. Security literacy vs expertise: Literacy is the floor, not the ceiling. Developers need enough expertise to write secure code in their domain. Platform experts need enough application context to provide the right guardrails.

The future I want to see: Platform teams make secure development easier, not make security knowledge unnecessary.

How are you all handling security training and skill development at your orgs?

Coming from the design systems world, this conversation feels very familiar!

The platform vs developer security debate reminds me so much of the design systems question: if we provide components with accessibility baked in, do designers still need to understand accessibility principles?

Spoiler: yes, they absolutely do.

The Design Systems Parallel

Here’s what happened at my startup (before we imploded—lessons learned the hard way):

We built a beautiful design system with accessible components:

  • Form inputs with proper ARIA labels
  • Color palettes with WCAG-compliant contrast ratios
  • Keyboard navigation built into every interactive element
  • Screen reader support out of the box

We told product designers: “Use these components and your designs will be accessible!”

And then we shipped a user flow that was completely inaccessible.

Why? Because accessibility isn’t just about individual components—it’s about how you compose them, the information architecture, the content hierarchy, the cognitive load of the entire experience.

The design system provided accessible building blocks. But designers still needed to understand accessibility principles to use them correctly.

My Startup’s Painful Security Lesson

We made the same mistake with security. Our platform team set up “secure-by-default” infrastructure:

  • Input sanitization middleware
  • CSRF protection
  • Rate limiting
  • Encrypted data stores

We thought we were bulletproof because we were “using the secure platform.”

Then we had a critical bug where users could access other users’ data. Not through SQL injection or XSS—those were prevented by the platform.

It was a business logic flaw. Our recommendation engine exposed data based on similarity matching without checking if the requester had permission to see that data.

The platform couldn’t prevent that. We needed to understand authorization at the application level.
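That class of flaw fits in a few lines. A toy recommender (hypothetical profiles and visibility rules) that ranks by interest overlap—the broken version forgets that ranking is not authorization:

```python
# Illustrative data: one private profile that should never be recommended
# to strangers, however similar it is.
PROFILES = {
    "alice": {"interests": {"fintech", "rust"}, "visibility": "private"},
    "bob":   {"interests": {"fintech", "go"},   "visibility": "public"},
    "carol": {"interests": {"design"},          "visibility": "public"},
}

def similar_profiles_broken(user):
    mine = PROFILES[user]["interests"]
    # Similarity matching with no permission check: private profiles leak.
    return sorted(
        (name for name in PROFILES if name != user),
        key=lambda n: -len(mine & PROFILES[n]["interests"]),
    )

def can_view(viewer, name):
    return PROFILES[name]["visibility"] == "public" or viewer == name

def similar_profiles_fixed(user):
    mine = PROFILES[user]["interests"]
    # Authorization is applied to every candidate before ranking.
    candidates = [n for n in PROFILES if n != user and can_view(user, n)]
    return sorted(candidates, key=lambda n: -len(mine & PROFILES[n]["interests"]))
```

The platform’s encryption and sanitization never see this bug: every byte in and out is well-formed. Only code that knows the visibility rule can enforce it.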

Security Literacy vs Security Expertise

Michelle and Luis are both right about developers needing security skills. But I want to push back on what “security literacy” means.

Literacy should be more than passive awareness.

I think of it like this:

Security Awareness (not enough):

  • “I know SQL injection exists”
  • “I heard about OWASP Top 10”
  • “The platform probably handles this”

Security Literacy (minimum bar):

  • “I can recognize SQL injection patterns in code review”
  • “I know which OWASP risks apply to my code and which the platform prevents”
  • “I can explain what the platform provides vs what I need to implement”

Security Expertise (ideal for senior devs):

  • “I can threat model a new feature”
  • “I can design authorization systems that handle complex business rules”
  • “I can evaluate security trade-offs and communicate risk to stakeholders”

Making Guardrails Educational

Here’s my question for platform teams: Can your security guardrails teach developers while protecting them?

What if instead of silently preventing insecure code, your platform actively educated developers?

Examples I’ve seen work well:

  • Pre-commit hooks that explain why a pattern is insecure, not just “blocked”
  • Security linters with links to secure alternatives
  • PR comments from automated tools that teach best practices
  • Dashboards showing which platform security features your service is using
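As a sketch of the first idea, here is a toy pre-commit check that pairs each finding with a short lesson and a pointer to the secure alternative, instead of a bare “blocked” (the pattern, message, and CLI shape are all illustrative, not a production linter):

```python
import re

# Flags f-string SQL, a common injection-prone pattern. The regex is a
# deliberately simple illustration, not a complete detector.
FSTRING_SQL = re.compile(r'f["\'].*\b(SELECT|INSERT|UPDATE|DELETE)\b', re.IGNORECASE)

LESSON = (
    "f-string SQL lets user input rewrite the query (SQL injection). "
    "Use a parameterized query instead: cursor.execute(sql, params)."
)

def check_file(lines):
    """Return (line_number, lesson) findings for one file's lines."""
    return [(i, LESSON) for i, line in enumerate(lines, 1) if FSTRING_SQL.search(line)]

def run(paths):
    """Hook entry point: print educational findings, return an exit code."""
    exit_code = 0
    for path in paths:
        with open(path) as f:
            for line_no, why in check_file(f.readlines()):
                # The finding teaches, it doesn't just gate.
                print(f"{path}:{line_no}: blocked. Why: {why}")
                exit_code = 1
    return exit_code
```

The gate behaves the same either way; the difference is that every rejection leaves the developer knowing what to do next time.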

Make invisible magic visible and educational.

If developers don’t understand what the platform is doing for them, they can’t:

  • Know when they’ve stepped outside the guardrails
  • Make informed decisions about security trade-offs
  • Debug security issues when they occur
  • Grow their security skills over time

The Skills Gap I Worry About

David, you asked if we’re “creating a generation of developers who don’t understand security implications of their code.”

I think the answer is yes, if we’re not intentional about it.

But it doesn’t have to be that way. Platform teams can be force multipliers for security education, not just security enforcement.

The best platform teams I’ve worked with:

  • Document what they provide and what you still own (like Luis’s Security Responsibilities Matrix)
  • Run workshops showing common vulnerabilities and how the platform prevents them
  • Create runbooks for “I want to do X securely, what do I use?”
  • Build security into code review culture, not just automated tooling

My Answer to David’s Questions

  1. Ownership line: Platform owns infrastructure primitives. Developers own application composition and business logic. Shared responsibility for integration points.

  2. Preventing atrophy: Make security feedback loops fast and educational. Use platform tooling as teaching moments, not just gates.

  3. Interface ownership: Both own it, with clear contracts. The platform should provide primitives (auth, encryption) and developers should use them correctly for their domain.

  4. Security literacy: It’s necessary but not sufficient. Developers need enough expertise to use platform security correctly AND implement application-layer security.

The goal isn’t to eliminate the need for security knowledge—it’s to lower the floor and raise the ceiling.

Platform teams lower the floor by making basic security automatic. But we need to raise the ceiling by helping developers learn deeper security skills over time.

What do you all think? How can platform teams balance enforcement with education?

This thread is hitting on something critical that I’ve been thinking about while scaling our engineering org from 25 to 80+ people.

The real question isn’t “who owns security”—it’s “how do we create clear ownership boundaries so nothing falls through the cracks?”

The Organizational Effectiveness Lens

Here’s what I’ve seen go wrong when the platform-developer security boundary isn’t clearly defined:

Scenario 1: Everyone Assumes Someone Else Owns It

  • Platform team: “We secured the infrastructure, application security is the dev team’s job”
  • Dev team: “The platform team are the security experts, they’ll catch issues”
  • Result: Critical vulnerability ships to production

Scenario 2: Overlap Creates Friction

  • Platform team blocks a deploy because it doesn’t use their blessed authentication library
  • Dev team argues they need custom auth logic for their specific use case
  • Result: Two weeks of back-and-forth, delayed feature, resentment on both sides

Scenario 3: Unclear Escalation Path

  • Developer discovers what might be a security issue
  • Doesn’t know if it’s a platform concern or application concern
  • Result: Security issue sits unaddressed while teams debate ownership

The Framework That Actually Works

After trying a few approaches, we landed on a Shared Responsibility Model with three zones:

Zone 1: Platform-Owned (Infrastructure Security)

Clear platform team ownership:

  • Base image hardening and patching
  • Network segmentation and policies
  • Identity provider integration (SSO, RBAC framework)
  • Secrets storage and rotation infrastructure
  • Compliance logging and audit trails
  • CI/CD security controls

Developer responsibility: Use these correctly. Don’t work around them.

Zone 2: Developer-Owned (Application Security)

Clear dev team ownership:

  • Business logic authorization (resource-level access control)
  • Input validation and output encoding
  • SQL injection, XSS, CSRF prevention in application code
  • Session management and authentication flows
  • PII/sensitive data handling in application logic
  • Application-specific threat modeling

Platform responsibility: Provide secure primitives and clear documentation.

Zone 3: Shared Ownership (Integration Security)

Both teams collaborate:

  • Service-to-service authentication and authorization
  • API security (platform provides gateway/WAF, devs design secure endpoints)
  • Data encryption (platform provides KMS, devs determine what to encrypt)
  • Security incident response (platform provides tooling, devs provide context)

Key insight: Shared ownership requires explicit communication and joint review.

How We Prevent Security Skill Atrophy

David asked about maintaining security skills when the platform handles so much. Here’s our approach:

1. Security Is a Promotion Criterion

  • L3 (mid-level): Can identify common vulnerabilities in code review
  • L4 (senior): Can threat model features and propose security solutions
  • L5 (staff): Can design secure systems and mentor others on security

Making security part of career progression ensures it doesn’t get deprioritized.

2. Security Champions Program

  • Each product team has a security champion (not a separate security person)
  • Monthly security guild meetings to share learnings
  • Platform team works with champions to cascade knowledge
  • Champions become the first escalation point for security questions

3. Incident Retros Are Educational

When we find a security issue (whether caught in review or post-deploy):

  • We do a blameless retro focused on learning
  • We document: What happened? What controls failed? How do we prevent it?
  • We update our review checklists and training materials
  • We share learnings in engineering all-hands

The data: Since implementing this, our security issues caught in code review went up 3x (good!), while post-deploy security bugs dropped 60% (also good!).

Addressing David’s Specific Questions

Let me answer your questions from the organizational design perspective:

1. Where do you draw the ownership line?

Use the framework above, but customize it to your context. Document it clearly. Make it accessible. Update it as you learn.

We maintain a public wiki page called “Security Responsibilities Map” that every new engineer reviews during onboarding.

2. How do you prevent developer security skills from atrophying?

Make security part of the job, not a separate thing:

  • Code reviews include security dimension
  • Design reviews include threat modeling
  • Incidents include security retrospectives
  • Promotions include security competency

3. What happens at the interface?

This is where security teams and developers historically clash.

Our solution: Collaboration rituals

  • Security design reviews for high-risk features (platform + dev + security champion)
  • Monthly “security office hours” where devs can ask questions
  • Shared on-call rotation for security incidents
  • Joint retrospectives when issues occur

4. Is “security literacy” enough?

Maya’s framework is perfect: Awareness < Literacy < Expertise.

We need:

  • Every developer to have security literacy (recognize issues, know when to escalate)
  • Senior+ engineers to have security expertise in their domain
  • Platform/security teams to have deep security expertise across domains

The Future I Want to See

I love what Maya said about “lower the floor and raise the ceiling.”

Platform teams should:

  • Make the secure path the easy path (lower the floor)
  • Provide clear escalation for complex scenarios (prevent blockers)
  • Educate rather than just enforce (raise the ceiling)

Developer teams should:

  • Own application security in their domain (clear accountability)
  • Maintain security fundamentals as core skills (through practice and training)
  • Collaborate with platform on boundary cases (partnership not antagonism)

Together:

  • Co-create the shared responsibility model
  • Update it based on incidents and learnings
  • Measure effectiveness (not just compliance)

What We Actually Measure

We track:

  • Leading indicators: Security issues found in review, threat modeling coverage, security training completion
  • Lagging indicators: Vulnerabilities in production, time to remediate, repeat issues
  • Culture indicators: Developer confidence in security practices (we survey quarterly)

The goal isn’t zero vulnerabilities (unrealistic). It’s continuous improvement in our security posture AND our security culture.

David, to your original question: Yes, platform teams are owning more security infrastructure in 2026. But that should amplify developer security skills, not replace them.

Platform-provided security is a foundation. Developer security knowledge is how you build safely on that foundation.

What are others measuring to ensure security responsibility is clear and effective?