Low-code hit $44.5B this year. As someone who builds with Webflow daily, I'm torn—are we gaining leverage or losing our edge?

I’ve been thinking a lot about this lately, and I need some perspective from this community.

The low-code market just hit $44.5 billion this year. Billion. And according to Gartner, 70% of new enterprise applications will be built with low-code or no-code by 2026. We’re already there—this isn’t some distant future, it’s happening right now.

My complicated relationship with low-code

As a design systems lead, I live in this weird intersection. I build component libraries for engineers, but I also use Webflow for prototypes and side projects. When my startup was still alive (RIP 2024), we built our first MVP entirely in Webflow + Airtable. Shipped in 3 weeks. Felt like magic.

Then we tried to scale.

We hit every wall you can imagine. Performance issues. Integration nightmares. That moment when you realize the abstraction is beautiful until it breaks, and then you’re staring at generated code you don’t understand, frantically Googling error messages that lead nowhere.

We eventually rebuilt everything from scratch. The rebuild took 4 months. The startup died 6 months later (for unrelated reasons, but still).

The promise vs. the reality

The numbers are wild:

  • Development teams using no-code are 2.7× faster
  • Pipeline development time reduced 60-70% compared to traditional approaches
  • Application development can be up to 90% faster

But here’s what I’m seeing in the real world: Engineers on my team can ship features incredibly fast with our design system + low-code tools, but when something breaks at a fundamental level, they freeze. They don’t know how to debug below the abstraction layer.

And I get it—I’m the same way sometimes. I can build beautiful experiences in Webflow, but ask me to optimize a database query and I’m lost.

The trade-off nobody talks about

Every article about low-code celebrates the democratization. “Citizen developers outnumber professional developers 4:1 now!” Great. But what are we actually democratizing?

I keep coming back to something I read: “Early developers blindly using low-code or no-code tools without learning the fundamental principles of writing code will inevitably hit a ceiling.”

That ceiling is real. I’ve hit it. My failed startup hit it hard.

But here’s the counter-argument I keep wrestling with: Haven’t we always abstracted? Nobody writes assembly anymore. We use frameworks, libraries, ORMs, build tools. Where do we draw the line between “helpful abstraction” and “you’re not learning fundamentals”?

My actual concern

It’s not that low-code exists. It’s not even that people are using it. It’s that I’m watching a generation of builders learn to use abstractions without understanding how to build them.

In design systems work, I’ve learned that creating good abstractions requires deep understanding of the underlying system. You can’t build a good component API unless you understand React’s render cycle. You can’t create flexible design tokens unless you understand the CSS cascade and specificity.

But if 80% of low-code users are coming from non-IT departments (another Gartner stat), and citizen developers outnumber professionals 4:1, who’s going to build the next generation of abstractions?

Who’s going to debug them when they break?

So here’s my question

Are we gaining leverage or losing our edge?

Is this like moving from assembly to C to Python—natural evolution where each generation builds on better abstractions?

Or are we creating a dependency where fewer and fewer people understand the underlying systems, until something breaks and nobody knows how to fix it?

I genuinely don’t know. And that uncertainty keeps me up at night.

What are you seeing in your work? Where do you draw the line?

Maya, this hits close to home. I’m dealing with this exact tension managing 40+ engineers across distributed teams.

What I’m seeing on the ground

Your observation about engineers freezing when abstractions break? That’s not just anecdotal—it’s becoming a pattern I’m actively trying to address in my organization.

We recently had an incident where a low-code integration platform (which will remain nameless) had a subtle configuration bug that caused data inconsistencies in our financial reconciliation system. The engineer who built it was brilliant at configuring workflows through the UI and shipped the feature in a week instead of the projected month.

But when things broke, they couldn’t trace the actual data flow. Didn’t understand the underlying API contracts. Couldn’t read the logs effectively because they’d never had to debug at that level.

The senior engineer who fixed it took 30 minutes once they understood the problem. But it took us 6 hours to escalate because we didn’t realize it required that level of expertise.

The trade-off is real, but it’s not new

Here’s where I partially disagree with the framing: This isn’t actually a new problem. It’s the eternal tension between speed of delivery and depth of understanding for maintenance.

20 years ago, we had the same debate about ORMs. “Developers who only know ActiveRecord can’t optimize SQL queries!” True then, true now.

15 years ago: “Developers who only know jQuery don’t understand the DOM!” Also true.

10 years ago: “Developers who only use React don’t understand how browsers render!” Still true!

The abstraction layers keep stacking. Low-code is just the newest layer.

My pragmatic approach

After wrestling with this for the past year, here’s the framework I use with my teams:

Low-code for:

  • MVPs and proof-of-concepts
  • Internal tools with limited scale requirements
  • Well-defined problems with clear boundaries
  • Situations where time-to-market is the primary constraint

Traditional code for:

  • Core business logic and differentiation
  • Systems requiring performance optimization
  • Anything customer-facing at scale
  • Features where we need deep observability and debugging

The key differentiator: How mission-critical is it, and what’s the cost of failure?
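If it helps to see that framework as pseudologic, here’s a rough Python sketch. The predicate names are my own simplifications, not an actual tool we run; the real call is a judgment, not a function:

```python
def choose_stack(mission_critical: bool,
                 needs_perf_or_scale: bool,
                 customer_facing_at_scale: bool) -> str:
    """High cost of failure pushes toward traditional code;
    everything else defaults to low-code for speed."""
    if mission_critical or needs_perf_or_scale or customer_facing_at_scale:
        return "traditional code"
    return "low-code"

# Internal tool, limited scale, time-to-market is the main constraint:
print(choose_stack(False, False, False))  # low-code
# Financial reconciliation (our incident above) is mission-critical:
print(choose_stack(True, False, False))   # traditional code
```

The point of writing it this way: the default is low-code, and you need a positive, named reason to pay the traditional-code cost.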

The skill development problem

But you’re right about the deeper concern. I’m seeing a bifurcation in my engineering organization:

  • Abstraction builders: Senior engineers who understand systems deeply, can debug anything, build platforms
  • Abstraction users: Talented engineers who ship fast with tools but struggle with fundamentals

The problem? We’re not growing people from the second group into the first as effectively as we used to.

Traditional career path: Start by fighting with pointers and memory management (okay, maybe we don’t need that), build up through increasing abstraction layers, and eventually understand the full stack deeply enough to choose the right tool for each problem.

New path: Start at high abstraction and ship value immediately, but you’re maybe never forced to learn the lower levels because the abstraction is good enough 90% of the time.

What I’m trying

My experiment: Every engineer on my team, regardless of seniority, must spend 20% of their time on “one layer down” work.

If you primarily work in low-code tools, 20% of your time should be writing actual code.

If you primarily write application code, 20% should be infrastructure or performance optimization.

If you’re primarily in infrastructure, 20% should be contributing to open source or understanding hardware.

Too early to tell if it works. But the hypothesis is that understanding the layer below makes you better at choosing abstractions above.

Your question about who builds the next abstractions

This keeps me up at night too. If everyone is consuming abstractions and fewer people are building them, we’re creating a dangerous dependency.

But I think the answer is: We need both, deliberately.

We need the 80% of people who can ship value quickly with good-enough tools.

And we need the 20% who understand systems deeply enough to build, maintain, and evolve those tools.

The key is making sure we’re intentionally developing that 20%, not accidentally losing them because low-code is “good enough.”

What are others seeing? Is anyone else trying structured approaches to preserve fundamental skills while gaining low-code speed benefits?

Coming at this from the product side, and I’m going to be a bit contrarian here.

The business reality nobody wants to talk about

Maya, you mentioned your startup died 6 months after the 4-month rebuild. I’m going to ask a hard question: Would it have lived if you’d stayed on the Webflow + Airtable stack?

Because from a product lens, the trade-off isn’t “perfect code vs technical debt.” It’s “shipping fast enough to find product-market fit vs dying before you get there.”

I’ve watched three pivots at my current startup. If we’d spent 4 months on each rebuild instead of 3 weeks on each MVP, we’d be dead. We wouldn’t have the funding. We wouldn’t have learned what customers actually wanted. We’d be perfectly engineered into bankruptcy.

The 60-70% faster pipeline matters more than you think

You mentioned pipeline development time dropping 60-70% with low-code. Let me translate that into business terms:

Scenario A: Traditional development

  • Build feature: 4 weeks
  • Launch, get customer feedback: 1 week
  • Iterate based on feedback: 4 weeks
  • Total: 9 weeks to second iteration

Scenario B: Low-code development

  • Build feature: 1.5 weeks
  • Launch, get customer feedback: 1 week
  • Iterate based on feedback: 1.5 weeks
  • Total: 4 weeks to second iteration

That’s not just 2× faster. That’s 2 full feedback cycles vs 1 in the same timeframe.

For early-stage products, that difference is existential. You’re not optimizing for code quality. You’re optimizing for learning speed.
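To make the arithmetic explicit, here’s a toy Python helper (the name is mine, purely illustrative): time to the n-th shipped iteration is n build phases plus n - 1 feedback rounds.

```python
def weeks_to_iteration(n: int, build_weeks: float, feedback_weeks: float) -> float:
    """Weeks until the n-th iteration ships: n builds plus n - 1 feedback rounds."""
    return n * build_weeks + (n - 1) * feedback_weeks

# Scenario A (traditional): 4-week builds, 1-week feedback
print(weeks_to_iteration(2, 4, 1))    # 9 weeks to the second iteration
# Scenario B (low-code): 1.5-week builds, 1-week feedback
print(weeks_to_iteration(2, 1.5, 1))  # 4.0 weeks to the second iteration
```

In the same 9 weeks, the low-code team has already reached its third iteration by week 6.5, which is the extra feedback cycle I’m pointing at.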

When “technical debt” is actually technical investment

Alex is right that every abstraction has limits. But Luis’s framework misses something critical: Not all features are worth custom code.

Here’s my product prioritization lens:

Low-code for:

  • Features that might change dramatically based on customer feedback
  • Internal tools that save time but aren’t customer-facing
  • Features where “good enough” is actually good enough
  • Anything that helps us learn faster

Custom code for:

  • Core value proposition and differentiation
  • Features that hit low-code platform limits (you’ll know when)
  • Anything customer-facing that requires specific UX or performance
  • Systems that will scale with the business

The key insight: Most features don’t need to be perfect. They need to exist.

The “rebuild everything” trap

Maya, I think your startup’s story is actually the opposite lesson from what you took from it.

You spent 4 months rebuilding. The startup died 6 months later. What if you’d spent those 4 months iterating on product-market fit instead?

What if the Webflow limitations weren’t actually blocking your growth, but you convinced yourself they were because engineers are trained to see technical debt as bad?

I’ve seen this pattern so many times: Engineering teams rebuild perfectly good MVPs because they’re “not scalable,” then the company dies before they ever need that scale.

The gatekeeping point

Alex mentioned gatekeeping, and I think that’s real. From a product perspective, the “you need to understand fundamentals” argument often translates to: “You need engineers for everything.”

But do you?

If our marketing team can build their own dashboards in a low-code BI tool, should I really require them to file Jira tickets and wait 2 weeks for engineering to build it?

If our operations team can automate workflows with Zapier, should I make them dependent on engineering’s roadmap?

Low-code democratizes building. That’s not a bug, that’s the feature.

My actual concern is different from yours

Luis worries about engineers who can’t debug below the abstraction. Fair.

Maya worries about losing fundamentals. I get it.

My concern? Teams that over-engineer too early and die before they find product-market fit.

I’d rather have a “hacky” Webflow + Airtable MVP that gets customer feedback in 3 weeks than a beautifully architected custom platform that takes 4 months and gets the same feedback.

The question isn’t “are we losing fundamentals?” It’s: “When does investing in fundamentals actually create business value?”

When to actually care about fundamentals

Here’s when I push engineering to rebuild with “proper” code:

  1. We’ve validated product-market fit and now need to scale
  2. The low-code platform is provably limiting growth (not theoretically—actually costing us customers or money)
  3. The technical debt is creating operational burden that slows iteration (not just “it’s not elegant”)
  4. We have runway to invest in the rebuild without killing other priorities

Before those conditions? Ship with whatever gets customer value fastest.

After those conditions? Invest in the right abstractions for your specific scale.

The uncomfortable truth

Low-code platforms exist because most software doesn’t need to be perfectly engineered.

Most features will be deprecated before they need to scale. Most MVPs will pivot before they need optimization. Most internal tools will never have more than 50 users.

The $44.5B market isn’t an accident. It’s the market telling us: “For most problems, good enough is actually good enough. Stop over-engineering.”

That doesn’t mean fundamentals don’t matter. It means: Choose when to invest in fundamentals based on business value, not engineering aesthetics.

Sometimes the right answer is low-code. Sometimes it’s custom code. The skill is knowing which, and when to transition.

But if you’re an early-stage startup dying because you spent 4 months rebuilding instead of finding customers? You optimized for the wrong thing.

Wow, this thread went exactly where I needed it to. Thank you all for the perspectives—this is why I love this community.

What I’m taking away

Alex’s reframing hit me hard: “The real question isn’t ‘are we losing fundamentals?’ It’s: ‘Is this abstraction well-designed for the problem I’m trying to solve?’”

That shifts the conversation from defensive (what are we losing?) to strategic (what are we choosing?). I needed that.

Luis’s “20% one layer down” rule is brilliant. I’m already thinking about how to apply this to my design systems work. Maybe: If you primarily work in Figma, spend 20% understanding component implementation. If you implement components, spend 20% understanding browser rendering.

David’s question broke me a little: “Would your startup have lived if you’d stayed on Webflow + Airtable?”

Honest answer? I don’t know. But you’re right that I never tested that hypothesis. We assumed we needed the rebuild. We never asked if the technical limitations were actually blocking growth or just offending our engineering sensibilities.

That’s a hard pill to swallow.

But I still have concerns

The gatekeeping point—I hear it, and I’m trying to examine my own biases here. I don’t want to be the person saying “you must suffer through pointers and manual memory management to be a real engineer.”

But here’s what still worries me:

When David says “most features will be deprecated before they need to scale,” he’s right. But someone still needs to know when you’ve hit the limits. Someone needs to recognize: “Okay, this abstraction worked for validation, but now we need to rebuild.”

And if 80% of low-code users are from non-IT departments, who’s making that call?

The citizen developer question

I keep coming back to this: Citizen developers outnumber professional developers 4:1.

That’s amazing for democratization. But it also means most people building software have never experienced that moment when an abstraction breaks and you need to understand the layer below.

Alex says “when you hit the ceiling, you drop down a layer.” But what if you don’t know how? What if you’ve never had to?

Luis’s bifurcation is real: Abstraction builders vs abstraction users.

My worry isn’t that we have both. It’s that we’re not intentionally creating pathways from user to builder for people who want to grow.

What I’m actually asking for

I think what I’m really asking is: How do we design for optionality?

  • Start with low-code for speed ✓
  • Use it as long as it works ✓
  • But build the team’s capability to recognize when it’s failing ✓✓✓
  • And have a plan for what happens when you hit the ceiling ✓

That last piece—the exit strategy—is what I didn’t have. We hit the ceiling and panicked into a 4-month rebuild that might not have been necessary.

If I could go back, I’d ask:

  1. What exactly is the low-code platform limiting?
  2. Can we work around it without a full rebuild?
  3. What’s the minimum viable migration that unblocks us?
  4. What would we learn from customers in 4 months that’s worth more than a perfect architecture?

David’s framing: “When does investing in fundamentals actually create business value?” That’s the right question.

Design systems parallel

This whole conversation mirrors something I’m dealing with in design systems.

We build components (abstractions) for product teams (users). The goal is to let them ship fast without reinventing patterns.

But the best product teams also understand when to break out of the system. They know the constraints well enough to recognize: “This component works for 90% of cases, but our use case is the 10%.”

The worst teams either:

  • Never use the system (reinvent everything, slow)
  • Blindly use the system even when wrong (ship broken experiences, fast but wrong)

The best teams use abstractions strategically. That’s the skill.

My updated mental model

Low-code is a tool for learning and iteration speed.

The skill isn’t “knowing fundamentals” in the abstract. It’s:

  1. Knowing when the abstraction is good enough (don’t over-engineer)
  2. Recognizing when you’ve hit the ceiling (don’t under-engineer)
  3. Having the skills to drop down a layer when needed (build capability)
  4. Making that call based on business value, not engineering pride (product thinking)

That’s more nuanced than “low-code bad” or “fundamentals always matter.”

One last vulnerable admission

Reading David’s take on my startup… it stings because it might be right.

We killed ourselves rebuilding instead of iterating. We assumed technical debt was our problem when maybe product-market fit was our problem.

I’ll never know for sure. But that’s a lesson I won’t forget.

Thanks for this discussion. I came here worried about abstractions. I’m leaving thinking about optionality, business value, and the difference between engineering pride and engineering judgment.

That’s the real fundamental that matters.