CES 2026 AI PC Analysis: NPUs, TOPS, and What Actually Matters for ML Work

The NPU (Neural Processing Unit) marketing at CES 2026 was overwhelming. Every laptop now has AI performance specs, and I wanted to cut through the noise for people who actually do ML work.

The TOPS Scorecard

Here’s what the major players announced:

Vendor         | Chip                | NPU TOPS | Notes
Intel          | Core Ultra Series 3 | 50       | First Intel 18A, certified for edge
AMD            | Ryzen AI Max+       | 60       | Unified memory, integrated graphics
Qualcomm       | Snapdragon X Plus 2 | ~45      | ARM architecture
HP (EliteBook) | Snapdragon X2 Elite | 85       | “World’s first business notebook” at this level

What TOPS Actually Means

TOPS stands for Tera Operations Per Second (trillions of operations per second). It’s a measure of raw NPU compute throughput.

What it tells you:

  • Peak theoretical performance for specific operations (usually INT8)
  • General NPU capability ballpark

What it doesn’t tell you:

  • Real-world inference performance
  • Which models are supported
  • Memory bandwidth limitations
  • Software stack maturity

A 60 TOPS chip with poor software support will underperform a 45 TOPS chip with mature tooling.
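That tradeoff is easy to put in numbers. A back-of-envelope sketch, with hypothetical utilization figures (real utilization varies by model and runtime):

```python
# Illustrative only: the utilization percentages below are assumptions,
# not measured figures for any specific chip.
def effective_tops(peak_tops: float, utilization: float) -> float:
    """Effective throughput = peak throughput x fraction actually achieved."""
    return peak_tops * utilization

# A 45 TOPS part whose mature software stack reaches 60% utilization...
mature = effective_tops(45, 0.60)    # 27.0 effective TOPS
# ...outruns a 60 TOPS part whose immature stack reaches only 30%.
immature = effective_tops(60, 0.30)  # 18.0 effective TOPS
print(mature, immature)
```

The spec sheet advertises the first argument; what you experience is the product.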

What Actually Matters for ML Work

If you’re doing ML work on a laptop, here’s my prioritized checklist:

1. Memory bandwidth and capacity
LLM inference is memory-bound, not compute-bound. AMD’s unified memory architecture is more important than raw TOPS for running large models locally.
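The memory-bound claim can be made concrete with a rough throughput ceiling: each generated token streams the full weight set through memory once, so decode speed is bounded by bandwidth divided by model size. A sketch, with an illustrative bandwidth figure rather than any vendor’s spec:

```python
def decode_tokens_per_sec(model_params_b: float, bytes_per_param: float,
                          mem_bandwidth_gbs: float) -> float:
    """Rough upper bound on autoregressive decode throughput:
    bandwidth (GB/s) / model weights (GB), ignoring KV cache traffic."""
    model_gb = model_params_b * bytes_per_param
    return mem_bandwidth_gbs / model_gb

# A 7B model quantized to 4 bits (0.5 bytes/param) on ~120 GB/s laptop
# memory (assumed bandwidth, for illustration):
print(round(decode_tokens_per_sec(7, 0.5, 120), 1))  # ~34.3 tokens/s ceiling
```

Notice that NPU TOPS never appears in the formula; doubling compute does nothing once you hit the bandwidth wall.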

2. Software stack support
Can you run PyTorch/TensorFlow/ONNX without conversion headaches? Intel has OpenVINO, AMD has ROCm (improving), Qualcomm has its own stack. None are as mature as CUDA.
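One place the stack fragmentation shows up in practice is execution-provider selection, e.g. in ONNX Runtime, where you list backends in preference order and fall back to CPU. The provider names below are real ONNX Runtime provider names (QNN is Qualcomm’s, OpenVINO is Intel’s), but which ones exist depends on your build; the selection logic itself is sketched here in plain Python:

```python
# Preference order: vendor NPU/GPU backends first, CPU as the safety net.
# Availability of each provider depends on how your runtime was built.
PREFERENCE = [
    "QNNExecutionProvider",       # Qualcomm NPU
    "OpenVINOExecutionProvider",  # Intel NPU/GPU
    "CUDAExecutionProvider",      # NVIDIA discrete GPU
    "CPUExecutionProvider",       # always-available fallback
]

def pick_provider(available: list[str]) -> str:
    """Return the most-preferred provider that this machine offers."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")

print(pick_provider(["CPUExecutionProvider"]))  # CPUExecutionProvider
```

The point: when the vendor backend is missing or immature, everything silently lands on the CPU fallback, and your TOPS rating is irrelevant.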

3. Thermal performance
Sustained performance matters more than peak. A chip that throttles after 30 seconds of inference is useless for real work.

4. Actual model benchmarks
I want to see LLaMA-2 7B inference latency, not image classification TOPS. These are different workloads.
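If you want to run that benchmark yourself, the harness is simple. A sketch, using a stub workload in place of a real model call; the median is reported because it is more robust to thermal-throttling spikes than the mean:

```python
import statistics
import time

def benchmark_decode(generate_token, n_tokens: int = 32, warmup: int = 4) -> float:
    """Median per-token decode latency in seconds for any callable that
    produces one token. `generate_token` stands in for your runtime."""
    for _ in range(warmup):          # let caches and clocks settle
        generate_token()
    latencies = []
    for _ in range(n_tokens):
        start = time.perf_counter()
        generate_token()
        latencies.append(time.perf_counter() - start)
    return statistics.median(latencies)

# Stub workload in place of a real model's token step:
median_s = benchmark_decode(lambda: sum(i * i for i in range(10_000)))
print(f"median per-token latency: {median_s * 1e3:.2f} ms")
```

Run it long enough to expose throttling (minutes, not seconds) and compare sustained numbers across machines.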

The Copilot+ Question

Microsoft’s Copilot+ PC requirements (40+ TOPS NPU) are driving the spec race. But Copilot+ features are still limited:

  • Recall (screenshot search) - delayed over privacy concerns
  • Live Captions with translation
  • Windows Studio Effects for video calls
  • Creative AI features in apps

None of these are compelling enough to drive laptop purchases on their own. The question is whether third-party apps will leverage NPUs effectively.

My Recommendations

For ML engineers/researchers:

  • Wait for real benchmarks on LLM inference, not marketing TOPS
  • Prioritize memory (32GB+) and memory bandwidth
  • Consider AMD Ryzen AI Max+ for the unified memory architecture
  • Don’t abandon your cloud GPU instances yet

For developers experimenting with AI:

  • Any Copilot+ PC will be fine for local experimentation
  • The software ecosystem matters more than hardware specs
  • Focus on models that fit in your memory budget
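A quick way to check that memory budget: weights plus KV cache. A sketch using approximate figures for a LLaMA-2-7B-shaped model (32 layers, 4096 hidden dimension, fp16 cache); adjust the parameters for your model:

```python
def model_memory_gb(params_b: float, bytes_per_param: float,
                    ctx_len: int = 4096, n_layers: int = 32,
                    kv_dim: int = 4096, kv_bytes: int = 2) -> float:
    """Weights + KV cache. The cache holds two tensors (keys and values)
    per layer, one entry per context position. Defaults approximate a
    LLaMA-2-7B-shaped model; treat them as assumptions."""
    weights = params_b * bytes_per_param
    kv_cache = 2 * n_layers * ctx_len * kv_dim * kv_bytes / 1e9
    return weights + kv_cache

# 7B model at 4 bits (0.5 bytes/param) with a 4k context:
need = model_memory_gb(7, 0.5)
print(f"{need:.1f} GB")  # weights 3.5 GB + KV cache ~2.1 GB
```

That total has to fit alongside the OS, browser, and whatever else is open, which is why 32GB+ is the realistic floor for local LLM work.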

For business users:

  • Honestly? Dell is right that consumers aren’t buying for AI features
  • Buy for traditional laptop qualities (display, keyboard, battery)
  • AI features are nice-to-have, not must-have

The Uncomfortable Reality

Local AI on laptops is still early. The hardware is getting better, but:

  • Software stacks are fragmented and immature
  • Serious ML work still needs cloud GPUs or dedicated hardware
  • Consumer AI features don’t justify NPU hardware… yet

We’re in the “installing plumbing” phase. The applications that make this worthwhile are still being built.

What’s your experience running ML models locally? Anyone tried the new NPU-accelerated inference?

Great analysis, Rachel. Adding the enterprise deployment perspective since we’re evaluating AI PC refresh cycles.

The Enterprise Procurement Reality

At our scale, we don’t buy laptops for individual features. We buy fleet standards that balance:

  • Total cost of ownership over 3-4 years
  • IT support complexity
  • Security and compliance requirements
  • User productivity

When vendors pitch us AI PCs, the conversation goes like this:

Vendor: “This has 60 TOPS NPU performance!”
Us: “What does that enable that justifies the price premium?”
Vendor: “Copilot+ features and local AI acceleration!”
Us: “Show us the business case.”

So far, nobody has shown us a compelling business case for NPU-heavy machines for typical enterprise users.

Where AI PCs Actually Make Sense in Enterprise

There are specific roles where local AI acceleration has clear value:

  1. Developers - Running models locally, testing inference, prototyping
  2. Data scientists - Experimentation and iteration on local hardware
  3. Creative professionals - Adobe’s AI features, video editing
  4. Field workers - Offline AI capabilities in disconnected environments

But these are maybe 10-15% of our workforce. The other 85-90% are running Office, email, and web apps. They don’t need 60 TOPS.

The Real Procurement Question

We’re asking vendors:

  • What’s the incremental cost for NPU capability?
  • What’s the battery life impact?
  • What IT management changes are required?
  • What security considerations are there for local AI?

Most vendors can’t answer these clearly yet. The technology is ahead of the ecosystem.

My Recommendation to Other Enterprise Buyers

  1. Don’t refresh your entire fleet for AI features
  2. Pilot AI PCs with roles that have clear use cases
  3. Wait 12-18 months for software ecosystem maturity
  4. Focus on memory and SSD upgrades - those provide immediate benefits

The NPU will become table stakes eventually, but we’re not there yet.

Adding the CTO provisioning perspective here because we’re actively making these decisions.

The Machine Tiers Approach

Rather than one-size-fits-all, we’re moving to tiered machine standards:

Tier 1: Standard Knowledge Worker

  • Office, browser, video conferencing
  • Current-gen CPU, 16GB RAM
  • No special NPU requirements
  • This is 70% of our organization

Tier 2: Technical Staff

  • Developers, analysts, power users
  • 32GB+ RAM, good GPU
  • Copilot+ capable but not NPU-optimized
  • About 20% of org

Tier 3: AI-Intensive Roles

  • ML engineers, data scientists, AI researchers
  • 64GB+ RAM, high-end NPU or discrete GPU
  • AMD Ryzen AI Max+ or similar
  • About 10% of org

The key insight: NPU investment should follow actual AI workload needs, not marketing hype.

What I’m Telling My Leadership Team

  1. AI PC features will become standard - Every laptop will have NPUs within 2-3 years. This isn’t a competitive advantage, it’s a commodity.

  2. The real investment is in AI applications - The hardware is table stakes. The value comes from how we use AI in our workflows.

  3. Don’t over-provision today - NPU hardware is improving rapidly. Machines bought in 2026 will be outdated by 2027. Buy what you need, refresh strategically.

  4. Focus on software enablement - Better to have modest hardware with good AI software than powerful NPUs with no applications that use them.

The Lenovo Qira Factor

Lenovo’s announcement of Qira - a “personal AI super agent” that works across devices - is interesting. This is where the value might actually emerge:

  • AI that understands your context across all your devices
  • Automated workflows that span applications
  • Personal assistants that actually work

But this requires mature software, not just hardware TOPS.

My Prediction

By CES 2027, NPUs will be standard in every laptop and the conversation will shift from “do you have AI?” to “what AI applications do you support?”

The current TOPS race is the new megahertz race - a metric that will become irrelevant as the market matures.

Adding the creative/design perspective since we’re often cited as the reason companies need powerful machines.

Where Local AI Actually Helps Design Work

I’ve been using AI features in creative tools for the past year. Here’s what actually works:

Generative Fill in Photoshop - Genuinely useful for quick mockups and ideation. Works better with more VRAM/memory.

AI Upscaling - Super useful for working with lower-resolution source assets. NPU acceleration makes this faster.

Background Removal - Works great, saves tons of time. This runs on-device now.

Generative AI for Wireframes - Mixed results. Good for brainstorming, not for final work.

What Doesn’t Work (Yet)

AI-generated UI components - The output quality isn’t production-ready. Still need human designers.

Auto-layout suggestions - Tools like Figma’s AI features are helpful but not transformative.

“AI” design review - Gimmicky. Human feedback is still essential.

The Machine Spec Question

For my team’s design work:

  • 32GB RAM is the sweet spot (lets you run AI features without closing other apps)
  • Good GPU matters more than NPU for most Adobe workflows
  • Fast SSD matters for working with large files
  • NPU is nice-to-have, not essential

I agree with Michelle’s tiering approach. Most designers fit in Tier 2, not Tier 3. We benefit from AI features, but we’re not running LLMs locally.

The Bigger Design Question

For me, the interesting AI discussion isn’t about NPU specs - it’s about how AI changes design workflows:

  • Do AI tools help us work faster? (Sometimes yes)
  • Do they make us better designers? (Not really)
  • Do they change what we design? (This is the interesting question)

If AI handles mechanical tasks (background removal, asset generation, resizing), designers can focus more on strategic and creative work. That’s the promise. Whether it materializes depends on software, not hardware TOPS.