
21 posts tagged with "context-engineering"


Tool Output Compression: The Injection Decision That Shapes Context Quality

Tian Pan · Software Engineer · 10 min read

Your agent calls a database tool. The query returns 8,000 tokens of raw JSON — nested objects, null fields, pagination metadata, and a timestamp on every row. Your agent needs three fields from that response. You just paid for 7,900 tokens of noise, and you injected all of them into context where they'll compete for attention against the actual task.

This is the tool output injection problem, and handling it well is the most underrated architectural decision in agent design. Most teams discover it the hard way: the demo works, production degrades, and nobody can explain why the model started hedging on questions it used to answer confidently.

Compaction Traps: Why Long-Running Agents Forget What They Already Tried

Tian Pan · Software Engineer · 9 min read

An agent calls a file-writing tool. The tool fails with a permission error. The agent records this, moves on to a different approach, and eventually runs long enough that the runtime triggers context compaction. The summary reads: "the agent has been working on writing output files." What it drops: that the permission error ever happened, and why the original approach was abandoned. Three hundred tokens later, the agent tries the same write again.

This pattern — call it the compaction trap — is one of the most persistent reliability failures in production agent systems. It's not a model bug. It's an architecture mismatch between how compaction works and what agents actually need to stay coherent across long sessions.

Long-Session Context Degradation: How Multi-Turn Conversations Go Stale

Tian Pan · Software Engineer · 8 min read

The first time a user's 80-turn support conversation suddenly started contradicting advice given 60 turns ago, the team blamed a bug. There was no bug. The model was simply lost. Across all major frontier models, multi-turn conversations show an average 39% performance drop compared to single-turn interactions on the same tasks. Most teams never measure this. They assume context windows are roughly as powerful as their token limit suggests, and they build products accordingly.

That assumption is quietly wrong. Long sessions don't just get slower or more expensive — they get unreliable in ways that are nearly impossible to notice until users are already frustrated.

Coding Agents in the Monorepo: Why Context Windows and 50-Service Repos Don't Mix

Tian Pan · Software Engineer · 9 min read

Here's a failure mode that happens silently: you ask a coding agent to update the authentication service's token refresh endpoint. The agent produces clean-looking code — confident, well-commented, type-safe. It also calls a method signature that was renamed three months ago in a shared library three directories up. The tests for that endpoint pass because the mock still uses the old signature. The bug surfaces in staging when the real library gets pulled in.

This isn't a hallucination in the abstract sense. The model knew about that method — it existed somewhere in the training data or was briefly visible in context. The problem is architectural: the agent never had access to the current version of the interface it was calling.

Context Poisoning in Long-Running AI Agents

Tian Pan · Software Engineer · 9 min read

Your agent completes step three of a twelve-step workflow and confidently reports that the target API returned a 200 status. It didn't — that result was from step one, still sitting in the context window. By step nine, the agent has made four downstream calls based on a fact that was never true. The workflow "succeeds." No error is logged.

This is context poisoning: not a security attack, but a reliability failure mode where the agent's own accumulated context becomes a source of wrong information. As agents run longer, interact with more tools, and manage more state, the probability of this failure climbs sharply. And unlike crashes or exceptions, context poisoning is invisible to standard monitoring.

The 10x Prompt Engineer Myth: Why System Design Beats Prompt Wordsmithing

Tian Pan · Software Engineer · 8 min read

There is a persistent belief in the AI engineering world that the difference between a mediocre LLM application and a great one comes down to prompt craftsmanship. Teams hire "prompt engineers," run dozens of A/B tests on phrasing, and spend weeks agonizing over whether "You must" outperforms "Please ensure." Meanwhile, the retrieval pipeline feeds garbage context, there is no output validation, and the error handling strategy is "hope the model gets it right."

The data tells a different story. The first five hours of prompt work on a typical LLM application yield roughly a 35% improvement. The next twenty hours deliver 5%. The next forty hours? About 1%. Teams that recognize this curve early and redirect effort into system design consistently outperform teams that keep polishing prompts.

Token Budget as Architecture Constraint: Designing Agents That Work Under Hard Ceilings

Tian Pan · Software Engineer · 8 min read

Your agent works flawlessly in development. It reasons through multi-step tasks, calls tools confidently, and produces polished output. Then you set a cost cap of $0.50 per request, and it falls apart. Not gracefully — catastrophically. It truncates its own reasoning mid-thought, forgets tool results from three steps ago, and confidently delivers wrong answers built on context it silently lost.

This is the gap between abundance-designed agents and production-constrained ones. Most agent architectures are prototyped with unlimited token budgets — long system prompts, verbose tool schemas, full document retrieval, uncompacted conversation history. When you introduce hard ceilings (cost caps, context limits, latency requirements), these agents don't degrade gracefully. They break in ways that are difficult to detect and expensive to debug.

The Context Window as IDE: Why AI Coding Agents Succeed or Fail Based on What They Can See

Tian Pan · Software Engineer · 10 min read

The real differentiator in AI coding tools is no longer model quality — it's what the model can see. Two developers using the same underlying LLM will get wildly different results depending on how their tooling retrieves, ranks, and packs code context into the model's working memory. The context window has become the IDE, and most teams don't realize their agent is working blind.

This matters because practitioners routinely blame the model when their coding agent produces hallucinated function calls, ignores existing utilities, or generates code that contradicts project conventions. In most cases, the model never saw the relevant code. The retrieval pipeline failed, not the reasoning.

Agentic Engineering: Build Your Own Software Pokémon Army

Tian Pan · Software Engineer · 18 min read

How one person replaced a 15-person engineering team with autonomous AI agents — and the spectacular failures along the way.

This material was prepared for the CIVE 7397 Guest Lecture at the University of Houston. Many thanks to Prof. Ruda Zhang for the invitation, and to Hai Lu for several of the ideas that shaped this talk.

I didn't study CS in college. I was a management major in Beijing. Somehow I ended up at Yale for a CS master's, then at Uber building systems for 90 million users, then at Brex and Airbnb, and eventually started my own company.

I'm telling you this because the rules of who can build software are being rewritten right now — and your background might be more of an advantage than you think.

Act I: The Solo Grind

150 Lines Per Day Is the Ceiling

Every engineer starts the same way. Blank editor. Blinking cursor. A ticket that says "Build a subscription billing system."

A senior engineer — someone with ten years of experience — produces about 100 to 150 lines of production code per day. The rest is meetings, code reviews, debugging, context-switching. That's the ceiling.

The "10x engineer" was the myth we all chased. But even a 10x engineer was still one person. Productivity scaled linearly with headcount. Want to ship faster? Hire more people — each one takes three to six months to onboard.

And the worst part? Knowledge lived in people's heads. Why was that system designed that way? Ask Chen. Oh, Chen left. Good luck.

The Real Bottleneck: Brain Bandwidth

At Uber, the hardest part of any task was never writing the code. It was the research phase — figuring out where and what to change.

When the codebase is massive, the docs are gone, and the previous owner quit, you spend 80% of your time building a mental model of someone else's system. The bottleneck was always people — their availability, their context window, their bus factor. Not compute. Not ideas.

And then something showed up at the workshop door.

Copilot, Cursor, and the Rare Candy Effect

You discover Copilot. Then Cursor. Then Windsurf. Press Tab and entire functions materialize. It's like someone handed you a Rare Candy after years of manual grinding.

The gains are real — we have field studies now:

  • Microsoft & Accenture ran a randomized trial across 4,000 developers: 26% more merged PRs.
  • Cognition's Devin completes file migrations 10x faster than humans.
  • Junior developers saw +35% productivity gains; seniors got +8 to 16%.

But even with these gains, the ceiling is still you. You're faster at cutting wood, but you haven't built a factory. You're still the one reading specs, making decisions, debugging at 2am.

Rare Candy buffs you. It doesn't give you a Pokémon. And the only way to break through the ceiling is to remove yourself from the production line entirely.

Act II: Catching Your First Pokémon

From Typing Code to Writing Specs

This is the moment everything changes — and it's deceptively simple.

You write a spec. Not code — a spec. Acceptance criteria, constraints, edge cases. You hand it to an autonomous agent like Claude Code. You walk away.

The agent reads your codebase, plans its approach, writes code, runs tests, reads the errors, fixes them, loops. You come back to a pull request. You just caught your first Pokémon.
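To make the handoff concrete, here's a minimal sketch, assuming Claude Code's headless print mode (`claude -p`); the task, endpoint, and file paths are all illustrative:

```bash
# Write the spec first: acceptance criteria, constraints, and how the
# agent must verify its own work. Everything below is illustrative.
cat > spec.md <<'EOF'
# Task: add rate limiting to POST /api/subscribe
Acceptance criteria:
- Return 429 after 10 requests/minute per IP
- All existing tests pass; new tests cover the 429 path
Constraints:
- Reuse the middleware pattern in src/middleware/
Self-verification:
- curl the endpoint 11 times and show the 429 response
EOF

# Hand it off and walk away; come back to a branch ready for review.
claude -p "Implement spec.md. Debug it yourself: run the tests, curl
the endpoint, and stop only when every check in the spec passes."
```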

This is fundamentally different from Cursor or Copilot. Those are power tools — they boost your output. An autonomous agent is a separate worker. The critical skill shifts from prompt engineering to context engineering: designing the world your Pokémon operates in.

My Non-Negotiable Workflow

I always start in Plan Mode. The agent analyzes the codebase and proposes an approach. I review the plan, adjust it, then say "execute."

One rule I never break: "You debug it yourself. I only want results." The agent has to curl the API, read the logs, and write tests to prove its own work. If it can't verify itself, the spec isn't good enough.

Why Context Engineering Beats Prompt Engineering

You've caught your first Pokémon. How do you make it good?

Anthropic's own guidance says the quality of an agent depends less on the model itself and more on how its context is structured and managed. The model is the engine. The context — specs, codebase structure, feedback signals — is the skill book. What you teach it determines how well it fights.

Three inputs matter:

  • Specs. Write clear specifications with acceptance criteria before the agent writes a single line of code. A vague spec gets vague code. A precise spec gets working software.
  • Codebase. Structure your repo so the agent can navigate it — clear file naming, clean module boundaries, up-to-date docs. The agent reads your code the same way a new hire would on day one. If a new hire would be lost, your agent will be lost.
  • Feedback signals. Tests, type checkers, linters. Without feedback, your Pokémon will confidently produce garbage and tell you everything's fine. We've all had coworkers like that.
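The third input is the cheapest to wire up. A minimal sketch of a feedback gate, assuming a typical yarn-based repo (swap in your own commands):

```bash
#!/usr/bin/env bash
# verify.sh: the feedback signals the agent must pass after every
# change. Any non-zero exit tells it the work is not done yet.
set -euo pipefail

yarn tsc --noEmit   # type checker: catches hallucinated signatures
yarn lint           # linter: enforces project conventions
yarn test           # tests: proves behavior, not just compilation

echo "all checks green"
```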

Defects at Scale: Building the Inspection Line

Your Pokémon wrote code. It compiles. You feel great.

Then you run the tests. Half fail. The agent hallucinated an API endpoint that doesn't exist, used a deprecated library, and introduced a subtle race condition.

This is the central challenge: a Pokémon without quality control manufactures defects at scale. The most important thing you build is not the production system — it's the inspection line.

The agent operates in a tight loop: write → test → fail → read error → fix → repeat, until every check passes green. The magic isn't perfect output on the first try — the agent rarely delivers that. The magic is that the feedback loop runs in seconds, not hours.
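Claude Code runs that loop internally, but the same shape works as an outer harness too. A sketch, reusing the illustrative `verify.sh` gate and headless `claude -p` from above:

```bash
#!/usr/bin/env bash
# Feed failing output back to the agent until the gate passes, with a
# retry cap so a stuck agent fails loudly instead of looping forever.
for attempt in 1 2 3 4 5; do
  if errors=$(./verify.sh 2>&1); then
    echo "green after $attempt attempt(s)"
    exit 0
  fi
  claude -p "The checks failed with the output below. Read the errors,
fix the code, and rerun ./verify.sh before you finish.

$errors"
done
echo "still red after 5 attempts; a human should look" >&2
exit 1
```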

My inspection line in practice:

  • Backend: the agent curls the actual API and verifies responses.
  • Frontend: Playwright MCP — the agent opens a real browser, navigates the UI, clicks buttons, and verifies rendered output.
  • Every task: the agent writes its own tests as a deliverable.

The teams getting real value from agents aren't the ones with the best models. They're the ones with the tightest inspection lines.

From One Pokémon to a Full Party

One Pokémon handles one bounded task. Real software projects have many moving parts. You need a party — and for a party to work, you need shared tooling and a shared playbook.

MCP (Model Context Protocol) is the item bag. Any Pokémon can reach in and grab any tool, any API, any data source. It gives your agents hands.

CLAUDE.md and custom skills are the trainer's manual. Custom slash commands — /today, /blog, /ci — encode repeatable combo moves. CLAUDE.md is the rulebook every agent reads on startup: same context, same standards, no babysitting required.
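Concretely, Claude Code picks up project-level slash commands from markdown files under `.claude/commands/`; creating one is just writing a file. The command body below is an illustrative sketch, not a full production command:

```bash
mkdir -p .claude/commands
cat > .claude/commands/today.md <<'EOF'
Review TODO.md and the open pull requests in this repo.
List what is in progress and what is blocked, then propose
the top three priorities for today, with one-line reasons.
EOF
# From now on, /today is available in any session in this repo.
```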

As Anthropic advises: find the simplest solution possible, and only increase complexity when needed.

Your party is assembled. Everything is running. It looks beautiful on the whiteboard. Then it breaks.

The Abyss: When Everything Breaks

The Silent Failure That Shipped

The most dangerous failure isn't the loud one — it's the silent one.

I had a coding agent make changes that passed all existing tests, looked correct in review, and shipped. Days later, I discovered it had broken a subtle invariant that no test covered. No error logs. No crash. Just wrong behavior that took days to trace back to the agent's commit.

That's the nightmare scenario: a Pokémon that produces defective work that passes inspection. Your inspection line has blind spots, and the agent will find every single one.

The Research Confirms It

This isn't just my experience. A NeurIPS 2025 study analyzed 1,600 execution traces across seven multi-agent frameworks and found:

  • Failure rates of 41% to 87% across frameworks.
  • 14 distinct failure modes identified.
  • Coordination breakdowns were the #1 category at 36.9% of all failures — agents losing context during handoffs, contradicting each other, going in circles.

Why Adding More Agents Makes It Worse

Your instinct after a wipeout: "I need more agents." That instinct is wrong.

Google DeepMind and MIT tested this rigorously — 180 configurations, 5 architectures, 3 model families:

  • A centralized orchestrator improved performance by 80.9% on parallelizable tasks.
  • But all multi-agent setups degraded performance by 39–70% on sequential work.
  • Gains plateau at 4 agents. Beyond that, you're paying coordination tax with no return.
  • Uncoordinated agents amplify errors 17.2x. Even with a coordinator: 4.4x.

The lesson: don't add Pokémon. Add the right Pokémon.

Act III: Rebuilding Smarter

Four Principles That Survived Every Explosion

The naive optimism is gone. In its place: hard-won knowledge.

The SWE-Bench leaderboard has evaluated 80 unique approaches to agentic coding, and no single architecture consistently wins. But four principles held up:

  1. Inspection over production. Your team wiped because unchecked errors cascaded. The fix isn't stronger Pokémon — it's better inspection gates.
  2. Context beats model. Agents didn't fail because models were weak. They failed because they lacked context. Better skill books beat better engines every time.
  3. Start with one. Gains plateau at four agents (per DeepMind/MIT). Start simple. Add agents only when forced to.
  4. Co-learn with AI. Don't just assign tasks — ask agents to audit your codebase, research best practices, and update CLAUDE.md. Every conversation makes the next one better.

A practical note on costs: you don't need a fortune to start. Claude.ai free tier, GitHub Copilot student plan, and Cursor free tier get you surprisingly far. I run my entire operation on multiple $200/mo subscriptions with a CLI-to-API proxy — roughly 1/7 to 1/10 the cost of raw API calls.

What One Person's Gym Actually Looks Like

This is not a metaphor. This is my literal setup today:

  • 10 Claude Code agents running in parallel across 4 Macs and 6 screens.
  • 5 agent writers producing SEO content 24/7 through an automated yarn blog loop.
  • 1 person running a startup that would have needed 10–15 people two years ago.

Here's how a typical day works:

  • Morning: I run /today. An agent reviews my TODO.md, checks what's in progress, and proposes priorities.
  • Workday: I dispatch tasks to 10 coding agents, each with a bounded spec. While they work, I review PRs and make architecture decisions.
  • Background: Five agent writers run continuously — writing, editing, publishing. I review during breaks.
  • Bug fixes: GitHub Copilot handles small, bounded tasks — quick fixes, adding test coverage.
  • Every six months: Roadmap and OKR planning — irreducibly human, but even that I do with Claude, Gemini, and ChatGPT to reach a quorum.

Six Rules for Training the Army

Two years of running this system gave me six rules. All from painful experience:

  1. "You debug it yourself." The agent curls the API, searches logs, writes tests. If it can't self-verify, the spec needs work.
  2. Tokens consumed = efficiency. The only metric: how many agents can I keep busy simultaneously? Idle agents are wasted capacity.
  3. Work without supervision. The best agents don't wait for assignments. Cron jobs. Infinite task loops. See something that needs doing? Do it.
  4. Architecture = freedom to fail. Good architecture contains the blast radius. Agents can experiment but can't break what matters.
  5. Measurable, improvable, composable. If you can't measure a capability, you can't improve it. Everything should be testable and combinable.
  6. Use agents for everything. Not just code — content, video, social media, customer support, calendar. Then: build tools for agents, not just for humans.

What Makes a Gym Leader

The DORA Gap: Individual Gains, Zero Organizational Improvement

Here's the uncomfortable truth. The DORA 2025 Report — Google's annual study of software delivery — found that while 80% of individual developers report AI productivity gains, organizational delivery metrics show no improvement. AI amplifies existing quality. The Pokémon doesn't fix the strategy.

The Pokémon handles commodity work: boilerplate, tests, spec-to-code translation, docs, well-defined bugs. That stuff is getting cheap fast.

The trainer handles the hard stuff: defining what to build and why. Designing testable systems. Writing specs worth translating. Making architecture decisions under uncertainty.

The Four Skills That Won't Get Automated

  • Context engineering — designing the skill books your Pokémon learn from.
  • Evaluation design — building the inspection line. If you can't evaluate output, you can't run a gym.
  • Systems thinking — understanding where defects cascade. Pokémon do local optimization; trainers do global coherence.
  • Product taste — when anyone can build anything, the question becomes what's worth building.

Why Non-CS Backgrounds Have an Edge

People with CS backgrounds tend to be conservative at the edges of what agents can do. They know too much about what should be hard, so they self-censor. "There's no way the agent can handle distributed transactions." They never ask.

People without CS backgrounds use their imagination. They say "what if I just told it to do this?" and discover it works far more often than experts expected. They push boundaries because they don't know where the boundaries are.

That was me. I didn't know what was "supposed" to be hard, so I tried everything. That's how I built a system that people with ten years more experience hadn't attempted.

The Paradigm Shift: Three Pillars

Everything in this post points to something bigger — a fundamental shift in how software gets built.

Using AI as "fancy autocomplete" is like bolting an electric motor onto a steam engine. You get a little more power, but you're stuck with the old architecture. The real revolution is tearing the steam engine out entirely.

Pillar 1: AI-first design. Stop asking "how can AI help my workflow?" Start asking "what obstacles can I remove so AI can do the work?" This mindset separates trainers who get 2x gains from those who get 100x.

Pillar 2: Closed-loop iteration. Remove humans from the execution loop. Let AI iterate autonomously with full environment access. Extending reliable autonomy from minutes to hours is the trillion-dollar question — every improvement unlocks exponential gains in what one person can build.

Pillar 3: Harness engineering. Humans define boundaries. Decouple architecture into minimal components. Use multi-agent cross-validation. You're not writing code — you're designing the harness that keeps the system honest.

Q&A from the Lecture Hall

These are real questions from students and practitioners after the lecture.

Q: What does your actual machine setup look like? Do you need a powerful server?

Not at all. I run Claude Code locally on my Mac — it talks to the API, so the heavy compute is in Anthropic's cloud. For isolation and sandboxing (so agents can't accidentally touch my main environment), I also run Claude Code inside Cloudflare sandboxes. Local machine for interactive work; sandboxed environment for anything that needs blast-radius containment.

Q: You mentioned using Claude Code for everything. Literally everything?

Yes. Code, blog posts, social content, email drafts, data analysis, calendar planning, customer support templates. If it's digital work with describable output criteria, I try to route it through an agent first. The question I ask before doing anything manually: "Could I write a one-paragraph spec for this?" If yes — try the agent.

Q: How do you keep agents running 24/7 without babysitting them?

Infinite loop: a bash loop that calls a Claude slash command, checks the exit condition, and re-runs. Each phase of a workflow gets its own skill — /brainstorm, /research, /write, /polish, /validate, /publish. When each skill is solid and self-verifying, you can chain them. If every link in the chain is reliable, the chain runs continuously. That's how five agent writers produce content around the clock.
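A minimal sketch of that loop, assuming the headless CLI accepts slash commands as prompts (the STOP-file exit condition is illustrative):

```bash
#!/usr/bin/env bash
# Chain the phase skills; each one is self-verifying, so a failure
# stops the chain instead of publishing a bad draft.
while [ ! -f STOP ]; do
  for phase in /brainstorm /research /write /polish /validate /publish; do
    claude -p "$phase" || { echo "$phase failed; halting" >&2; exit 1; }
  done
done
```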

The key insight: you're not running one long agent session. You're running many short, composable, inspectable steps. Short steps = short failure radius.

Q: Don't long-running agents time out or go off the rails?

Yes, which is exactly why I run multiple agents in parallel. Any individual agent might take 20–40 minutes on a complex task, hit a context wall, or stall on an unexpected error. Running parallel agents means one stall doesn't block everything. I treat agents like async workers in a queue, not like synchronous function calls.

Q: How do you handle routine versus complex tasks differently?

Routine tasks get a slash command. /ci, /blog, /today, /commit — these encode the full context, tools, and acceptance criteria once. Invoking them costs zero marginal thought. The skill is the spec.

Complex or novel tasks I direct personally: I write the spec, review the plan, approve the approach, then let the agent execute. I stay in the loop for what to build and why — not how to build it.

Q: What does this actually cost per month?

Under $1,000/month for one person running 10+ agents full-time. I use subscription-based access (Claude Max, similar tiers) rather than raw API — roughly 1/7 to 1/10 the cost of pay-per-token. Compare that to one junior engineer at $8,000–$12,000/month fully loaded. The economics are not close.

Q: When do you use the API versus a chat/agent product?

API for well-defined, high-volume, programmatic tasks: translation pipelines, structured data extraction, content transformations where I control the call. Predictable, auditable, composable.

Chat/Agent (Claude.ai, Claude Code) for complex, open-ended tasks: architecture decisions, debugging novel problems, writing that requires judgment. The agent needs to navigate ambiguity, read context, use tools — that's where the orchestration layer earns its keep.

Rule of thumb: if I can write the full prompt as a template with no surprises, use the API. If the task requires the agent to figure out what to do next, use the agent product.
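For the API side of that rule, here is a sketch of a templated call, assuming the shape of Anthropic's Messages API (the model ID and fields may differ in your account):

```bash
# A fixed-template task: no agent, no tools, just one auditable call.
# Naive quoting: assumes $TEXT contains no double quotes.
TEXT="The context window has become the IDE."
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user",
      "content": "Translate to French: '"$TEXT"'"}]
  }'
```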

Q: Does running more iterations always produce better results?

No — and this trips people up. More passes don't automatically mean better output. What matters is that each pass has a clear, different objective: draft → fact-check → tone → structure → final proof. Undirected "do it again" loops tend to regress toward average. Directed, inspectable phases with specific acceptance criteria — that's what produces compounding quality.

Aim for regular effort per phase, not marathon sessions. Reliable, inspectable, repeatable beats ambitious and unpredictable.

Q: What foundation should you build agents on? Isn't everything changing too fast?

Yes, everything is changing — which is exactly the strategy. My assumption: models and agents on the market are getting stronger every quarter. Anything you build on top of a stronger foundation gets stronger for free.

This means: don't bet on workflow orchestration engines (n8n, LangChain) that abstract away from the frontier. They lag the state of the art by design. Instead, build skills and wrappers on frontier agents: Claude Code, Gemini CLI, OpenCode. When the underlying model improves, your wrapper inherits the gain.

Build thin, close to the frontier. Avoid frameworks that freeze you to yesterday's capabilities.

Q: The agent industry is incredibly competitive. How do you stand out?

Don't compete on the agent itself — compete on what only you can bring to it.

Three patterns I see working:

  1. Researchers and academics: Your advantage is reputation and intellectual credibility. Build agents that extend your research impact — tools that let you publish, synthesize, and collaborate at 10x the rate. The agent amplifies a brand that took years to build.

  2. Domain experts: You know things about your field that general models don't. A surgeon using agents to analyze patient workflows, a supply chain expert automating procurement decisions — the agent is the amplifier, and domain knowledge is the moat. Solve problems better than anyone else in your vertical.

  3. KOL products: If you have a large, loyal audience — like Cuely's GTM built on high-volume public attention — distribution is the moat. The agent product becomes a funnel for trust you've already earned. Build in public, ship to the audience that already follows you.

The commodity is the agent. The defensible asset is what you bring to it.

Your First Quest

You started as a solo grinder — just you and a blinking cursor. You got Rare Candy and things got faster, but the ceiling was still you. You caught your first Pokémon, learned context engineering, built an inspection line, assembled a party — and watched it wipe spectacularly.

Then you rebuilt. Smarter. With constraints. With hard-won principles.

The Pokémon will keep getting stronger — new models, new protocols, new frameworks every quarter. But the trainer who designs the system, who decides what to build, how to inspect it, and when to ship it — that person doesn't get automated away.

That person can be you.

Tonight: pick one project. Write a one-page spec. Hand it to Claude Code. Review what comes back.

You just caught your first Pokémon.

The Context Stuffing Antipattern: Why More Context Makes LLMs Worse

Tian Pan · Software Engineer · 9 min read

When 1M-token context windows shipped, many teams took it as permission to stop thinking about context design. The reasoning was intuitive: if the model can see everything, just give it everything. Dump the document. Pass the full conversation history. Forward every tool output to the next agent call. Let the model sort it out.

This is the context stuffing antipattern, and it produces a characteristic failure mode: systems that work fine in early demos, then hit a reliability ceiling in production that no amount of prompt tweaking seems to fix. Accuracy degrades on questions that should be straightforward. Answers become hedged and non-committal. Agents start hallucinating joins between documents that aren't related. The model "saw" all the right information — it just couldn't find it.

Six Context Engineering Techniques That Make Manus Work in Production

Tian Pan · Software Engineer · 11 min read

The Manus team rebuilt their agent framework four times in less than a year. Not because of model changes — the underlying LLMs improved steadily. They rebuilt because they kept discovering better ways to shape what goes into the context window.

They called this process "Stochastic Graduate Descent": manual architecture searching, prompt fiddling, and empirical guesswork. Honest language for what building production agents actually looks like. After millions of real user sessions, they've settled on six concrete techniques that determine whether a long-horizon agent succeeds or spirals into incoherence.

The unifying insight is simple to state and hard to internalize: "Context engineering is the delicate art and science of filling the context window with just the right information for the next step." A typical Manus task runs ~50 tool calls with a 100:1 input-to-output token ratio. At that scale, what you put in the context — and how you put it there — determines everything.

The Action Space Problem: Why Giving Your AI Agent More Tools Makes It Worse

Tian Pan · Software Engineer · 9 min read

There's a counterintuitive failure mode that most teams encounter when scaling AI agents: the more capable you make the agent's toolset, the worse it performs. You add tools to handle more cases. Accuracy drops. You add better tools. It gets slower and starts picking the wrong ones. You add orchestration to manage the tool selection. Now you've rebuilt complexity on top of the original complexity, and the thing barely works.

The instinct to add is wrong. The performance gains in production agents come from removing things.