
The Design of Everyday Things by Don Norman

· 36 min read

Everyday life is full of tiny frictions: a door that begs to be pulled but needs a push, a microwave that won’t start for reasons it won’t explain, a settings screen that hides the one option you need. The Design of Everyday Things shows why those frictions happen and how to remove them. Don Norman blends psychology and design to explain how people perceive, decide, act, and learn—then turns those insights into practical rules for making things that feel obvious and forgiving.

The core ideas are straightforward. Make possible actions easy to discover. Use clear signals to show where and how to act. Align controls with their effects so choices feel natural. Provide timely feedback so results are never a mystery. Share the mental load between memory and the world, using labels, shapes, and layouts that guide without instruction. Expect mistakes, distinguish slips from faulty plans, and build in constraints, warnings, and easy undo so errors are rare and recoverable. Embrace iteration: observe real users, prototype quickly, test, and refine. And remember that products live in markets; success depends on meeting human needs while surviving timelines, budgets, and the lure of feature creep.

Chapter 1: The Psychopathology of Everyday Things

Everyday objects can often leave us feeling inept and frustrated, from doors that won’t open the way we expect to light switches we can’t figure out. Don Norman argues that when people struggle with simple things like doors or stoves, the fault lies not in the user but in the design. Good design makes the possible actions and how to perform them obvious; Norman calls these crucial qualities discoverability (can you figure out what actions are possible?) and understanding (do you know how to execute those actions?). For example, a closed door should silently communicate whether you should push, pull, or slide it. If you see a flat metal plate on a door, you naturally push it; if there’s a handle, you instinctively pull. A sign reading “Push” or “Pull” is actually a sign of bad design – the door’s design itself should have been sufficient to signal what to do.

An illustration of a common "Norman door" problem: on the left, a flat plate clearly indicates the door should be pushed, and the user pushes it with no trouble. On the right, the door has a misleading pull handle on the side that actually needs pushing – the confused user pulls in vain, even though a small “Push” label has been added. Good design would eliminate the need for such labels by using appropriate cues (signifiers) so the correct action is naturally perceived.

Norman introduces a set of fundamental design principles – drawn from psychology – that help make things discoverable and understandable. These principles, when applied, act as a form of communication between the object and the user:

  • Affordances: The possible actions an object allows. For instance, a chair affords sitting (it invites that action). A door affords opening/closing. Users perceive affordances as relationships – e.g. a knob suggests turning because it affords grip and rotation.
  • Signifiers: The clues or signals that indicate where to perform an action. They can be deliberate markings, labels, or visual cues. A signifier tells you how to use an affordance. For example, a vertical handle signifies “pull me,” whereas a flat plate signifies “push here.” Good signifiers eliminate guesswork.
  • Constraints: Limitations that prevent misuse by reducing what can be done. For example, a USB plug can only fit in one orientation – a physical constraint. Constraints can be physical (the shape of pieces only allows one assembly), cultural (conventions like red meaning “stop”), semantic (the situation’s meaning suggests the right action, e.g. a windshield belongs in front of the driver), or logical (pure reasoning makes the choice clear, e.g. two switches for two side-by-side lamps should logically match the lamps’ positions).
  • Mapping: The natural relationship between controls and their effects. Good mapping means the layout of controls matches our mental model of what they affect. Imagine a stove where four burner knobs are arranged in the same square pattern as the burners – you immediately know which knob controls which burner. In contrast, bad mapping (like a row of switches for a random arrangement of lights) forces trial and error.
  • Feedback: Immediate indication of what action has been done and what result occurred. When you press a button and a light illuminates or a click sounds, that feedback reassures you the device received the command. Timely, informative feedback (not too little, not too much) is essential so users aren’t left wondering or repeating actions.
  • Conceptual Model: The user’s mental model of how something works. A good design gives clues that allow people to form a correct conceptual model of the system. For instance, a simple diagram on a thermostat showing the interior and exterior temperature can help users understand the heating system’s behavior. When the design’s “system image” (all the information the device presents to the user) aligns with the user’s mental model, people feel in control.
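
The mapping principle above lends itself to a tiny illustration. This is a hypothetical sketch (the stove layout and names are invented, not from the book) contrasting a natural mapping, where a knob's position alone identifies its burner, with an arbitrary linear layout that forces trial and error:

```python
# Hypothetical sketch: natural vs. arbitrary control-to-effect mapping.

# Natural mapping: knobs arranged in the same square pattern as the
# burners, so spatial position alone tells you which knob does what.
natural = {
    "front-left knob":  "front-left burner",
    "front-right knob": "front-right burner",
    "back-left knob":   "back-left burner",
    "back-right knob":  "back-right burner",
}

# Poor mapping: a linear row of knobs for a square of burners --
# nothing in the layout says which knob pairs with which burner.
linear = dict(zip(
    ["knob 1", "knob 2", "knob 3", "knob 4"],
    ["back-left burner", "front-right burner",
     "back-right burner", "front-left burner"],
))

# With the natural mapping, the knob's position IS the answer;
# with the linear one, the user must memorize or experiment.
assert natural["front-left knob"] == "front-left burner"
```

The point of the sketch is that a good mapping removes a lookup step entirely: the relationship is carried by the layout, not by memory or labels.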

Norman emphasizes that these elements work together to make a product intuitive. When affordances and signifiers are used well, “you shouldn’t need a sign on a door,” because the design communicates what to do. The paradox of technology is that adding features gives us more power and functionality, but also makes devices more complex and confusing. A modern smartphone can do thousands of things, yet that power can overwhelm users if not designed with human needs in mind. Norman concludes this chapter by pointing out the design challenge: designers must reconcile the competing demands of adding new capabilities while keeping things simple and understandable. The solution is human-centered design: designing for the way people actually are, not how we wish they were. By observing real users and respecting human psychology, designers can create everyday things that feel simple despite their inherent complexity.

Chapter 2: The Psychology of Everyday Actions

How do people actually go about using things, and where do they get tripped up? In this chapter Norman delves into the human mind – how we form goals, take action, and interpret results – to explain what designers should do to make execution and evaluation of actions easy. When you use any object or interface to achieve a goal, there are two gaps you must bridge. First, the gulf of execution: the gap between what you want to do and what the system allows or how it requires you to do it. Second, the gulf of evaluation: the gap between the action’s outcome and your ability to understand what happened. A well-designed product has a small gulf of execution (it’s clear how to do what you intend) and a small gulf of evaluation (it immediately shows you what happened, in a way you can understand). For example, suppose you want to print a document. If the print button is hard to find or the steps are convoluted, that’s a wide gulf of execution. If, after clicking print, nothing indicates whether it’s printing or where the file went, that’s a gulf of evaluation problem. Designers must minimize both gulfs – making options to act visible and logical, and providing prompt feedback to inform the user if their goal was achieved.

Norman describes seven stages of action that people (consciously or not) go through whenever they use something to accomplish a task. In simplified terms, we start by forming a goal (what we want to do), then plan and execute a set of actions, and afterwards we observe what happened and compare it to our goal. If anything in this cycle breaks down – say, you’re not sure what to do, or you can’t tell what the device did – the user will feel frustrated. For instance, Norman recounts a scenario where a group of intelligent people struggled to thread a film projector, growing increasingly confused and calling for help. The projector’s design failed to communicate its operation, creating a huge gulf of execution (unclear how to load the film) and gulf of evaluation (unclear if it was done correctly). Such examples show that even experts can be flummoxed by poor design.

Another key insight is that a lot of our interaction with everyday things happens at a subconscious level. We don’t consciously calculate every step when flipping a light switch or driving a car; through learning and repetition, many actions become automatic. Norman explains that human thought operates on three levels of processing: the visceral level, which is immediate and instinctive (quick gut reactions); the behavioral level, which governs routine actions and learned patterns (habitual responses, like typing on a keyboard without thinking of each letter); and the reflective level, which is conscious, deliberate thought (where we reflect, reason, and figure things out). Good design takes all three into account. For example, a visually pleasing layout might trigger a positive visceral response (it looks attractive or trustworthy), while a logical control scheme satisfies the behavioral level (it “just feels natural” as you use it), and meaningful feedback or functionality appeals to the reflective level (you appreciate what it can do and would recommend it to others). Norman stresses that enjoyment and success with a product require design harmony at all levels – it must feel right, work right, and make sense upon reflection.

Crucially, Norman points out that when things go wrong, people tend to blame themselves, not the design. If you can’t get a faucet to work or repeatedly set the wrong time on a cooker, you might think “I’m just stupid” or “I must be doing something wrong,” when in fact the interface is poorly designed. This phenomenon is sometimes called learned helplessness – after enough failures, users assume the problem is their own fault. Norman urges designers to realize that human error is usually a result of bad design, not human stupidity. Rather than expecting users to adapt to confusing interfaces, designs should accommodate the way people actually think and behave. In short, the task of the designer is to bridge the gulfs and make sure that at each stage of action, the user knows what to do and can tell what happened. By aligning with natural human psychology – our tendencies to form stories about what we see, our limited attention spans, and our mix of automatic and deliberate thinking – design can empower users instead of making them feel inept.

Chapter 3: Knowledge in the Head and in the World

This chapter explores where the information needed to use something resides: do we carry it all in our heads, or can it be embedded in the world around us? Norman explains that knowledge exists partly in our minds and partly in the environment, and a good design finds the right balance. For instance, consider how we handle everyday tasks like dialing a phone number. Decades ago, people memorized many phone numbers (knowledge in the head). Today, smartphones and contact lists do the remembering for us – the number is stored externally, and we just tap a name (knowledge in the world). Because the phone provides the needed info, we don’t have to learn or recall it. In general, whenever knowledge needed to perform a task is readily available in the world, we can rely less on memory and avoid burdening our brains.

Norman notes that precise, error-free behavior can emerge from imprecise knowledge because of helpful cues and constraints around us. In fact, you don’t need to memorize every detail if the world structure guides you. He gives four reasons why we can do the right thing without perfect knowledge in our head: (1) Both internal and external knowledge work together – we use a bit of memory and a bit of observation. (2) We usually don’t need absolute precision – often it’s enough to distinguish the correct option from the others (for example, recognizing your car key among others by shape without recalling the exact pattern). (3) The world provides natural constraints – the physical reality limits what’s possible, so you can’t easily do the wrong thing (a plug won’t fit into the wrong socket, so you don’t need to remember which way it goes). (4) We have cultural conventions that live in our head, which further narrow down choices (like knowing red means stop, green means go, or that an arrow pointing upward likely means “up” or “open”). These factors mean that not all the knowledge for precise action has to be stored internally – some of it is distributed between head and world.

He also distinguishes between two kinds of knowledge: declarative knowledge (knowledge of facts and rules – things you can write down or verbalize) and procedural knowledge (knowledge of how to do things – skills and actions, often subconscious). For example, knowing the route to drive to work is procedural (you might find yourself “just doing it” without reciting the directions), whereas knowing the street names and distances is declarative. Procedural knowledge is learned through practice and usually hard to fully explain in words; it resides in your head as muscle memory or habit. Declarative knowledge can be looked up or written (think of an instruction manual or a checklist – that’s knowledge in the world that supplements your memory). Good design can offload declarative knowledge into the world so we don’t have to memorize it, and can allow users to build procedural knowledge through consistent, understandable operations.

Norman emphasizes that knowledge in the world includes all the clues and information that a device or interface presents to us. Signifiers, physical constraints, and natural mappings are examples of knowledge in the world – they provide reminders or hints at the right time. A simple example is a road sign: you don’t have to remember the speed limit if there’s a sign posted; the environment is telling you. Or consider a well-designed car dashboard: each control is labeled or shaped uniquely (knowledge in the world), so you don’t rely purely on memory to find the headlights switch. Meanwhile, knowledge in the head includes things like remembering that on a computer Ctrl+Z is “undo” (once you’ve learned it, you can use it without any external prompt). There’s always a trade-off: if we force users to memorize too much, they’ll make errors or avoid using features; if we put everything in the world (like overusing on-screen instructions or labels), it can clutter and complicate the design. The best approach is to simplify tasks by building knowledge into the interface – for instance, a good stovetop uses design (such as burner knobs aligned with burners) so you know which knob to turn without consulting a diagram. By cleverly combining knowledge in the head and world, designers let people behave precisely and confidently with only minimal memory load. The takeaway: never make people remember what the world (or the device) can show or remind them, but also design the world so that what it shows is easily understood and fits with what people already carry in their minds.

Chapter 4: Knowing What to Do: Constraints, Discoverability, and Feedback

Even when you encounter a brand-new gadget or an unfamiliar situation, a well-designed product should guide you toward the correct usage. In Chapter 4, Norman focuses on how constraints and other design features provide built-in guidance for users. When you’re not sure what to do with something, constraints narrow the options and prevent many potential errors. There are four kinds of constraints designers can exploit:

  • Physical constraints: These are limitations imposed by the object’s physical reality – essentially, what’s physically possible. They immediately rule out wrong actions. A classic example is a puzzle piece or a plug that can only fit in one orientation. If you’ve ever tried to insert a memory card, you’ll notice it only goes in one way; a groove or asymmetric shape acts as a physical constraint. Likewise, a round peg won’t go into a square hole. Physical constraints are the most obvious and hard to circumvent – they directly stop you from doing the wrong thing.
  • Cultural constraints: These rely on learned conventions and social norms. We learn from our culture that certain symbols or behaviors are acceptable in certain contexts. For example, in most cultures a red traffic light means “stop” and green means “go” – that’s a cultural constraint guiding driver behavior. On a computer interface, a trash can icon culturally suggests “delete” because we’re used to that metaphor. Cultural constraints are not physical laws, but breaking them will confuse or upset users (imagine a video game where pressing the “Save” icon actually deleted your progress – it violates what we culturally expect the icon to do).
  • Semantic constraints: These come from the meaning of a situation. Even without rules or physical limits, the purpose of objects and our understanding of a scene suggest the right action. Norman gives the example of assembling a motorcycle: the rider’s windshield obviously must go in front of the rider (to block wind), not behind them. The semantics (the purpose and context) constrain how you put it together. In everyday life, semantic constraints mean we use common sense reasoning – if you see a teapot, you know the spout should point away from you when you pour, or you’ll spill hot tea on yourself. The meaning of the design’s elements guides proper use.
  • Logical constraints: These are limitations derived from pure reasoning – often using process of elimination or consistency. If there are four knobs and four burners, and you’ve figured out which three knobs correspond to three of the burners, logically the remaining knob must control the last burner. Or if a device has a panel that opens with two screws and you have two screws of different lengths, a logical constraint might be that the longer screw goes where the material is thicker. Logical constraints let users reason out the correct action when other cues are absent. For example, many remote controls have a directional pad with up/down/left/right buttons; it’s logical that pressing “up” moves the selection up on the TV menu. If it didn’t, you’d sense something was wrong because it violates logic.
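
The strongest of these, the physical constraint, can be caricatured in a few lines of code. This is a minimal hypothetical sketch (the `KeyedSocket` class is invented for illustration) of a keyed connector that only fits one way, making the wrong action impossible rather than merely discouraged:

```python
# Hypothetical sketch of a physical constraint: a keyed socket that
# accepts a plug in only one orientation.

class KeyedSocket:
    """Accepts a plug only in the single orientation its key allows."""

    def __init__(self, key="notch-up"):
        self.key = key

    def insert(self, plug_orientation):
        if plug_orientation != self.key:
            # The physical shape blocks the action entirely.
            raise ValueError("plug does not fit in this orientation")
        return "connected"

socket = KeyedSocket()
assert socket.insert("notch-up") == "connected"
try:
    socket.insert("notch-down")
except ValueError:
    pass  # the constraint stopped the error before it could happen
```

Unlike a warning label, which the user can ignore, the constraint here is enforced by the "shape" of the interface itself: the wrong action never completes.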

By thoughtfully using these constraints in design, one can often make the set of possible actions small enough that a user’s natural intuition or a bit of deduction will reveal the correct choice, even if they’ve never seen the device before. Norman shows how affordances and signifiers work hand-in-hand with constraints to enhance discoverability. Consider the common problem of a row of identical light switches by a door: You walk into a room with three switches; which one is for the lights, which for the fan, which for the outdoor light? Without labels or logical arrangement, you’ll resort to trial and error. A better design might use a logical constraint – e.g. placing the switches in the same order as the lights they control (left switch for the leftmost light, etc.) – so the mapping is apparent. Or use signifiers, like different toggle shapes or an icon on the switch, to signify function. Another example is the infamous “Norman door” scenario from Chapter 1: a door that affords pushing or pulling should have signifiers (like a plate or handle) that constrain your action to the correct one. A well-designed door won’t allow the wrong action – you wouldn’t put a pull handle on the push side of a door, because that invites the wrong behavior. In short, constraints gently force the desired behavior by making wrong actions impossible or unlikely.

Norman also discusses forcing functions, a special class of constraints that force a necessary action before allowing progress. A common example is the car that won’t start unless you press the brake, or a microwave oven that won’t run if the door is open (the door acts as an interlock, cutting power unless shut). Forcing functions are powerful because they can prevent serious errors (you can’t drive off without your seatbelt if the car loudly reminds or refuses to shift gears until you buckle up). However, they must be used carefully – if too intrusive, they annoy users; if too subtle, they fail to stop the error.
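
The microwave interlock mentioned above can be sketched directly. This is a hypothetical toy model (the `Microwave` class and its method names are invented for illustration) showing the two halves of a forcing function: the device refuses to start until the prerequisite is met, and undoing the prerequisite halts the dangerous state:

```python
# Hypothetical sketch of a forcing function: an interlock that cuts
# power unless the door is shut, so the unsafe action cannot occur.

class Microwave:
    def __init__(self):
        self.door_closed = False
        self.running = False

    def close_door(self):
        self.door_closed = True

    def open_door(self):
        self.door_closed = False
        self.running = False  # interlock: opening the door stops cooking

    def start(self):
        if not self.door_closed:
            # Forcing function: progress is blocked until the
            # required action (closing the door) is performed.
            return "won't start: close the door first"
        self.running = True
        return "cooking"

oven = Microwave()
assert oven.start() == "won't start: close the door first"
oven.close_door()
assert oven.start() == "cooking"
oven.open_door()
assert oven.running is False  # safety restored the moment the door opens
```

Note that the design decision lives in `start` and `open_door`, not in any instruction to the user: the device enforces the safe sequence itself.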

Finally, feedback makes another important appearance here. Norman reiterates that even after you figure out what to do (thanks to affordances, signifiers, and constraints), you need to know that you did it correctly and what happened. Good design provides clear feedback for every action. Imagine pressing an elevator button: if it silently does nothing, you might press it repeatedly. But if it lights up and you hear a chime, you immediately know the call went through – that’s feedback confirming your action. In complex systems, feedback might include progress bars, success messages, or even subtle cues like the sound of a latch clicking closed. This chapter drives home that discoverability (knowing what to do) and feedback (knowing what happened) form a continuous loop. When both are well-designed, users rarely get stuck, and even if they do momentarily, they can course-correct quickly. In essence, Chapter 4 teaches that if designers leverage constraints and signals effectively, they can make new or complex tasks feel intuitive – the user finds the right action, performs it, and gets immediate confirmation, all without cracking a manual.
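
The elevator-button example can be reduced to a few lines. This is a hypothetical sketch (the `CallButton` class is invented for illustration) of feedback that both confirms the press and renders repeated presses harmless:

```python
# Hypothetical sketch of feedback: a call button that immediately
# confirms the press, so the user never wonders whether it registered.

class CallButton:
    def __init__(self):
        self.lit = False

    def press(self):
        if self.lit:
            return "already called"  # repeated presses change nothing
        self.lit = True
        return "lit + chime"         # immediate, perceivable feedback

button = CallButton()
assert button.press() == "lit + chime"     # first press: clear confirmation
assert button.press() == "already called"  # later presses: no confusion
```

The lit state is knowledge in the world: it answers "did my action work?" without requiring the user to remember or guess.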

Chapter 5: Human Error? No, Bad Design

People make mistakes – that’s inevitable. But Norman’s provocative message in this chapter is that “human error” is usually a misnomer; it’s more often “design error.” When a person using a device does something wrong or harmful, we shouldn’t rush to blame the person’s incompetence. Instead, we should ask: How did the design allow, or even encourage, that error? If a pilot switches off the wrong engine in an emergency, or a homeowner sets the house alarm incorrectly, those situations often indicate that controls were confusing, information was misleading, or the system didn’t properly warn against the slip. Norman flatly states: learn to see any human mistake as a symptom of poor design, not human stupidity. This shifts the responsibility onto designers to anticipate errors and build systems that are resilient to them.

One way to do this is to study errors systematically. Norman describes techniques like root cause analysis – asking “why” repeatedly to drill down to the fundamental cause of a failure. For example, if a hospital patient receives the wrong medication, asking “Why?” might lead you from “the nurse administered the wrong drug” (surface cause) to “the two drugs had similar names or packaging” to “the labeling design was confusing” – ultimately revealing a design fix (change the labels or storage system to make mix-ups impossible). Norman mentions the “Five Whys” method of root cause analysis: you typically have to ask why at least five times to get past blaming human error and uncover how the system or design set the stage for that error. The lesson is that we often stop too soon in our analysis – we blame the user performing the action, rather than the latent design flaws that permitted the mistake.
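
The medication example can be laid out as a chain. This is a hypothetical "Five Whys" chain, invented here for illustration (it is not a transcript from the book), showing how each successive answer moves blame off the person and onto the system:

```python
# Hypothetical "Five Whys" chain: keep asking why until the answer
# names a fixable design cause rather than a person.

whys = [
    "Why was the wrong drug given?   The nurse picked the wrong vial.",
    "Why the wrong vial?             The two vials look nearly identical.",
    "Why do they look identical?     The labels share color and typeface.",
    "Why are the labels similar?     One label template is used for all drugs.",
    "Why one template?               No design review checks for confusability.",
]

for i, why in enumerate(whys, start=1):
    print(f"Why #{i}: {why}")

# Stopping at Why #1 blames the nurse; Why #3 onward points at the
# labeling system -- the thing a designer can actually fix.
```

The mechanical point is simply that the analysis must not stop at the first, person-shaped answer; the actionable cause usually appears several "whys" deeper.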

Norman then breaks down different types of errors. He classifies them broadly into slips and mistakes. A slip is when you have the right goal and intention, but you accidentally do something wrong while executing it. For instance, you intend to hit the Save button but click Delete by accident – that’s a slip. Slips often happen when we’re on “autopilot,” and something about the interface lets the wrong action happen too easily (e.g. two buttons too close together, or an “undo” shortcut that’s next to a “delete all” shortcut). A mistake, on the other hand, is when your goal or plan itself is wrong – you thought you were doing the right thing, but your understanding was flawed. For example, you program your thermostat incorrectly because you misunderstood how its schedule works. Mistakes usually result from a poor mental model, unclear instructions, or complex systems that lead the user down the wrong path. Novice users tend to make more mistakes (they may not know what they’re supposed to do), whereas expert users more commonly make slips (they know what to do but goof in execution). Both types of errors matter, and design can help prevent both: to reduce slips, make the interface forgiving and unambiguous; to reduce mistakes, make it easy for users to understand what they should be doing (good signifiers, clear conceptual model).

After dissecting why errors happen, Norman offers several design strategies to mitigate errors:

  • Add constraints to prevent errors: As we saw in Chapter 4, clever constraints can physically or logically block incorrect actions. For example, designing connectors that cannot be plugged in upside-down, or graying out menu options that don’t apply in a given context so the user can’t select them.
  • Use sensibility checks: The system should double-check if an action makes sense before executing it. If a user attempts something obviously abnormal – like deleting an important file or scheduling a meeting for February 30th – the software can catch it and ask “Are you sure?” or prevent it outright. These are like a sanity filter to catch simple slips (e.g. a phone warning you if you dial an incomplete number).
  • Allow “undo” and reversibility: Perhaps the single most user-friendly error solution is the ability to easily undo an action. If you delete a document by mistake, a well-designed system would let you retrieve it from a recycle bin. Norman stresses making actions reversible whenever possible, which turns many serious errors into minor detours.
  • Require confirmations for destructive actions: If an action is irreversible, the design should ask for confirmation (or even a double-confirmation) before proceeding. For instance, when formatting a hard drive or sending an email to a large group, a confirmation dialog (“Are you sure? Y/N”) gives the user a second chance to reconsider. However, Norman also cautions that too many confirmations can annoy users; they should be saved for truly critical actions.
  • Make errors easy to detect and diagnose: If a mistake does happen, the system should make it obvious what went wrong – not bury the error or use cryptic codes. Good designs provide clear error messages or visual cues to highlight the issue, guiding the user to fix it. For example, if you leave a form field blank, a helpful interface will highlight it and maybe even say “Please enter your phone number,” rather than just throwing a generic error.
  • Help users correct errors gracefully: Rather than treating an error as a dead-end (“Error – you did it wrong”), treat it as part of the interaction. For instance, if someone misspells a search query, a smart design will suggest “Did you mean ...?” instead of giving zero results. Norman suggests thinking of the user’s action as an approximation of what they want, which the system can often interpret or adjust, guiding the person to success. In other words, design with empathy: assume the user wants to do the right thing, and help them get there.
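
The "undo" strategy in the list above is concrete enough to sketch. This is a minimal hypothetical model (the `FileStore` class is invented for illustration, loosely echoing the recycle-bin idea): destructive actions are recorded rather than executed irreversibly, so a slip becomes a detour instead of a disaster:

```python
# Hypothetical sketch of reversibility: deletions go to a trash list
# instead of destroying data, so any slip can be undone.

class FileStore:
    def __init__(self):
        self.files = {"report.txt": "draft"}
        self.trash = []  # undo history: (name, contents) pairs

    def delete(self, name):
        # Move to trash instead of destroying -- the "recycle bin" idea.
        self.trash.append((name, self.files.pop(name)))

    def undo(self):
        if self.trash:
            name, contents = self.trash.pop()
            self.files[name] = contents

store = FileStore()
store.delete("report.txt")        # a slip...
assert "report.txt" not in store.files
store.undo()                      # ...easily reversed
assert store.files["report.txt"] == "draft"
```

The design choice worth noting: `delete` never actually discards anything, which is what makes `undo` trivially cheap to offer. Confirmations guard against errors; undo forgives them.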

Norman illustrates these principles with real-world cases. He references safety systems like Toyota’s practices (where any worker on an assembly line can pull a cord to stop the process if they spot a problem, encouraging error reporting and quick fixes – a concept called Jidoka in Lean manufacturing). He also discusses how industries like aviation investigate incidents not to shame pilots but to improve cockpit design and checklists. A striking concept introduced is the “Swiss cheese model” of error: many small flaws have to align for a catastrophe to happen, so adding layers of defense (like multiple constraints and feedback mechanisms) makes it far less likely for all holes to line up. The overarching message of Chapter 5 is empowering: people will always err, so designers must build systems that anticipate errors, prevent the trivial ones, and cushion the impact of the rest. By doing so, we shift the narrative from “user error” to “design learning,” continuously refining products to be safer and more foolproof.

Chapter 6: Design Thinking

In previous chapters, Norman identified problems and principles for designing everyday things. In Chapter 6, he takes a step back and asks: how do we actually come up with good designs in the first place? The answer he provides is Design Thinking – a human-centered, problem-solving approach that designers should use to address user needs. The first and most important step of design thinking is to make sure you’re tackling the right problem. As Norman wryly observes, it’s all too common to invest huge effort solving the wrong problem – to design a brilliant solution to a problem that nobody actually needed solved. Therefore, designers must spend time understanding the real issues users face. “What is the underlying problem here?” is the key question. Norman urges an emphasis on problem framing: before jumping to solutions, you have to figure out if the question you’re asking is the correct one. Sometimes this involves stepping back, observing users in their natural environment, and asking those “Five Whys” (from Chapter 5) to uncover root needs.

The Human-Centered Design process (HCD) that Norman describes is inherently iterative. It typically includes stages such as Observation (researching how people actually behave and what they struggle with), Ideation (brainstorming as many potential solutions or approaches as possible), Prototyping (building quick and cheap models of your ideas), and Testing (trying those prototypes with real users to see what works and what doesn’t). Importantly, this is not a one-shot sequence but a cycle: after testing, you often go back to observe more or refine your concept, then prototype again, and so on. Norman highlights that this iterative loop is crucial for refining a design so that it truly meets human needs and is usable. It’s rare to get a complex design perfect on the first try – feedback and refinement are part of the journey. He also contrasts activity-centered design with strictly task-centered design. Instead of focusing only on single tasks in isolation, designers should consider broader user activities and goals, which gives more context and can inspire better solutions (for example, designing a kitchen not just around the “task” of chopping vegetables, but around the whole activity of preparing a meal and the flow between tasks).

Norman introduces the idea of the Double Diamond model of design (a concept named for the shape of the process diagram): the first “diamond” is about diverging to discover the real problem (exploring, researching widely, and then converging by defining the specific challenge), and the second “diamond” is about diverging to develop solutions (brainstorming many ideas) and then converging again by refining and choosing the best solution. This visual model reinforces that design thinking isn’t a straight line – it’s an expansion and contraction of ideas and understanding.

However, after laying out this ideal process, Norman gives a reality check: “What I just told you? It doesn’t really work that way.” In the messy real world of business and deadlines, designers often cannot perfectly follow the textbook human-centered process. Projects have fixed ship dates, budgets, and legacy constraints. He cites Norman’s Law (half in jest but true in practice): the day the product development process starts, the team already feels behind schedule and over its budget. In other words, real design happens under pressure. Teams may have to skip steps, make compromises, or freeze the design before it’s fully refined, simply due to external constraints. Recognizing this, Norman discusses the design challenge of working within multiple constraints (time, cost, technology limits, business goals). A designer must often balance conflicting requirements: maybe users want a device with lots of features, but more features make it harder to use (feature creep vs. simplicity). Perhaps adding accessibility for one group makes the product less sleek for another market. Or a brilliant idea might be too expensive to produce at scale.

One poignant example Norman gives is designing for special populations, such as the elderly or disabled. There can be a “stigma problem” where products made specifically for, say, people with mobility issues end up looking unattractive or stigmatizing, so those who need them feel embarrassed to use them. The design challenge is to meet these special needs in a way that doesn’t alienate or single out users – ideally, making products inclusive so they work for everyone (universal design).

Norman also has a counterintuitive point: complexity is not the enemy – confusion is. We often hear that things should be “simple,” but in reality many tasks are complex. A smartphone, for example, is complex because it does so much. Norman argues that complexity can be good if it is well-organized and matches the user’s goals. We shouldn’t dumb things down to the point of uselessness; instead, we should strive to design complex systems that feel straightforward. The complexity should be behind the scenes, with the interface guiding users through it. What frustrates people is not that a system can do a lot, but that they can’t figure out how to make it do what they want (that’s confusion). A well-designed car has hundreds of functions and indicators (quite complex), yet a good dashboard and user manual can make operating the car second nature.

Another important aspect covered is standardization and consistency. When users interact with many devices and platforms, consistent design conventions greatly help. Imagine if every car had the gas pedal and brake swapped – driving would be dangerous chaos. Because car controls are standardized (to a large extent globally), once you learn to drive one car, you can drive others. Norman encourages the use of standards, templates, and common idioms in design. However, he also notes the challenges: sometimes standards take so long to emerge that technology moves past them (for instance, attempts to standardize certain smartphone hardware buttons became irrelevant once touchscreens took over). And occasionally a proposed standard simply never catches on (the chapter mentions the curious case of digital clocks – showing the time in digits hasn't fully replaced analog clocks, partly because people still find the spatial layout of an analog dial useful and meaningful). The key is to know when to adhere to familiar conventions and when to innovate; breaking a well-established mental model can hurt usability, but introducing a better standard can advance an entire industry.

In sum, Chapter 6 is about the mindset and process of design. It urges designers to be problem-finders as much as problem-solvers, to keep real people at the center through iterative development, and yet to remain pragmatic in the face of real-world constraints. Norman essentially says: design thinking is a way to creatively and systematically solve the right problems, but don’t be dogmatic about process – stay flexible and aware of business realities. By combining empathy for users with an understanding of practical constraints, designers can navigate the chaos and still produce brilliant, user-centered solutions.

Chapter 7: Design in the World of Business

In the final chapter, Norman shifts focus to the broader context in which design happens: the business and market environment. No matter how great a design is conceptually, it ultimately has to succeed in the real world of competition, budgets, and evolving technology. Norman begins by examining the pressures that companies and design teams face, and how those pressures can shape (or misshape) the products that get made.

One issue he highlights is competitive forces leading to “featuritis” – also known as feature creep. When multiple companies compete, there’s a temptation to one-up each other by adding more features to their products: if Brand A’s toaster has three modes, Brand B adds a fourth mode plus a clock; then Brand C adds Wi-Fi, and so on. Existing customers also often ask for more capabilities. Over time, a once-simple product can become overcomplicated and less usable because it has accreted an overload of features. Norman notes that this usually happens for understandable reasons (companies chase new selling points, or fear missing out on a checkbox that competitors have, and loyal users always want a little more). However, the result can be products that try to do everything and end up doing nothing well. The lesson for businesses is focus: adding features should not come at the expense of core usability or a clear identity. Sometimes it’s better to say no to feature requests and refine what truly matters. Norman suggests that success can breed failure if a product loses its original elegance by bloating out – designers and managers must guard against this trap by remembering that more is not always better. In practical terms, that might mean choosing to excel at a few key features rather than having dozens of mediocre ones.

Next, Norman discusses how technological change forces design change. New technologies can disrupt markets quickly, and companies feel pressure to jump on trends. But innovating without considering users is risky. A theme here is incremental vs. radical innovation. Incremental innovation is the step-by-step improvement of products – it’s less glamorous, but it’s the bread and butter of most progress (think of each year’s smartphone model: a slightly better camera, a slightly faster processor). Radical innovation, in contrast, is the big leap – a novel product or paradigm that may change everything (for example, the first iPhone’s touchscreen-only design was a radical departure from the physical-key phones of its time). Radical innovations are rare and carry high risk (many fail or arrive before the market is ready), but when they succeed, they can redefine industries. Norman gives the example of Apple’s iPhone as a successful radical innovation – it went against the prevailing logic (no physical keyboard in an era when BlackBerry’s keyboard was seen as essential) and yet proved hugely popular. The point for designers and businesses is that being first or being radical isn’t enough; you must ensure the new innovation actually fits human needs and contexts. Some radical ideas have flopped because they didn’t consider real user behavior, while some incremental tweaks have triumphed by elegantly meeting users’ day-to-day needs.

Norman also asks: How long does it take to introduce a new product? Sometimes much longer than we expect. There’s a look at historical cases like the videophone – an idea from the late 19th century that took well over a century to truly materialize in everyday life (and even then, video calling only became ubiquitous when smartphones and the internet converged to make it effortless). Another case is the QWERTY keyboard layout, which was introduced in the 19th century and became so standard that even arguably better layouts couldn’t displace it. These stories illustrate a sobering fact: good design alone doesn’t guarantee adoption. Timing, cost, social acceptance, and network effects (everyone else is using QWERTY, so you will too) all influence whether a design succeeds in the market. Designers working in business need to understand these dynamics – sometimes the best design loses to a “good enough” design that achieved widespread adoption first or fits more easily into the current ecosystem.

One section titled “The Design of Everyday Things: 1988–2038” has Norman reflecting on the future by looking 50 years ahead (from the original publication date). He muses on how technology might change in the decades to come and asks which principles will still hold. A reassuring thought he offers is that while technologies change rapidly, human psychology and our fundamental needs change much more slowly. In other words, the core lessons of the book – about making things understandable, usable, and human-centered – will likely remain relevant even as we move towards smart homes, AI assistants, or whatever 2038 holds. The specific gadgets may be different, but people will still want tools that don’t frustrate them and experiences that are pleasant and meaningful.

Norman doesn’t shy away from the ethical side of design either. He talks about the moral obligations of design – the idea that designers and companies have a responsibility beyond just making money. For example, adding “needless features” or creating new product models every year might be good for short-term sales, but it can be bad for the environment (e.g. e-waste from constantly discarded devices) and even bad for users who are forced to continually relearn or repurchase. Designers should consider sustainability and avoid change for change’s sake. There’s also an obligation to design inclusively, so that products help all kinds of people and do not exclude or disadvantage certain groups. Norman argues that doing the right thing ethically can align with long-term success: products that truly meet human needs and respect users tend to earn loyalty and positive word-of-mouth.

Finally, Norman ties everything together by reminding us that a design isn’t truly great until it succeeds out in the world. A quote encapsulates this: a design is only successful if people buy it, use it, and enjoy it – if no one uses your beautifully designed product, then by definition it has failed. This underscores the partnership between design and business: you might craft a wonderfully usable gadget, but you also need marketing, timing, and a receptive audience to make it real. Conversely, a purely marketing-driven product with poor design will eventually falter because users will be frustrated (and nowadays they’ll voice that frustration). Thus, the best outcome is when business goals and user goals align – companies prosper by delivering products that genuinely delight users. Norman encourages designers to gain at least a basic understanding of the business side: to communicate with marketers, engineers, and executives in terms of value and strategy, not just form and function. When designers know about sales, marketing, and production, they can better champion good design in terms that make sense to the whole team, ensuring usability doesn’t get lost in the crunch of a product launch.

In conclusion, The Design of Everyday Things ends on an optimistic yet challenging note. Norman has taken us from the nitty-gritty of door handles and error messages all the way to corporate strategy and future tech. The core message across all chapters is consistent: design for real people – understand how we think, what we feel, what we need – and remember that people (not technology for its own sake) should remain at the center. If a product is intuitive, forgiving, and delightful, it will not only avoid the “everyday psychopathologies” of bad design, it will likely succeed in the marketplace as well. The thoughtful, human-centered approach outlined by Norman is a timeless blueprint for anyone creating things, whether physical or digital, that are meant to be used by everyday folks. Each chapter’s insights build the case that great design is possible when we empathize with users, apply psychological principles, and never forget that even in a high-tech world, our satisfaction still hinges on those simple, human-friendly everyday things.

Let's stay in touch. Follow me for more thoughts and updates.