Make Me Think · Chapter 9

Feel, Unpack, Diagnose, Prescribe

How expert diagnosis actually works, why feeling comes before thinking, and the four-step process that turns “something’s off” into a fix.

A client sent me a link to their homepage last year with the subject line “something’s off.” No other context. No analytics. No brief. Just a URL and three words.

I opened the page. My stomach tightened. My eyes couldn’t land anywhere. Dense text above the fold, a rotating carousel of stock photos, eleven navigation items, and a chatbot popup that appeared before the page finished loading. The overall feeling: I’m being talked at by someone who doesn’t realize they’re shouting.

That feeling happened in about two seconds. I hadn’t read a word of copy. I hadn’t checked the analytics. I hadn’t looked at the mobile experience. But I already knew the shape of the problem.

Here’s the thing: that two-second gut response wasn’t a hunch. It was a diagnosis. I just didn’t have the vocabulary for it yet.


The Doctor’s Glance

In medical education, there’s a concept called the doorway diagnosis. An experienced physician walks into an exam room, and before they ask a single question, before they touch a stethoscope, they’ve already formed a working hypothesis. Skin color, posture, breathing pattern, facial expression. The patient hasn’t spoken yet, but the doctor has already started narrowing the possibilities.

This isn’t magic. It’s pattern recognition running on a database built from thousands of prior patients. Ericsson and colleagues spent decades studying expert performance and found that what separates experts from novices isn’t speed of thought. It’s the size and organization of their pattern library (Ericsson, Krampe & Tesch-Römer, 1993). A cardiologist who has listened to ten thousand heart sounds doesn’t consciously compare each murmur to a textbook diagram. The recognition fires automatically. The pattern matches before the conscious mind catches up.

Gary Klein studied this across firefighters, military commanders, and intensive care nurses. His Recognition-Primed Decision model (Klein, 1998) showed that experts in high-stakes environments almost never compare options the way decision theory says they should. They don’t line up alternatives and weigh pros and cons. Instead, they recognize the situation as a variant of something they’ve seen before, mentally simulate the first plausible response, and act. The recognition happens fast. The deliberation happens after, as a check, not as the primary engine.

Design diagnosis works the same way. When I open that client’s homepage and my stomach tightens, that’s my pattern library firing. Fifteen years of looking at websites, thousands of pages across hundreds of clients, compressed into a gut signal that says: this page is fighting itself.

The feeling isn’t soft. It’s the fastest diagnostic instrument I have.


Why the Sequence Matters

Over those fifteen years, I’ve distilled the diagnostic process into four steps. They sound obvious when I list them. The sequence seems almost too simple to call a methodology. But the order is load-bearing. Skip a step, reverse two, or collapse them into one, and the diagnosis breaks.

The four steps: Feel. Unpack. Diagnose. Prescribe.

Each one operates in a different cognitive mode. Feel is pre-verbal. Unpack is analytical. Diagnose is forensic. Prescribe is architectural. They use different parts of the brain, and they need to happen in sequence because each step generates the input for the next one.

This isn’t just my personal preference for how to work. The sequence mirrors how expert reasoning actually operates. Croskerry (2009) mapped dual-process theory onto clinical diagnosis and found that effective diagnosticians oscillate between intuitive recognition (System 1) and analytical verification (System 2), but they start with intuition. The gut response narrows the search space. The analysis confirms or corrects it.

Starting with analysis, without an intuitive anchor, produces what Croskerry calls “premature closure,” locking onto the first measurable thing you find instead of the thing that actually matters.

In design, premature closure looks like this: someone opens the analytics, sees a high bounce rate, and immediately starts redesigning the hero section. They skipped feeling the page. They skipped unpacking which principle is violated. They jumped from symptom to treatment and missed the disease.


Step 1: Feel

I arrive at the page and I let the emotional response fire before language kicks in. Before I start naming things, before I look for specific problems, before I open DevTools or check the analytics dashboard. I just feel it.

That “something’s off” homepage? My Step 1 took two seconds. Stomach tightened. Eyes couldn’t land anywhere. The compressed signal my System 1 returned: “being shouted at.” I wrote that down. I didn’t name the eleven nav items or the chatbot popup. Not yet.

The vocabulary of Feel is simple and physical. “This makes me feel rushed.” “This makes me feel like I can’t find the thing I need.” “This feels cheap.” “This feels cold.” “This feels like nobody cared.”

Notice the vocabulary these descriptions draw on: “rough,” “heavy,” “sharp,” “warm,” “sour.” When we describe first impressions, we instinctively reach for sensory words. This is not a quirk of casual speech. It is grounded in how cognition actually works.

Lakoff and Johnson (1980) laid the foundation: abstract concepts are structured through physical metaphors. We understand “heavy” topics, “warm” people, “rough” experiences because cognition is embodied. It runs on a body that has weight, temperature, texture. The metaphors are not decorative. They are the architecture.

The experimental evidence is striking. Williams and Bargh (2008) had participants hold either a warm or cold cup of coffee, then rate a stranger’s personality. Warm cup holders rated the stranger as having a “warmer” personality. Physical sensation directly influenced social judgment. Ackerman, Nocera, and Bargh (2010) extended this across multiple senses: the weight of a clipboard made résumés feel more “important,” rough textures made social interactions feel more “difficult,” and hard objects made negotiators more “rigid.” Haptic sensation shaped abstract evaluation, reliably, across studies.

This is why the Feel step works. When a designer says “this site feels heavy” or “that layout feels rough,” they are not being vague. They are accessing a perceptual system that maps physical sensation to abstract evaluation. Sensory language IS diagnostic language. It is the fastest path to what the unconscious evaluation actually detected.

The feeling itself is the diagnostic data. Not an interpretation. Not a theory about what’s causing it. The raw emotional response.

In an industry obsessed with data and measurable outcomes, “I felt a thing” sounds like it belongs in a therapy session, not a design review. But that feeling IS your System 1 processing the entire page simultaneously. It’s the predictive model I described in Chapter 2, running a full-stack pattern match against thousands of stored templates and returning a single compressed signal. “Off.” “Good.” “Wrong.” “Trustworthy.” “Sketchy.”

Damasio’s somatic marker hypothesis (1994) explains why this works. Emotional responses aren’t separate from rational decision-making. They’re integral to it. The body tags situations with feelings based on prior experience, and those tags function as rapid-fire heuristics that narrow the decision space before conscious analysis even begins.

Patients with damage to the ventromedial prefrontal cortex, the region that generates these somatic markers, can reason perfectly well in abstract terms but make catastrophic real-world decisions. They lost the feeling, and without it, they lost the ability to prioritize.

The feeling is the fastest diagnostic instrument you have. It’s also the most honest one, because it fires before your conscious brain starts rationalizing, making excuses, or looking for things to praise.

Emotional Primacy: Why the Feeling Arrives First

This isn’t a metaphor. Feelings are the fastest response system the central nervous system produces. They arrive before language can catch up, before your brain can put words to what it detected. That’s not a design choice I made for this methodology. It’s how human neurology actually works.

LeDoux (1996) mapped the pathway. When sensory input hits the thalamus, it splits. One route goes up through the cortex for detailed, conscious analysis. The other route, what LeDoux called the “low road,” shoots straight to the amygdala, bypassing the cortex entirely.

That low road delivers an emotional response in as little as 12 milliseconds. The cortical route, the one that gives you conscious awareness and the words to describe what you’re seeing, takes 200 to 300 milliseconds. By the time your conscious mind knows what it’s looking at, your emotional system has already responded.

Zajonc (1980) argued this from the behavioral side before the neuroscience confirmed it. His paper “Feeling and thinking: Preferences need no inferences” presented evidence that affective reactions can occur independently of, and prior to, cognitive processing.

Emotions are not post-cognitive. They are pre-cognitive. You don’t evaluate something and then feel about it. You feel about it and then evaluate it. The preference forms before the reasoning can justify or explain it.

Now compare this to language. Wernicke’s area, the region of the brain responsible for language comprehension, operates on a completely different timescale. Kutas and Hillyard (1980) discovered the N400 component, an electrical signature the brain produces when it encounters a word and tries to make meaning of it. That component peaks around 400 milliseconds after stimulus onset, with activity spanning roughly 200 to 500 milliseconds. That is the speed of language comprehension: a quarter-second to half-second for your brain to process a single word in context.

The gap between feeling and naming is neurologically real and measurable. Your amygdala responds at 12 milliseconds. Your language system responds at 200 to 500 milliseconds. That is roughly a 15x to 40x speed difference.

This is why you laugh before you can explain why something is funny. Why you recoil before you can name what’s wrong. Why you feel a certain way about a webpage, a person, a room, before you can articulate it. The 50-millisecond verdict I keep returning to in this book isn’t just fast. It’s pre-linguistic. The feeling arrives first. Language follows.

Damasio’s somatic marker work (1994), already referenced above, converges on the same point. The body tags situations with feelings before the conscious mind reasons about them. Those somatic markers are the CNS’s shortcut, and they operate on a timescale that language cannot match.

This is what I mean by “50 milliseconds.” The Feel step is not about being imprecise or touchy-feely. It is about accessing the fastest, most honest evaluation system the brain has. By the time you can say what’s wrong, you’ve already felt it. The diagnostic sequence starts with Feel because that is where the data arrives first.

My ADHD is actually an advantage here. I feel cognitive load more acutely than most users. Friction that a neurotypical designer might walk past without noticing (a cluttered sidebar, an ambiguous label) registers for me as genuine discomfort. The same sensitivity that makes filling out government forms painful makes me a better diagnostic instrument. My perceptual system is tuned to a lower threshold. That’s a liability in daily life and an asset in design work.

The critical discipline in Step 1 is not acting on the feeling. Not yet. The feeling is data. It tells me something is wrong and points in a direction. But if I jump from “this feels cheap” straight to “let’s fix the typography,” I’ve collapsed four steps into one. I’ve skipped the unpacking, the diagnosis, and the requirements gathering. I’ll probably land on a reasonable fix. But I’ll miss the systemic issue underneath it.

Sit with the feeling. Name it. Write it down. Then move to Step 2.


Step 2: Unpack

“Why does this feel X?”

Now I’m looking for the broken rule. Not the specific element (though I’ll get there), but the underlying principle the page violates. Something made me feel rushed. Why? Something made me feel like the site was untrustworthy. What pattern is generating that signal?

This is the transition from System 1 to System 2. The gut response has narrowed the search space. Now the analytical mind works within that narrowed field instead of trying to evaluate everything at once. Klein (1998) found that experts who skipped this step, who acted purely on recognition without unpacking, made faster decisions but more errors. The experts who paused to mentally simulate their intuitive read before acting caught problems that pure recognition missed.

Back to that homepage. Why “being shouted at”? I started cataloging which principles were broken.

Don’t overload. Eleven navigation items, three competing calls to action, a rotating carousel, and a chatbot popup. The brain’s working memory holds roughly four chunks at a time (Cowan, 2001). This page was demanding twelve.

Every element beyond the limit becomes noise that the brain has to actively filter, spending processing bandwidth on things that don’t carry meaning. Hick’s Law (Hick, 1952) predicted this decades ago: decision time increases logarithmically with the number of choices. More options doesn’t mean more engagement. It means more paralysis.
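Hick’s Law can be made concrete with one line of arithmetic. The sketch below is illustrative only: the 150 ms per-bit rate is a placeholder, not an empirical constant, and the point is the shape of the curve, not the numbers.

```python
import math

def hick_decision_time(n_choices, rate_ms=150.0):
    """Hick's Law: mean decision time grows with the log of the number
    of equally likely choices. rate_ms is an illustrative per-bit
    processing rate, not a measured value."""
    return rate_ms * math.log2(n_choices + 1)

# Doubling the menu does not double the decision time, but every added
# choice still costs something the visitor pays before acting.
for n in (4, 8, 16):
    print(n, "choices:", round(hick_decision_time(n)), "ms")
```

The logarithm is the consolation, not the excuse: growth is sublinear, but it never stops, and on a homepage every added millisecond is spent before the visitor has decided to stay.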

A caveat: less doesn’t always mean more. In professional tooling (code editors, design software, analytics dashboards), information density IS the product. Kalyuga and colleagues (2003) documented the expertise reversal effect: instructional designs that help novices can actively hurt experts. Simplified interfaces strip away the context that experienced users rely on to work efficiently. The overload principle applies to consumer-facing design where the user is deciding whether to engage. It does not apply to tools where the user has already committed and needs context to perform. Know the difference.

Be specific. The copy said “innovative solutions” and “best-in-class service” and “driven by passion.” None of it told me what the company actually does or why I should care. Vague language triggers the same suspicion as vague eye contact. If you can’t be specific about what you do, I start wondering why.

Pennebaker’s text analysis research (2011) shows that concrete, specific language correlates with perceived honesty, while abstract language correlates with perceived deception. Your visitors’ brains are running this evaluation whether you want them to or not.

Get to your point. Three scrolls deep and I still didn’t know what this business sells. The hero section is the table visit at 1:10 in a restaurant. You have seconds, not scrolls. Lindgaard’s research (2006) on rapid aesthetic judgments found that users form reliable impressions of websites within 50 milliseconds. If the first viewport doesn’t communicate what you are and who you’re for, everything below it is a conversation with someone who already left.

Signal-to-noise. The ratio of elements that carry meaning to elements that don’t. Half the elements on this page were decorative filler. Stock photography that could be on any site in any industry. Animated flourishes that moved without purpose.

When everything is visually loud, nothing stands out. When nothing stands out, the brain has no hierarchy to follow and falls back on scanning, which is slow and effortful. Every element that doesn’t carry meaning is a processing fluency tax on every visitor.

Reber, Schwarz, and Winkielman (2004) demonstrated that processing fluency, the ease with which information is processed, directly affects preference judgments. Harder to process means less trustworthy, less likeable, less credible. Visual noise isn’t neutral. It actively degrades the experience.

Expectation mismatch. The trickiest violation. Sometimes the design isn’t breaking any obvious rule. The cognitive load is reasonable, the copy is specific enough, the structure is clear. But the feeling doesn’t match the brand’s positioning. A luxury brand that feels generic. A startup that feels corporate. A personal service that feels automated.

That gap between what the brand says and what the design signals is a processing fluency violation at the identity level. The verbal system promises one thing and the perceptual system delivers another. The visitor can’t name the conflict, but they feel it.

Four of those five violations were present in a single viewport. The feeling was right. It just needed names.

Over fifteen years, I’ve noticed the same violations show up again and again. Different industries, different budgets, different teams. The surface symptoms change. The broken rules don’t. That’s what makes unpacking a skill rather than a checklist: the ability to trace a feeling back to the principle it’s violating, even when the surface looks nothing like the last time you saw that principle break.


Step 3: Diagnose

Unpacking tells me which rules are broken. Diagnosis tells me which layer is failing.

This is where data enters the picture. Steps 1 and 2 are perceptual and analytical. Step 3 is forensic. I’m mapping the broken rules I identified in Step 2 to specific layers of the framework and cross-referencing against behavioral evidence.

The distinction matters because different layers produce similar symptoms. Low conversion could be a Layer 1 failure (people bounce before they see anything) or a Layer 4 failure (people engage but the path to action is broken). The treatment is completely different. Prescribing without diagnosing the layer is like treating a headache without checking whether it’s dehydration or a brain tumor. The symptom is the same. The cause determines the intervention.

I pulled up that client’s analytics. High bounce rate, low time on site. Layer 1: the first impression was doing the damage before anything else got a chance to work. But visitors who made it past the homepage showed decent engagement with low conversion. Layer 4: the trail was broken too. Two layers failing, and the dependency stack told me which one to address first.

Over years of doing this, I’ve built a pattern library that maps data signatures to layers.

High bounce rate, low time on site: the first impression is failing. Visitors are landing, getting a gut response (Step 1 at user scale), and leaving before they process anything else. That’s Layer 1, the 50-millisecond verdict. The page looks wrong, feels wrong, or fails to communicate what it is before the visitor’s autopilot carries them away.

High time on site, low conversion: the visitor is engaged but not acting. They’re reading, scrolling, maybe exploring. But something isn’t compelling them to take the next step. That’s usually Layer 4, decision architecture. The trail is broken. There’s no clear path from interest to action, or the path exists but doesn’t feel natural, so the visitor stalls.

High add-to-cart, low checkout completion: might be L0 (cognitive load in the checkout flow itself) or Layer 3 (price perception, where the perceived value doesn’t match the number at the payment screen).

Strong desktop, poor mobile: Layer 2, processing fluency. The visual system works at one scale but breaks at another. Spacing that breathes on desktop collapses into a wall on mobile. Typography that anchors hierarchy on a wide screen loses its rhythm on a narrow one.

What users say they want doesn’t match what they do. The classic Layer 3 signal, perception bias. Users tell you in surveys they want more features, more information, more options. Their behavior shows them converting on the simplest, most focused version of the page. Nisbett and Wilson (1977) demonstrated this gap decades ago: people regularly misidentify the causes of their own behavior. The gap between stated preference and revealed preference is where perception bias lives, and recognizing it is one of the most important skills a designer can develop.
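The pattern library above can be sketched as a lookup. The metric names and thresholds below are hypothetical placeholders (real cutoffs depend on a site’s own baseline), and the function is my paraphrase of the mappings, not a formula from any framework. Its output narrows hypotheses; it does not replace judgment.

```python
# Map analytics signatures to layer hypotheses, as described above.
# All thresholds are illustrative; calibrate against your own baseline.

def suggest_layers(m):
    hypotheses = []
    if m.get("bounce_rate", 0) > 0.6 and m.get("avg_time_s", 0) < 15:
        hypotheses.append("Layer 1: first impression failing")
    if m.get("avg_time_s", 0) > 120 and m.get("conversion_rate", 1) < 0.01:
        hypotheses.append("Layer 4: decision architecture broken")
    if m.get("add_to_cart_rate", 0) > 0.10 and m.get("checkout_rate", 1) < 0.30:
        hypotheses.append("L0 or Layer 3: checkout load or price perception")
    if m.get("mobile_conv", 1) < 0.5 * m.get("desktop_conv", 1):
        hypotheses.append("Layer 2: fluency breaks at mobile scale")
    return hypotheses

# High bounce and short visits point straight at the first impression.
print(suggest_layers({"bounce_rate": 0.72, "avg_time_s": 9}))
```

A dictionary of thresholds is the checklist version; the skill is knowing when the numbers are lying, which is why the feeling comes first.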

This diagnostic step is where the physician analogy becomes most precise. A good diagnostician doesn’t just recognize symptoms. They map symptoms to systems. They understand which organs are upstream and downstream of each other. They know that treating a downstream symptom while the upstream cause persists is futile. The PFD layer stack functions the same way: each layer depends on the ones below it. Diagnosis tells me where in the stack the failure lives, so the prescription targets the right level.


Step 4: Prescribe

One rule governs every prescription: fix the lowest failing layer first.

This is the dependency stack from the framework, applied as a treatment protocol. The five layers aren’t independent. They build on each other. Layer 1 depends on L0. Layer 2 depends on Layer 1. And so on. If L0 is broken, if the user can’t figure out how to navigate, then fixing Layer 4 (the decision trail) is pointless. The user never gets there.

Fixing decision architecture when the first impression is broken is like optimizing a trail that nobody enters. You can build the most elegant conversion path in the world, but if the visitor bounced in 50 milliseconds because the site looked like a scam, the path is invisible.
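The prescription rule is simple enough to state as code, which is part of its value: it removes the debate. A minimal sketch, assuming the chapter’s five-layer stack, with lower index meaning more upstream:

```python
# Treatment protocol: always address the lowest failing layer first,
# because each layer depends on everything below it.

LAYER_ORDER = ["L0", "Layer 1", "Layer 2", "Layer 3", "Layer 4"]

def next_fix(failing):
    """Return the lowest failing layer, the only one worth fixing now."""
    for layer in LAYER_ORDER:
        if layer in failing:
            return layer
    return None

# Two layers failing: the stack says start at the lower one.
print(next_fix({"Layer 4", "Layer 1"}))  # → Layer 1
```

Note what the function refuses to return: the exciting problem. It returns the structural one.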

That homepage had Layer 1 and Layer 4 both failing. Layer 1 is lower, so that’s where I started. The stock carousel became a single hero image with one clear sentence: what the company does and who it’s for. Eleven nav items became five. The chatbot popup got killed entirely. Those changes alone moved the bounce rate, because the first impression stopped triggering “being shouted at” and started triggering “I can see what this is.”

Then I moved up to Layer 4 and restructured the page so the path from “what is this” to “I want this” felt like walking, not searching.

This sounds obvious. It’s also where most design processes go wrong. Teams want to fix the exciting problem: the conversion funnel, the checkout flow, the pricing page. Those are Layer 4 problems, and they’re genuinely important. But if the data shows the real drop-off happening at the homepage, at the hero, at the first impression, then the exciting problem isn’t the actual problem. The actual problem is less glamorous: typography that doesn’t signal professionalism, stock photography that feels generic, a value proposition buried below three screens of scroll.

Tversky and Kahneman (1974) identified this as the availability heuristic: people overweight problems that are mentally accessible (the pricing page everyone’s been arguing about) and underweight problems that are harder to articulate (the vague feeling that the homepage “doesn’t feel right”). The dependency stack is the antidote. It replaces subjective priority with structural priority. Which layer is it? Fix the lower one first. Not because it’s more important in some abstract sense, but because downstream layers literally cannot function until the upstream ones work.


The Last 10%

One pattern I see constantly, across industries and budgets. The difference between a C-minus design and an A-plus design is usually typography, spacing, and styling. Small systematic tweaks that take something from 90% done to fully resolved.

Clients often think they need a redesign when they actually need refinement.

Those last 5 to 10 percent of changes are disproportionately impactful because processing fluency compounds. One slightly wrong font weight is barely noticeable. Twenty slightly wrong details create a cumulative feeling of “something’s off.” Fix the twenty small things and the whole page snaps into focus.

Alter and Oppenheimer (2009) demonstrated this compounding effect experimentally. They found that even minor fluency manipulations, changing a font from slightly hard to read to slightly easy to read, shifted judgments of truth, confidence, and trust. The effect isn’t about any single element. It’s about the accumulated ease or difficulty of processing the whole. Twenty small friction points don’t add up linearly. They multiply, because each one makes the next one feel slightly worse.
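The compounding can be sketched with a toy model: treat each slightly-wrong detail as multiplying perceived fluency by a factor just under one. The 0.98 below is a made-up number, not anything measured; the shape of the decay is the point.

```python
# Toy model of compounding friction. Each flaw multiplies perceived
# fluency by per_flaw; the factor is illustrative, not empirical.

def fluency(n_flaws, per_flaw=0.98):
    return per_flaw ** n_flaws

print(round(fluency(1), 2))   # → 0.98  (one flaw: barely noticeable)
print(round(fluency(20), 2))  # → 0.67  (twenty flaws: "something's off")
```

No single factor of 0.98 is worth a design review. The product of twenty of them is the difference between a page that converts and one that feels vaguely wrong.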

This is why the diagnostic sequence matters even for “small” fixes. A designer who skips the feeling step and jumps straight to “the font weight is wrong” will fix the font weight. But they’ll miss the nineteen other small things that are compounding with it, because they never felt the cumulative weight. The feeling told me “something’s off.” The unpacking told me it’s twenty small things, not one big thing. The diagnosis told me they’re all Layer 2. The prescription: systematic refinement, not redesign.


Where I Learned This

I first noticed this sequence in action at the door of a nightclub in Santa Barbara, years before I had a name for it.

When someone walked up, I felt something before I could articulate it. Something about their energy, their posture, the way they approached the line. I unpacked why: what specifically was triggering that read? Eyes glassy, speech slurred, posture unstable. I diagnosed: too intoxicated, not “bad person.” Then I prescribed: slow them down with conversation, redirect to water, reassess in ten minutes. Or turn them away cleanly, before the situation escalated.

Feel, Unpack, Diagnose, Prescribe. That order. Every time.

When I skipped steps, things went wrong. If I jumped from feeling to prescribing (gut says trouble, turn them away), I’d sometimes refuse someone who was just nervous. If I skipped feeling entirely and went straight to diagnosis (checking ID mechanically without reading the person), I’d miss the guy with a valid ID who was clearly about to start a fight. The steps existed because each one caught things the others missed.

Bouncing taught me something that took years to fully understand. The best diagnostic decisions aren’t purely intuitive and they aren’t purely analytical. They oscillate between both. Kahneman (2011) described this as the interplay between System 1 and System 2, where intuition generates candidates and analysis evaluates them. Klein and Kahneman eventually co-authored a paper (2009) reconciling their frameworks, agreeing that expert intuition is reliable when two conditions are met: a regular environment with valid cues, and sufficient practice to learn those cues.

A nightclub door at 1 AM on a Saturday is a regular environment with valid cues. So is a website. The patterns repeat. The cues are learnable. The intuition, once trained, is fast and accurate. But it needs the analytical check. It needs the unpacking and the diagnosis. Without them, intuition is just guessing with confidence.


The Whole Sequence

Feel, Unpack, Diagnose, Prescribe. Four steps, always in order.

That client’s “something’s off” became a page that converts. Not because I had a magic fix. Because I felt it first, unpacked why, diagnosed which layers were breaking, and prescribed in dependency order. The feeling led to the rules. The rules led to the layers. The layers led to the fix.

The alternative to this sequence is every shortcut I’ve watched fail in fifteen years. Jumping to solutions before feeling the problem. Copying competitors instead of unpacking the real violation. Treating the exciting symptom instead of the structural cause. The steps exist because every way of skipping them has a name, and I’ve seen each one cost someone real money.

The sequence is the methodology. The layers give you the map. And the discipline of not proposing solutions until you understand the problem is the hardest skill in design, because every instinct tells you to jump to the fix.

Fifteen years in, I still have to force myself to slow down. To feel first. To let the feeling teach me something before I decide what it means.

Next: “Music, Humor, Stories,” on the three tools that change how people feel about anything, ranked by power.

Key Terms

Feel, Unpack, Diagnose, Prescribe: The four-step diagnostic sequence. Feel is pre-verbal (System 1). Unpack is analytical (System 2). Diagnose is forensic (mapping to layers). Prescribe is architectural (fixing in dependency order). Sequence is load-bearing.
Doorway diagnosis: A medical concept. An experienced physician forms a working hypothesis before asking a single question. Pattern recognition running on thousands of prior cases. Design diagnosis works identically.
Recognition-Primed Decision: Klein (1998). How experts actually decide in high-stakes environments: recognize the situation type, mentally simulate the first plausible response, act. Recognition first, deliberation as a check.
Emotional primacy: Zajonc (1980), LeDoux (1996). Affective responses arrive before cognitive processing. The amygdala’s low road responds in ~12 ms; language comprehension operates at 200–500 ms. The feeling is pre-linguistic. It arrives before you can name it.
Somatic markers: Damasio (1994). The body tags situations with feelings based on prior experience. These markers function as rapid heuristics that narrow the decision space before conscious analysis. Feeling is not the opposite of reasoning. It is reasoning’s fast lane.
Premature closure: Croskerry (2009). Locking onto the first measurable finding instead of the actual root cause. The diagnostic error that occurs when you skip from symptom to treatment without the intermediate steps.
Viewport count: The Layer 0 test. Count everything competing for attention in a single viewport. Working memory holds roughly four chunks (Cowan, 2001). If a screen demands twelve, L0 is failing.
Dependency stack: The treatment protocol. Fix the lowest failing layer first. Downstream layers cannot function until upstream ones work. Structural priority replaces subjective priority.

References

Zajonc (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151–175.
Kutas & Hillyard (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203–205.
Lakoff & Johnson (1980). Metaphors We Live By. University of Chicago Press.
LeDoux (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Simon & Schuster.
Ericsson, Krampe & Tesch-Römer (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.
Klein (1998). Sources of Power: How People Make Decisions. MIT Press.
Damasio (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam.
Croskerry (2009). A universal model of diagnostic reasoning. Academic Medicine, 84(8), 1022–1028.
Cowan (2001). The magical number 4 in short-term memory. Behavioral and Brain Sciences, 24(1), 87–114.
Hick (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11–26.
Pennebaker (2011). The Secret Life of Pronouns. Bloomsbury Press.
Lindgaard et al. (2006). Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour & Information Technology, 25(2), 115–126.
Williams & Bargh (2008). Experiencing physical warmth promotes interpersonal warmth. Science, 322(5901), 606–608.
Reber, Schwarz & Winkielman (2004). Processing fluency and aesthetic pleasure. Personality and Social Psychology Review, 8(4), 364–382.
Nisbett & Wilson (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259.
Tversky & Kahneman (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Alter & Oppenheimer (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13(3), 219–235.
Ackerman, Nocera & Bargh (2010). Incidental haptic sensations influence social judgments and decisions. Science, 328(5986), 1712–1715.
Kahneman (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kahneman & Klein (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.
Kalyuga et al. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31.
A note on how this was written: This series is AI-assisted. I provide the stories, the methodology, the case studies, and the editorial direction. AI helps me structure and draft. This is consistent with Perception-First Design’s own transparency principle: if I’m writing about perception, I should be honest about how the writing itself is produced.