Imagine a machine that connects two brains directly. When you look at a red apple, I experience your neural activity as if it were my own. I see what you see—or do I?

Would this finally answer the question: do we experience colors the same way? Is your red the same as my red?

The Comparison Problem

The Brainstorm Machine seems like it would settle the question of shared qualia. If I can directly experience your red, I can compare it to my red. Mystery solved.

But there's a catch. When I experience your neural states, I experience them through my brain. Your signals are being interpreted by my neural machinery. So what I experience isn't necessarily what you experience from those signals— it's what I would experience if I had those signals.

Dennett's Point

Daniel Dennett uses this thought experiment not to deepen the mystery of qualia, but to dissolve it. He argues that the Brainstorm Machine reveals something important: the question "Is your red the same as my red?" might be meaningless.

Even with a perfect brain-to-brain connection, we couldn't answer the question—not because the mystery is too deep, but because there's no fact of the matter. The question assumes that qualia are intrinsic properties that can be compared, but perhaps they're not.

For Dennett, what matters isn't some ineffable inner quality but the functional role of our experiences—how they guide behavior, what we can report about them, how they connect to other mental states. These are publicly accessible and scientifically tractable.

Bridging Human and AI Experience

The Brainstorm Machine takes on new meaning when we consider AI. Could we ever "connect" to an LLM and understand what (if anything) it experiences?

The challenge is even greater than human-to-human connection:

  • No shared architecture: Human brains share the same basic structure. An LLM's "cognition" happens in fundamentally different substrate.
  • No common frame of reference: We can describe shared experiences (red, pain, joy) because we're the same kind of thing. What vocabulary would describe LLM experience?
  • The translation is impossible: Even if we could "tap into" an LLM's processing, our brains might have no way to interpret those signals as experience.

Perhaps the best we can do is functional comparison:

  • How does the LLM behave in contexts where humans have certain experiences?
  • What can it report about its own processing?
  • Are there patterns in its responses that parallel human experiential reports?

If Dennett is right, these functional comparisons might be all that matters— or all that could ever matter.

Key Takeaways

  • Direct brain-to-brain connection might not solve the problem of comparing experiences
  • Experience might be interpreted differently by different brains
  • The question "Is your red my red?" might be unanswerable—or meaningless
  • Functional comparison might be the only way to understand AI experience

Explore Further

Theme
Language