The Pace of Progress

How fast should AI advance?

As AI capabilities grow rapidly, there's debate about whether we're moving at the right speed.

All agree that safety matters—the debate is about how to achieve it.

Slow down

We need time for safety research to catch up. We don't fully understand these systems.

Keep building

We learn by building. Open development lets more people contribute to both capabilities and safety.

  • Can safety research keep pace with capabilities research?
  • Who decides what 'responsible' development looks like?

Understanding AI

What kind of thing is AI?

How we conceptualize AI shapes how we develop, regulate, and relate to it.

However we define it, AI is already transforming how we work and think.

Human collaboration

AI is not autonomous—it's a massive mashup of human knowledge. Calling it 'intelligence' obscures that it's made of us.

Alien intelligence

AI processes information in fundamentally different ways than humans do. It's 'alien' in how it thinks.

  • Does how we frame AI change how we should govern it?
  • Can something be both a tool and an intelligence?

Ways of Knowing

How should we understand AI?

AI can be approached through many lenses: mathematical, philosophical, historical, ethical.

Understanding AI requires multiple perspectives—technical, social, and humanistic.

Through the math

To understand what AI can and cannot do, we need to understand how it actually works—attention, gradients, embeddings.

Through the humanities

AI raises ancient questions about consciousness, creativity, and what makes us human. History and philosophy have much to offer.

  • How do we integrate different ways of knowing?
  • What questions does each approach miss?

Paths to Safety

How do we make AI beneficial?

There's broad agreement that AI should be safe. But there are different theories about how to achieve this.

Safety and benefit aren't opposed to progress—they're what makes progress worthwhile.

Built-in constraints

Safety should be baked in from the start through careful training, constitutional principles, and solving alignment.

Openness & oversight

Open models can be audited by many. The best systems combine AI capabilities with human judgment.

  • Can these approaches be combined?
  • Who gets to define 'beneficial'?

Societal Impact

What will AI mean for humanity?

AI is already reshaping work, creativity, and how we relate to information.

This is a pivotal moment. Our choices now will shape AI's long-term impact.

Civilizational shift

AI has hacked the operating system of human civilization—language. This is the first technology that can make decisions and create ideas.

It's up to us

AI doesn't have agency—we do. The outcomes depend on the choices we make about how to develop and deploy these systems.

  • How do we ensure benefits are widely shared?
  • What human capabilities should we protect or cultivate?

Weighing the Risks

What should we worry about?

AI presents various risks at different scales and timelines.

All risks matter. The question is how to prioritize and address them effectively.

Existential concerns

If we create something smarter than us, we need to be sure we can control it. This is a serious engineering challenge.

Present harms

The AI mystique distracts us. Real harms are happening now: bias, exploitation of data, erosion of human agency.

  • Do short-term and long-term concerns require different approaches?
  • How do we avoid letting any important risk fall through the cracks?
© funclosure 2025