active draft. a technical sketch. general, before special — alignment, before distraction

Sam Harris knows:

—Why all indirect approaches to artificial general intelligence, like LLMs, will fail to achieve human-like general intelligence and, additionally, cannot achieve human-like consciousness.

Because: ‘consciousness is distinct from its contents’—Sam. $$\ldots$$

The field of AI can’t define general intelligence precisely enough for engineers to directly design and build generally intelligent systems.

Instead, the field focuses on indirect approaches, like LLMs, in the hope that a distinct system is either redundant, or simply emerges. $$\ldots$$

Sam knows better.

Consider Sam’s framing – consciousness is distinct from its contents – rephrased:

A system is distinct from its signals

Explicitly: consider consciousness as the system; whatever arises in consciousness is a signal within that system. $$\ldots$$

Glimpse the problem?

—All indirect approaches, like LLMs, consider only signals, isolated from any systemic context, by definition.

Consciousness, our director of contextual evaluation, is the other bit. The undefined problem. Extrinsic to, and distinct from, the signals. $$\ldots$$
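To make the framing concrete, here is a minimal toy sketch in Python. Every name in it (`Signal`, `System`, `llm`) is hypothetical, introduced purely for illustration; it is not a theory of consciousness, only a picture of where the indirect approach sits: a mapping from signals to signals, with the evaluating system left undefined.

```python
# Illustrative toy only: Signal, System, and llm are hypothetical names
# introduced to picture the framing, not to model consciousness.

from dataclasses import dataclass, field


@dataclass
class Signal:
    """Whatever arises within the system: a thought, a prompt, a completion."""
    content: str


@dataclass
class System:
    """The thing signals arise in: it holds context and evaluates signals."""
    context: dict = field(default_factory=dict)

    def evaluate(self, signal: Signal) -> bool:
        # Contextual evaluation is exactly the undefined problem in the text;
        # nothing here pretends to solve it.
        raise NotImplementedError("the undefined problem")


def llm(prompt: Signal) -> Signal:
    """An indirect approach: a signal-to-signal mapping, with no System in the loop."""
    return Signal(content=f"a completion of: {prompt.content!r}")
```

The only point of the sketch is its shape: `llm` takes a signal and returns a signal, while the contextual evaluation the text calls consciousness lives in `System.evaluate`, which remains unimplemented. $$\ldots$$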

The biggest clue that something important is missing (& what it is) is that, for the indirect approach to work at all, actual human intervention remains critical: to curate, to constrain (legal & safety), and to assess every single result for suitability and fitness for purpose. $$\ldots$$
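The same observation can be drawn as a pipeline. A hedged sketch, assuming hypothetical placeholder functions (`generate`, `within_constraints`, `fit_for_purpose`) named here only for illustration: every step other than `generate` is the part people supply today.

```python
# Hypothetical pipeline, for illustration only. generate() stands in for the
# indirect approach; curation, constraint, and assessment are the
# human-supplied steps the text points to.
from typing import Optional


def generate(prompt: str) -> str:
    """Stand-in for an LLM: signal in, signal out."""
    return f"draft response to: {prompt}"


def within_constraints(draft: str) -> bool:
    """Stand-in for legal and safety constraints, authored and enforced by people."""
    return "disallowed" not in draft


def fit_for_purpose(draft: str) -> bool:
    """Stand-in for the human judgment applied to every single result."""
    return bool(draft.strip())


def pipeline(prompt: str) -> Optional[str]:
    draft = generate(prompt)            # the signals
    if not within_constraints(draft):   # human-authored constraint
        return None
    if not fit_for_purpose(draft):      # human assessment
        return None
    return draft
```

Strip out the human-supplied predicates and what remains is `generate` alone: signals, with no system. $$\ldots$$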

LLMs are cool, and the technology will no doubt become part of future generally intelligent systems, but only within a system distinct from its signals, and only after we’ve figured out the undefined problem. $$\ldots$$

So, how do we revisit that undefined problem in modern times?

—And directly observe, explore, measure, and define our system of general intelligence, distinct from the distractions of any and all of its signals?

Sam Harris knows. $$\ldots$$

uh, full disclosure—Sam Harris may not exactly know that he knows