active draft. a technical sketch. general before special; alignment before distraction

The field of artificial intelligence aspires to build generally intelligent artificial systems.

The field of artificial intelligence cannot achieve this objective without a fundamentally new approach.

 


problems

| # | name | note |
|---|------|------|
| 1 | ai objective definition problem | insufficient technical specification to directly build (engineer, materially construct) a generally intelligent system |
| 2 | ai problem definition problem | cannot technically define (design, architect) a generally intelligent system (the objective problem, which must be solved) |
| 3 | ai general intelligence problem | insufficient understanding / technical specification |
| 4 | ai human intelligence problem | insufficient understanding / technical specification |
| 5 | ai consciousness problem | insufficient understanding / technical specification |
| 6 | ai cognitive mechanics problem | insufficient understanding / technical specification |
| 7 | ai paradigm problem | no coherent paradigm to transform the open problem-space into a closed puzzle-space; no way to formally reconcile efforts with the objective |
| 8 | ai problem-solution fit problem | present general-ish ai products do not satisfy problem-solution fit, yet are being iterated on by the methods of product-market fit |
| 9 | ai current approach problem | the current approach iterates within the wrong phenomenal scope (information, or knowledge, instead of the system of intelligence) |
| 10 | ai evolved architecture problem | no technical specification concordant with evolutionary principles |
| 11 | ai coherence problem | output coherence follows from explicable, traceable, debuggable, fixable, systemic coherence |
| 12 | ai alignment problem | understanding the alignment problem frames the objective definition problem, and ~all others |
| 13 | ai language problem | language is not knowledge; the map is not the territory, the word is not the thing: the word isn't even the map |
| 14 | ai identity problem | present approaches to ai are moon-shot ventures masquerading as science or engineering (~all directly related science and engineering is directed at the approach, rather than at the actual, albeit undefined, problem space) |
| 15 | ai problem ownership problem | who is driving? who ought to be? |
| 16 | ai service outage problem | every failed, fundamentally unsatisfactory action ought to be seen as a service outage, and a reliability concern |
| 17 | ai venture problem | venture 101: problem-solution fit precedes product-market fit; problem-solution fit requires a defined problem |
| 18 | ai best case problem | the best case appears to be that the current approach solves problems which have not been defined, in such a way that we can reverse engineer both the solution and the question, without first suffering problems caused by undefined, unknown, circumstantially uncontrollable 'magic' doing substantial or irrevocable harm |
| 19 | ai maintenance problem | cannot debug and maintain an opaque system |
| 20 | ai prompt-engineer problem | prompt engineering is a direct example of the misdirected attentions of present approaches. future agi will be our prompt engineer, our contextual, circumstantial translator, for everything: services, people, media (fiction and non), etc. the simplest way to define intelligence is to list what humans still have to do to build and use present-day ai systems. beyond a point, particulars of the current implementation are pure distraction |
| 21 | ai liability problem | part of the method and rigour of engineering relates to the consequences, and ownership of the liability, of misaligned or failed architectures (software, hardware, buildings, products, food, etc). at no point is "we don't know how it works, nor how it will behave in normal operational circumstances" acceptable |

 


summary

The field of artificial intelligence does not appear to know:

  • How to define the problem it is attempting to solve ^[ General intelligence, dependent on human intelligence, dependent on consciousness ]
  • How to define the proposed solution to its undefined problem ^[ No coherent approach to directly solving the undefined problem ]
  • How the proposed solution (which it thinks might solve the undefined problem for it) actually works ^[ No mechanical, operational schematic ]
  • How to direct the proposed solution to understand precisely what is needed, and comply reliably ^[ Insufficient integration and interface ]
  • How to direct the proposed solution to understand precisely what isn't needed, and comply reliably ^[ Insufficiently integrated and operable safety apparatus ]
  • How to know, in advance, all possible internal system states of the proposed solution as they apply to each circumstance or situation, in order to determine operational or circumstantial suitability, and safety ^[ You know: problem-solution fit, health and safety, etc ]
  • That the proposed solution (which the field thinks might solve the undefined problem for it) can't ^[ The field of artificial intelligence is iterating within the wrong phenomenal scope ]

 


#tbc