🟰

Solvable AGI

As I mention in The Goal is Machine Consciousness, I find the target of AGI unsolvable because its definitions are vague.

Here is a definition we prefer: the ability to generate and integrate novel sensors and actuators to incrementally drive optimization of a machine's behavior policy function.

That’s it. We call these machines Universal Learning Machines to differentiate them from so-called AGI.

More dimensions of measurement allow you to create richer models of the environment. Richer models combined with new actuators expand your ability to perturb the environment. Together, these features let you learn a more accurate simulation of the environment.

A better simulation of the environment lets you increase your prediction accuracy. Increased prediction accuracy and a broader possibility space let you reach higher performance over time against whatever your behavior policy optimizes for.
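Here is a toy sketch of the sensor-to-prediction link above. Everything in it (the environment dynamics, the sensor names, the error metric) is hypothetical and only illustrates the claim that unmeasured dimensions put a floor under prediction error:

```python
import random

def environment(x1, x2):
    """Hypothetical true dynamics the agent is trying to predict."""
    return 2.0 * x1 + 3.0 * x2

def prediction_error(sensors):
    """Mean absolute prediction error when the model can only see
    the dimensions its sensors measure."""
    random.seed(0)
    coeffs = {"x1": 2.0, "x2": 3.0}  # the model knows the true coefficients
    total, n = 0.0, 1000
    for _ in range(n):
        state = {"x1": random.random(), "x2": random.random()}
        truth = environment(state["x1"], state["x2"])
        # Unmeasured dimensions are simply invisible to the model.
        pred = sum(coeffs[s] * state[s] for s in sensors)
        total += abs(truth - pred)
    return total / n

# Integrating a second sensor strictly reduces achievable prediction error.
print(prediction_error({"x1"}))        # large: x2 is unobserved
print(prediction_error({"x1", "x2"}))  # zero: the model sees everything
```

The point is not the arithmetic but the shape of the argument: each integrated sensor removes a source of irreducible error from the agent's simulation of its environment.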

We can work backwards from this definition: set aside the generation of sensors and actuators and focus on their on-the-fly integration (think of it as tool usage).
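A minimal sketch of what on-the-fly integration could look like, assuming a hypothetical agent API in which sensors and actuators are named callables registered at runtime, exactly like tools:

```python
# Hypothetical sketch, not the post's implementation: an agent that can
# integrate new sensors and actuators while running.

class UniversalLearner:
    def __init__(self):
        self.sensors = {}    # name -> zero-arg callable returning a reading
        self.actuators = {}  # name -> callable that perturbs the world

    def integrate_sensor(self, name, read_fn):
        self.sensors[name] = read_fn

    def integrate_actuator(self, name, act_fn):
        self.actuators[name] = act_fn

    def observe(self):
        # The observation vector grows as sensors are integrated.
        return {name: read() for name, read in self.sensors.items()}

    def act(self, name, *args):
        return self.actuators[name](*args)

# A toy world the agent can sense and perturb.
world = {"temperature": 21.0}
agent = UniversalLearner()
agent.integrate_sensor("thermometer", lambda: world["temperature"])
agent.integrate_actuator(
    "heater",
    lambda delta: world.__setitem__("temperature",
                                    world["temperature"] + delta))

agent.act("heater", 2.0)
print(agent.observe())  # {'thermometer': 23.0}
```

Integration here is just registration plus use; the harder, deferred problem from the full definition is generating the sensor or actuator in the first place.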