
[Archive] Universal Learning Machines Whitepaper

I want to discuss the problem of building so-called general intelligence because it is a necessary step towards Von Neumann machines.

Our current framing of AGI, and our attempts to produce it, do not rest on a definition that can be discussed in logical and mathematical terms; without such a definition we cannot theorize about how to build it.

In this paper I propose an abstract description of general intelligence that is relatively unambiguous, which I'll call universal learning instead. Given this redefinition I then describe how these machines could work (the thesis), and end with ideas for building these general learning machines (the experiment).

One-line definition: a robotic general intelligence is one that can use its current sensors and actuators to make novel sensors and actuators, thereby giving itself access to new latent spaces to build models for, and so extend its overarching world model. How does one make new sensors? New actuators? Why would an intelligence make new sensors? Because it has a problem: a bad prediction from its existing model of the world, something its model does not generalize to. Put this way, the definition is actually simpler. A general intelligence is one that can identify when its predictions are wrong because of a fundamental error in its model, and self-correct through efficient trial and error. Under this definition, evolution against reality is a form of general intelligence. We can see that we can train intelligence given input and output examples, and that models can self-teach. What models don't have is a process for detecting when their model is fundamentally flawed, because they live inside the examples we give them and so are biased by their training data. Because they have no meta-agency to change how they are trained or to interact with the world, they have no vehicle to detect or edit these fundamentals.
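
A minimal sketch of what that detection could look like in code, assuming prediction error can be reduced to a scalar per observation; the window size and tolerance here are illustrative, not prescriptive:

```python
from collections import deque

class ModelMisfitDetector:
    """Flags when prediction errors stay persistently high, suggesting the
    world model itself (not just its parameters) may be fundamentally wrong."""

    def __init__(self, window: int = 100, tolerance: float = 0.1):
        self.errors = deque(maxlen=window)   # recent |prediction - outcome| values
        self.tolerance = tolerance           # error level the current model should reach

    def observe(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def model_is_suspect(self) -> bool:
        # Ordinary learning should push average error below tolerance; if it
        # stays high over a full window, trial-and-error revision of the
        # model's structure (not just its weights) is warranted.
        if len(self.errors) < self.errors.maxlen:
            return False
        return sum(self.errors) / len(self.errors) > self.tolerance
```

The point is the separation of concerns: ordinary learning drives error down within the existing model, while persistently high error over a whole window is the signal that the model itself needs revision.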

Intelligence clearly seems to live in the interpretation of the data, the higher-level process that takes place between sensing and acting; however, while it is possible to decouple that interpretation from sensing and acting, it is inefficient to do so.

At the end of the day the evolutionary process is intelligent as a broad process. The difference between it and mammals is one of speed of model transmission: humans, like other animals, can learn without DNA, so our iteration speed is higher. Along the way evolution patched in various optimizations for animals, giving rise to smarter creatures, culminating so far in humans but almost certainly not ending with us.

In order to patch in new senses and actuators you need to have an awareness of what you are, so that you can change it. This is what we call consciousness and a sense of self. There is nothing special about it, and nearly all animals have this awareness; it's simply a matter of degree.

So, let's create a practical and solvable definition of general intelligence, which I will call robotic general intelligence to imply a sense of action and movement in the real world.

To know you have created this, you can perform a simple test: we provide a preexisting model/base that can operate within a simple environment, but this preexisting model should be known to be incorrect in some way, and the agent should be able to correct its own model accordingly.

An intuitive example: a robot is placed in an environment with balls and is trained to place the balls in one of two bins. Unknown to the robot, it cannot see in color; bin one is for red balls while bin two is for white balls, and for the purposes of this test the balls look identical to its current vision. The intelligence should be able to construct a way to determine how the balls are different. Now, this is actually an extremely challenging test, and the odds of the smartest human on earth accomplishing it are very low; it is analogous to humans creating a Geiger counter. Is there something more basic we could start with to tackle these kinds of discrete changes? Even individual humans don't really have general intelligence under this definition. They are just able to learn models generally and attach them to themselves by learning from other humans, and then the occasional confluence of humans comes together to create and distribute some new model by inventing a new sense or actuator to explore and test it.

Ah, so instead what we should focus on is the ability for one intelligent agent to transfer a model to another and form a swarm general intelligence.

But how do we create the set of models that can be transferred in the first place? We’ve come full circle!

The problem with our robot is that it is impossible to distinguish the balls without a new sense and a meta process for incorporating that new sense. But inventing something new is tier-1 general intelligence, while tier 2 is simply downloading models from the inventors. In this view humans can stay as the tier-1 intelligence and robots can be the tier 2. Our problem then reduces to enabling a regular and cheap transfer of these models.

Let's update our test. A human can walk inside with a new camera and give it to the robot. The robot gets a new dimension to its visual sensor input: color. Now, simply given this new sense, it must edit itself to place the red and white balls into the right bins. Ideally a human could demonstrate the placement to speed this along, when desired by the tier-1 intelligence.
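
As a hedged sketch of what accepting that new sense could look like mechanically, assuming a small PyTorch classifier whose first layer is the only part tied to the number of sensor channels (the architecture, shapes, and training loop here are illustrative placeholders, not a prescribed design):

```python
import torch
import torch.nn as nn

class BallSorter(nn.Module):
    """Hypothetical bin classifier: input channels = sensor channels."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 2),            # bin one vs. bin two
        )

    def forward(self, x):
        return self.net(x)

# The tier-1 intelligence hands over a new sense: grayscale (1 channel) -> color (3 channels).
old_model = BallSorter(in_channels=1)
new_model = BallSorter(in_channels=3)

# Reuse everything except the first layer, which must accept the wider input.
state = old_model.state_dict()
state = {k: v for k, v in state.items() if not k.startswith("net.0")}
new_model.load_state_dict(state, strict=False)

# A handful of human demonstrations (image, correct bin) is then enough to
# fine-tune: graft the new sense, then self-edit briefly.
optimizer = torch.optim.Adam(new_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
demo_images = torch.rand(8, 3, 32, 32)       # stand-in for demonstration frames
demo_bins = torch.randint(0, 2, (8,))        # stand-in for demonstrated placements
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(new_model(demo_images), demo_bins)
    loss.backward()
    optimizer.step()
```

Only the input adapter is rebuilt; everything downstream is reused and fine-tuned on a handful of demonstrations, which is the cheap model transfer the test is after.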

This would be a superior test for intelligence, and you can consider a wide variety of permutations of it with other senses; the same robot should be able to accomplish those analogous tasks through self-editing. Then we can say we have made a form of general intelligence.

It's interesting to note that our definition implies and predicts different levels of general intelligence within groups of humans, based on the derived senses and actuators they have been exposed to and the models they have internalized. These give them a far broader suite of pattern recognizers to match against and let them templatize a far larger latent space for further transfer learning. Fundamentally and mechanically, the meta process of learning should be relatively constant, constrained only by short-term capacity and prior learning. Indeed this tends to hold up: learners learn in the same way, and there is nothing special about the relative effectiveness of modalities barring disabilities. To sum up, the smart get smarter because it becomes easier to learn new concepts when you have more models to use as templates for the transfer of information. Learning in general could then be expected to accelerate, and indeed you see this: once the fundamentals are mastered, everything downstream tends to have a faster learning rate.

The Problem

Intelligence is the ability to interpret inputs and produce outputs that alter the environment towards whatever evaluation method you have for good/bad.

General intelligence is the ability to construct new inputs (new sensors like an electron microscope) and novel outputs (tools) that alter the environment towards whatever evaluation method you have for good/bad. This capability, and the speed we are able to do it, is what separates human intelligence from (most) other animals.

I call this capability to install indirect sensors and new actuators Universal Learning, because we can arbitrarily generalize our model of the world and the action space, which translates to an ever-growing environment-manipulation space.

A Universal Learning Machine then can, without human intervention, learn to perform, or build derivative machines (tools) that can perform, any action set on animal or better timescales. For example, if given the directive to survive indefinitely, we should expect it to become a space-faring entity. As a near-term benchmark it should be able to perform any action set that existing life, or machines built by existing life, can perform. It should be able to build new tools, build new senses, and use these progressive enhancements to do things like building a skyscraper, building a spaceship, creating weapons, creating medicine, folding laundry, learning language, and checking your math homework. In this way, it is a universal learning machine, able to grow its own capability progressively as needed for its goal (target environment).

Contrast this with:

  • A machine interprets sensor data and drives actuators according to a ruleset.
  • A modern AI model learns a ruleset from data during training and is then (usually) statically deployed. Online models do exist, especially in reinforcement learning, but they are generally monolithic systems. The problem with our models today is that they tend to fail at few-shot learning and transfer learning, and they do not exhibit transfer from in-context learning to long-term learning and, from there, to efficient generalization.

A learning machine interprets sensor data and drives actuators to achieve target environment states, minimizing the error between the target environment (Te) and the perceived environment (Pe) over t from 0 → ∞.

This may sound similar to many so-called autonomous systems. The difference is the meta ability implied in “environment” of self-extending with new sensors and actuators. This is the core of general learning (process) that leads to general intelligence (capability).

In the remainder of this paper I will describe in greater detail how such a meta machine could be constructed.

The Thesis

A learning machine:

  1. detects when its sensor encodings mismatch what actually occurred (through backpropagation), and
  2. detects when its predicted actuator path failed to achieve the target environment state.

It then retrains itself once it has collected a batch of these mis-predictions, to minimize overall loss. This is the first tier of a learning machine; higher tiers could extend their own actuators and senses to minimize overall loss in driving towards the target environment. I refer to this as general intelligence, since in theory it can continue to improve without bound.

Using the following notation (a formal sketch of the objective follows the list):

  • t is the time
  • St is the spatial frame that is considered at a given time step t
  • S is a set of sensors at t
  • A is a set of actuators at t
  • E is the environment, including the learning machine's body and any other objects in the spatial frame
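
A minimal formalization of the objective in this notation, assuming some distance function d over environment states within the current spatial frame, and writing Te and Pe for the target and perceived environments as defined earlier:

$$
\min_{\{a_t \in A\}} \; \sum_{t=0}^{\infty} d\big(T_e(S_t),\ P_e(S_t)\big)
$$

Here Pe(St) is whatever the sensors in S report about E within the spatial frame St, and the minimization runs over the actions chosen from A at each time step.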

Under this definition it is clear that static models cannot lead to general learning, because they cannot self-correct their own errors. At a minimum, a learning machine needs the following components (sketched in code after the list):

  • a set of sense encoders that create a respective tensor in a given frame
  • an encoder that collapses the sense encodings into a unified model of the environment
  • a decoder which consumes the model of the environment, the available action set, and predicts a set of likely outcomes
  • a selector that picks a likely outcome, i.e. a prediction of the environment state, according to a policy or policy set
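
A minimal sketch of these components wired together, assuming each one is a trainable callable and that sense encodings are flat arrays; every name here is an illustrative placeholder:

```python
import numpy as np

def step(sense_encoders, world_encoder, outcome_decoder, policy,
         raw_senses, actions, target_env, replay_buffer):
    """One tick of the minimal learning machine described above."""
    # 1. Each sense encoder produces a tensor for the current frame.
    sense_codes = [encode(raw) for encode, raw in zip(sense_encoders, raw_senses)]

    # 2. Collapse the per-sense encodings into one unified model of the environment.
    env_model = world_encoder(np.concatenate(sense_codes))

    # 3. Decode a likely outcome for each available action.
    predicted_outcomes = {a: outcome_decoder(env_model, a) for a in actions}

    # 4. The policy selects which predicted outcome (and hence action) to pursue.
    chosen_action = policy(predicted_outcomes, target_env)

    # 5. Record the prediction so a later pass can compare it against what the
    #    sensors actually reported: the batch of mis-predictions from the thesis.
    replay_buffer.append((env_model, chosen_action, predicted_outcomes[chosen_action]))
    return chosen_action
```

Retraining is then a matter of replaying the buffer, scoring each stored prediction against what was later sensed, and updating the encoders, decoder, and policy on the accumulated error.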

These operating dimensions are directly established by the senses and actuators that the actor has evolved with, because these establish the “container” of all its cognition. Contrary to popular belief, we should not see humans as a universally general intelligence, because our reference frame, our “inner world”, our internal “physics engine”, does not encompass all levels of existence; instead it encompasses a middle world of existence. We are able, with great effort, to contort our processing to learn about these other levels of existence, but clearly we are not suited to it, and it is likely that we understand far less than we think we do.

That aside, we are not a general intelligence, because we do not generally understand everything across all planes of existence, and it is purely theoretical that we even have the capacity to.

Instead, we are a form of bounded general intelligence. We can generally build models of novel events in our environment and train ourselves, again within bounds, to change our behavior. Likewise all other life has its own boundaries, lower in dimensionality than our own, but generally all multi-celled animal life lives in, and is bounded by, a 3-D reference frame.

This reference frame is what I refer to as the “inner world”.

I propose that intelligence is simply the emergent ability to do the following:

  • render a target version of our inner world that corresponds to our operating environment — a root reference frame that describes the predictive heuristics for computing state changes given N object models operating within that environment
  • learn to take sequentially more effective actions that influence the actual world to move towards the target state of the environment

The first implies the following (a data-structure sketch follows the list):

  • the ability to render the reference frame i.e. the inner world
  • the ability to render and model the internals of various objects that sit in that reference frame
  • the ability to compute the likely interactions between various objects
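
A minimal sketch of such an inner world, assuming each object carries its own predictive heuristic and the reference frame only has to compose them; the 1-D positions and the crude collision rule are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class InnerObject:
    """An object model living inside the reference frame."""
    name: str
    position: float                  # 1-D for simplicity; extendable to 3-D
    velocity: float = 0.0

    def predict(self, dt: float) -> "InnerObject":
        # Each object carries its own heuristic for how it evolves over time.
        return InnerObject(self.name, self.position + self.velocity * dt, self.velocity)

@dataclass
class InnerWorld:
    """The root reference frame: composes object models into state changes."""
    objects: list = field(default_factory=list)

    def step(self, dt: float = 1.0) -> "InnerWorld":
        advanced = [obj.predict(dt) for obj in self.objects]
        # Crude interaction heuristic: objects that meet stop moving.
        for a in advanced:
            for b in advanced:
                if a is not b and abs(a.position - b.position) < 0.1:
                    a.velocity = b.velocity = 0.0
        return InnerWorld(advanced)
```

Rendering the target version of the inner world is then just constructing a second InnerWorld holding the states we want.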

The second implies the following (a planning sketch follows the list):

  • the ability to progress our model of the world forwards (simulation of different possibilities) or backwards (remembrance or simulation of alternatives for retraining)
  • process the difference between what our simulations gave us and what actually happens and work to reduce the difference
  • process what our senses tell us and update the inner world
  • use our actuators to change the environment following our projections that would achieve the target state
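
A hedged sketch of that loop, assuming an inner world object with a step() method (such as the InnerWorld sketch above) and hypothetical apply_action and distance_to_target callables supplied by the agent:

```python
import random

def plan_by_simulation(inner_world, candidate_actions, apply_action,
                       distance_to_target, horizon=5, rollouts=32):
    """Pick the first action of the simulated future that ends closest to the
    target environment state (simple random-shooting rollouts)."""
    best_first_action, best_score = None, float("inf")
    for _ in range(rollouts):
        world = inner_world
        plan = [random.choice(candidate_actions) for _ in range(horizon)]
        for action in plan:
            world = apply_action(world, action)   # actuator model
            world = world.step()                  # inner-world "physics"
        score = distance_to_target(world)
        if score < best_score:
            best_first_action, best_score = plan[0], score
    return best_first_action
```

The same machinery can also run backwards for retraining: replay a remembered episode through the inner world and score the difference between what it predicted and what the senses recorded.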

There are of course massive layers of complexity due to the amount of compute needed. There are three high-level challenges:

  • continuously re-render what is happening as the agent takes actions in the world
  • the cost of simulating potentially infinite futures
  • collating and organizing our episodic experiences and back-training on them

In life we see that all of these states are more or less mutually exclusive, i.e. doing, thinking, and sleeping respectively. Of course there are numerous advantages that silicon can have here, where these three activities could be interleaved more tightly than in biological systems. But let's defer these compute constraints until we have decided on the reference frame we'd like to test, i.e. how many dimensions and what the dimensions are.

Now, before we talk about specific strategies, we should start with the simplest testable experiment for this thesis. I prefer to discuss reference frames native to us (1-, 2-, and 3-D in space-time) so that (1) it is clear and obvious how to set goal states, i.e. by ghosting an overlay of the internal state as we would like it to be, and (2) it is obvious how to interpret what the model is doing and to give it feedback, demonstration, and expertise on how to do it better.

The beauty of this idea is that it resolves numerous unsolved problems in AI. When considering a 3-D intelligence with this thesis it becomes clear how to approach:

  • alignment — setting goals and alignment of AI with the controller becomes obvious and intuitive — pick and place an environment change similar to level editing changes in a game engine
  • continuous learning — self-defining error correction simply becomes the delta between the projected likely states and what was sensed to have occurred (see the sketch after this list)
  • transparency — when the reference frame of the agent corresponds to our own we can easily introspect and play back its memories to see what it was doing, what error correction it worked on, what goal states it was pursuing, and who set those goal states, though this becomes more opaque the further into the future these goals reach and depending on whether they are recursively defined
  • clarity of progression — with this thesis in mind it is readily apparent how to go about testing and building such systems; we can start by using existing simulation engines and narrow AI models for objects, then use standard RL and other techniques for building the various key subsystems, i.e. driving actuators, sparsely encoding sensory input, executive planning (temporal RL models), and simulating the future via the physics engine / inner-world reference-frame runtime
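
A minimal sketch of the first two points, assuming the environment is represented as a small occupancy grid: the goal is a "ghost" copy of that grid edited the way we want it, and the self-defining error signal is simply the per-cell delta (grid size and contents are illustrative):

```python
import numpy as np

# Perceived environment: a tiny 3-D occupancy grid built from the sensors.
perceived = np.zeros((4, 4, 4), dtype=np.int8)
perceived[0, 0, 0] = 1                      # a block sitting at the origin

# Goal state set by pick-and-place "level editing": the block one level up.
ghost_target = np.zeros_like(perceived)
ghost_target[0, 0, 1] = 1

# Alignment signal and continuous-learning error: the delta between the grids.
delta = ghost_target - perceived
cells_to_change = int(np.abs(delta).sum())  # 2 until the block is moved
print("cells to change:", cells_to_change)
```

Transparency follows for free: both grids live in our own reference frame, so playback is just rendering the perceived grid over time next to the ghost target.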

Overview

  • here is a look at the learning machine and how it works

Universal Learning Machines

Encoding World Models

Decoding Future Worlds

Taking Action

Learning

Safety

The Experiment

The Approach

I would propose that the first test of the thesis start with a reference frame that is cheap enough to compute that we do not need to do difficult optimizations or stack other secondary problems on top of it.

Using easy-to-visualize spaces over abstract spaces is preferred, as it makes it easy for us to understand and direct the models.

You can think about this roughly as tracing the evolution of intelligence, starting from single cells that operate by navigating chemical gradients all the way through 3-D navigation in complex environments. The analogy is not perfect, since technically the world is 3-D for all life, but we can still go from 1-D to 3-D and then consider hyperdimensional spaces once we have reached sub-human or roughly human intelligence. However, the complexity of the reference frame is not the only consideration; the complexity of the agent also matters a great deal, i.e. what its senses are, what its thinking power is, and how complex the other objects it interacts with are.

On the simplest side, 1-D is the simplest reference frame: a single-sense, single-actuator actor with no other actors. As an aside, we will assume time as an implicit dimension here, and up to 3-D we can take the reference frame to mean a volumetric space. Beyond that we consider other dimensions in our reality, and take the position that human reality is well beyond 3-D and is actually hyperdimensional: heat and radiation are dimensions, for example, and humans have further dimensions of operation due to our social nature.

Still, from this we can see the following key knobs of complexity, roughly from more impactful to less impactful (captured in a configuration sketch after the list):

  • the complexity of the reference frame from 1 to N-D where N could be a hyper dimensional abstract space or simply the 3-D world plus other dimensions like heat etc.
  • the complexity of the objects the agent interacts with in the reference frame, especially social vs. non-social, i.e. game theory (this is an important consideration, and with this model we are advantaged in predicting other agents by simply projecting our internal model, with tweaks, as the model of the other agent: a multi-self view of social intelligence)
  • the span of the temporal frames under consideration (both forwards and backwards)
  • the complexity of the agent's sensory input, which sets the dimensions it uses to build its internal model of the world; for humans there are at least a dozen
  • the complexity of the actuators available to the actor
  • the fidelity of representation of the reference frame i.e. sparse vs. dense encoding
  • the alignment fidelity, i.e. the tooling available to describe a target state
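
These knobs can be captured directly as an experiment configuration to sweep over; a sketch, with field names that are illustrative rather than fixed:

```python
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    """One cell of the complexity matrix sketched in the roadmap below."""
    reference_frame_dims: int = 1          # 1 up to N-D, possibly hyperdimensional
    object_complexity: str = "none"        # "none", "passive", or "social"
    temporal_span: int = 1                 # frames considered forwards and backwards
    num_senses: int = 1                    # sensory channels feeding the inner model
    num_actuators: int = 1
    frame_encoding: str = "sparse"         # "sparse" or "dense" representation fidelity
    alignment_tooling: str = "ghost-overlay"  # how target states are described

# The first roadmap entry, the 1-D lonely amoeba, is then simply the defaults.
lonely_amoeba = ExperimentConfig()
```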

It's worth noting that, for simplicity, we can compress the reference frame and bound it by the sensory model implied by the agent (which will obviously build bad models of the world; hello, our flawed ideas about derived dimensions!). But it's more representative to include derived “senses”, i.e. dimensions that are perceived through the combination of other “sense” processing. In the “minds” of our intelligence we can consider these “virtual” or “derived” dimensions of its reference frame, and because of that they may not update in real time but lag. This also tells us something profound about human intelligence and how it works, via a narrower definition of general intelligence: if we have any form of general intelligence, it is simply the ability to derive hyperdimensionality as a “sense” through “constructed” senses, i.e. measurement tools, logic, etc. This makes it apparent why these “new”, so-called “abstract” modes of thinking are so time-intensive to understand: we have to build a “sense” from scratch and incorporate that sense into our reference frame of the world, usually through analogy.

Roadmap

Now, to define a potential roadmap we can build a matrix of varying degrees of complexity among these knobs, starting with the least complexity. One item worth learning is where the complexity scales most; changes to the reference frame and temporal frame seem most likely, since they put an upper bound on the environment. The first, simplest entry is sketched in code after the table.

| Dimension of Complexity | 1-D Lonely Amoeba |
| --- | --- |
| Description | Amoeba responds to chemical gradients in 1-D |
| World: Actual World | Virtual line with UV gradients |
| World: Objects | None |
| World: Social | None |
| Body: Senses (1 to N) | 1 UV sensor |
| Body: Actuators (1 to N) | Abstract tail |
| Learning: Offline to Online | Offline |
| Safety: Sensor Recordings | No |
| Safety: Camera Recording | Yes |
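
A minimal sketch of this first roadmap entry, assuming the world is a line with a UV gradient and the agent's single abstract actuator nudges it left or right; the numbers and the blind random policy are illustrative:

```python
import random

class LonelyAmoebaWorld:
    """1-D line with a UV gradient; the amoeba should learn to swim away from UV."""

    def __init__(self, length: float = 10.0):
        self.length = length
        self.position = length / 2            # start in the middle of the line

    def uv_sensor(self) -> float:
        # The single sense: UV intensity rises linearly along the line.
        return self.position / self.length

    def act(self, tail_push: float) -> None:
        # The single abstract actuator: push left (negative) or right (positive).
        self.position = min(self.length, max(0.0, self.position + tail_push))

world = LonelyAmoebaWorld()
log = []                                      # experience log for offline learning
for _ in range(50):
    reading = world.uv_sensor()
    push = random.choice([-0.5, 0.5])         # blind trial and error for now
    world.act(push)
    log.append((reading, push, reading - world.uv_sensor()))  # reward: drop in UV

# Offline phase: a learner would fit "push away from high UV" from the log.
print("final UV reading:", round(world.uv_sensor(), 3))
```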

Experiments

What does a shallow pass of the Inner World Thesis tell us about its feasibility?

What if you didn’t know the difference between the inner world and the real world? Could planning be as simple as navigating the inner world into the future?