This is the first of a series of articles that will present the developmental stages of Aggie (an implementation of the minimal artificial general intelligence described in Artificial General Intelligence) in pseudocode/algorithm form. The algorithm should be sufficient for a programmer to implement Aggie in their preferred programming language and to follow along or start a fork of their own. Working Python code will exist before each article is posted. Collaboration, comments, criticism and general yea- or nay-saying are all welcome and invited.

Aggie 1 is the simplest possible AGI and exists in the simplest possible world, but there is a clear and conceptually simple path of development for Aggie, and her abilities will increase in number and complexity to deal with correspondingly larger and more complex worlds.

To create an evolutionary perspective, imagine Aggie 1 as the first prokaryote, and genuine, complete AGI as Homo sapiens sapiens. Or, on the grandest scale possible, Aggie 1 is like the instant after that seminal pool break called the big bang and AGI is the universe of right now. In summary, 'This is the start. There is a long, long, long way to go.'  By the time Aggie 9001 arrives in our own real world, she will be capable of wresting control of the pod bay doors from HAL, scolding his bad behavior, and schooling him on being better at what he does.

tl;dr for the bloated description in Artificial General Intelligence:

The world is composed of states and events, which cluster into properties and processes, which cluster into objects. Processes relate properties to other properties (and to objects, which boil down to properties at any particular 'now').
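This ontology is easy to picture as plain data structures. The sketch below is purely my own rendering of the author's terms (the names are not from the Aggie implementation): a property is a set of possible states, a process is a function that maps states to states, and an object is a bundle of properties at the current 'now'.

```python
# The ontology as plain Python structures (my own naming, for illustration).
# A property is a set of possible states; a process is a function that
# relates properties to properties; an object is a bundle of properties.

properties = {
    "time": {"now"},                 # time has a single state, replaced each tick
    "location": {"loc1", "loc2"},    # location has exactly two states
}

def move_process(state):
    """A process relates properties to properties: here it toggles location."""
    return {"loc1": "loc2", "loc2": "loc1"}[state]

aggie = {"location": "loc2"}         # an object boils down to its properties 'now'
aggie["location"] = move_process(aggie["location"])
print(aggie)  # {'location': 'loc1'}
```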

Don't want to slog through the algorithm presented at the end?

Read this plain description instead (or before).

Properties:

The minimal world has only two properties, time and location, which have associated processes that can change their states. The time property is unique: it has only one state ('now'), which disappears and is replaced by a new 'now' with each tick of the world clock. Past 'nows' are gone and cannot be accessed from the current 'now'. Intelligences (like Aggie 1) can record certain world states in the current 'now' in a memory of some sort. The location property has only two states ('location 1' and 'location 2'). There is only one object in the world, and that is Aggie 1. (If this is gibberish to you, reading Artificial General Intelligence might help. Or not.)
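Under these assumptions the entire world state fits in a couple of variables. This is a minimal sketch with my own names, not the author's code:

```python
# The world's two properties, held as simple Python values.
# 'time' has a single state, the current 'now', modeled here as a counter
# whose past values are simply gone once it advances.
# 'location' has two states; the one object (Aggie 1) occupies one of them.

now = 0                                    # the only time state: the current tick
location = {"loc1": "", "loc2": "Aggie1"}  # Aggie 1 starts at location 2

def tick():
    """Advance time: the old 'now' vanishes and a new one replaces it."""
    global now
    now += 1

tick()
print(now, location)  # 1 {'loc1': '', 'loc2': 'Aggie1'}
```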

Processes:

The world time process simply issues 'ticks' to all processes in the world. Like the escapement of a clock, a tick is a quantized unit of continuous time that provides a basis for measuring change, thus defining states and events. The tick provides a signal that tells each object (Aggie 1 in this case) the current state of the world properties. Aggie 1 detects the world state (location), decides whether she is happy, stores the state as the final state of one experience and the initial state of a new experience, decides on an action (move or not move) to seek or maintain happiness based on the current state and past experiences, and returns her action to the world time process, which ticks the world processes to update the world state. This keeps happening, over and over, until the end of time.
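The cycle just described can be sketched as a single function. The names here (and the stand-in agent, which moves at random rather than learning) are my own assumptions, not the author's implementation:

```python
import random

class RandomAgent:
    """Stand-in for Aggie 1: no learning yet, it just moves or not at random."""
    def clock(self, location):
        return random.choice([True, False])

def world_tick(location, agent):
    """One tick: show the agent the world state, collect its action, update the world."""
    move = agent.clock(location)            # agent detects, decides, returns an action
    if move:                                # moving = toggling the location state
        location["loc1"], location["loc2"] = location["loc2"], location["loc1"]
    return location                         # the new world state for the next tick

location = {"loc1": "", "loc2": "Aggie1"}
for _ in range(5):
    location = world_tick(location, RandomAgent())
print(location)  # Aggie1 occupies exactly one of the two locations
```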

To be continued: (Aggie 2)

This minimal agent in a minimal world is trivial, but the architecture is complete, and this demonstrates both that it works and how. From here, Aggie will face increasingly large and complex worlds, and will become increasingly complex in design to meet the challenge. Specifically, that will involve shifting from a closed, static, deterministic world to an open, dynamic, probabilistic world, adding properties and processes to the world, and adding properties (including needs, sensors, effectors and other internal states) to Aggie. Aggie's Learn process and knowledge state will also need upgrading to enable more cognitive functions, such as generalization, dynamic prioritizing, hypothesizing, planning and whatever other capabilities serve to satisfy her needs more effectively and efficiently in whatever particular world she inhabits. Yes, there is a long way to go, but I believe this path will get us there.

Will Aggie meet her challenges? Tune in to the next episode, 'Aggie 2'.

 


Aggie 0.1 algorithm

define class World:
	time = True
	location = (loc1 = "", loc2 = "Aggie1")   // Aggie is at location 2 in the world
	clock(location):
		// Call the clock process of the Aggie object with location as argument to get the object's reaction
 		object_move = Aggie1.clock(location) 
		// Call the world move process with Aggie's action to set the world state if needed
		location = move(object_move):		
			if the object moved, toggle the location state

define class Aggie1(instance_name):
	name = instance_name
	need = False
	move = False
	detector = False
	effector = False
	preferred_state =  (loc1 = "Aggie1", loc2 = "")

	define class Learn(instance_name):	
		// This is implemented as a class to encapsulate the knowledge store and the processes that affect it
		// and because the two learning processes are called at different times 
		name = instance_name
		knowledge = ()
		// An experience is formed in two stages across two ticks. When a pending experience
		// is completed in the second tick, it is added to the knowledge store. 
		pending_experience = {'initial':None, 'move':None, 'final':None}	// Blank experience: initial need state, action, final need state

		complete_pending_experience(need_state):
			set the final state of pending_experience to the need state
			if the completed experience is not in knowledge, add it
			create a new pending experience with its initial state set to the need state
		get_from_experience():
			search knowledge for an experience whose initial state matches the current need state
			if one is found with a good (need satisfied) outcome, return it
			
	clock(location):
		// Call Aggie's own processes in order
		detector = detect(location):
			// Decide if Aggie1 is in the preferred location
			if location is the preferred state, detector = True
			else, detector = False
			return detector
		myLearn.complete_pending_experience(need)
		move = motivate()
		act(move)
		return move
	motivate():
		// Search knowledge for an existing experience that has a good outcome
		move = myLearn.get_from_experience( )
		if one is returned:
			extract the action
		else:
			choose to move or not randomly
		set the action in the pending experience
		return the action
	act(move):
		effector = move
					
// Main loop
time = True					 // Let there be time
while time is True:
	time = World.clock(World.location)	// Tick the world clock process once
	// Exit condition: time is no longer True - the end of time, bye-bye world
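For readers who would rather not wait for the posted code, here is one way the whole algorithm might be fleshed out in Python. Everything beyond the pseudocode above — the class layout, the simplified experience triples, the method names — is my own guesswork, not the author's implementation:

```python
import random

class Learn:
    """Knowledge store: experiences are (initial state, action, final state) triples."""
    def __init__(self):
        self.knowledge = []
        self.pending = {"initial": None, "move": None, "final": None}

    def complete_pending_experience(self, need_state):
        self.pending["final"] = need_state          # close out the pending experience
        exp = (self.pending["initial"], self.pending["move"], self.pending["final"])
        if exp[0] is not None and exp not in self.knowledge:
            self.knowledge.append(exp)              # remember only novel experiences
        self.pending = {"initial": need_state, "move": None, "final": None}

    def get_from_experience(self):
        # Search for a past experience that starts from the current need state
        # and ends with the need satisfied (final need state False).
        for initial, move, final in self.knowledge:
            if initial == self.pending["initial"] and final is False:
                return move
        return None

class Aggie1:
    def __init__(self, name="Aggie1"):
        self.name = name
        self.need = False
        self.effector = False
        self.preferred_state = {"loc1": "Aggie1", "loc2": ""}
        self.learn = Learn()

    def clock(self, location):
        detector = (location == self.preferred_state)  # am I in my happy place?
        self.need = not detector                       # unhappiness is a need to act
        self.learn.complete_pending_experience(self.need)
        move = self.motivate()
        self.act(move)
        return move

    def motivate(self):
        move = self.learn.get_from_experience()
        if move is None:
            move = random.choice([True, False])        # no useful experience: explore
        self.learn.pending["move"] = move              # record the action taken
        return move

    def act(self, move):
        self.effector = move

class World:
    def __init__(self):
        self.location = {"loc1": "", "loc2": "Aggie1"}
        self.aggie = Aggie1()

    def clock(self):
        move = self.aggie.clock(self.location)
        if move:  # if the object moved, toggle the location state
            self.location["loc1"], self.location["loc2"] = \
                self.location["loc2"], self.location["loc1"]

world = World()
for _ in range(10):     # ten ticks of world time
    world.clock()
print(world.location)   # with a little luck, Aggie has settled in loc1
```

Because the exploratory moves are random, the exact tick on which this sketch settles will vary from run to run, but it reproduces the overall shape of the output below: random moves at first, then repeated reuse of the experience that keeps the need satisfied.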

The output from running my implementation in Python shows (quite unsurprisingly) that Aggie 1 learns to find her happy place and stay there within several ticks:

Tick 1

Added new experience () 

Using experience ()

Tick 2

Added new experience ({'detector': False, 'need': True}, {'move': False}, {'detector': False, 'need': True})

Using experience ()

Tick 3

Already experienced this.

Using experience 

Tick 4

Already experienced this.

Using experience

World state changed to Aggie 1,''

Tick 5

Added new experience ({'detector': False, 'need': True}, {'move': False}, {'detector': False, 'need': True}), ({'detector': False, 'need': True}, {'move': True}, {'detector': True, 'need': False})

Using experience ()

Tick 6

Added new experience ({'detector': False, 'need': True}, {'move': False}, {'detector': False, 'need': True}), ({'detector': False, 'need': True}, {'move': True}, {'detector': True, 'need': False}), ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})

Using experience ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})

Tick 7

Already experienced this

Using experience ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})

Tick 8

Already experienced this.

Using experience ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})

Tick 9

Already experienced this.

Using experience ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})

Tick 10

Already experienced this.

Using experience ({'detector': True, 'need': False}, {'move': False}, {'detector': True, 'need': False})