Wednesday, January 16, 2013

A.I. Design Notes 1

Here is a brief outline of some of my views on AGI design, which I'll expand on in time.

Noise is Necessary.
Noise is an essential ingredient of any reasonably intelligent system. This is less controversial to say now than it was a few years ago, given recent findings in several fields. Maybe you can have limited success in limited domains with a noise-free brain, but that's about it. The deltas that come from a rich, complex environment can provide much of what an agent brain needs, but again I suspect the limitations will show up and become more pronounced as time goes on.
We know, for example, from cochlear implants that there is an immediate loss of fidelity in hearing if the implant is not a noise-sensitive circuit. As an agent becomes more advanced and its brain grows larger, I believe the need for noise only grows.
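As a toy illustration of why noise can help (my own sketch, not a model of any implant circuit), here is a minimal stochastic-resonance demo in Python: a sub-threshold signal is invisible to a hard threshold detector with no noise, becomes detectable with a moderate amount of noise, and drowns again when the noise is too strong.

```python
import numpy as np

# Minimal stochastic-resonance sketch: a sub-threshold sine wave is
# invisible to a hard threshold detector until noise is added, and is
# drowned out again when the noise is too strong.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 10_000)
signal = 0.8 * np.sin(2.0 * np.pi * t)   # peak 0.8, below the threshold
threshold = 1.0

def detection_correlation(noise_std):
    """Correlation between the detector's spike train and the clean signal."""
    noisy = signal + rng.normal(0.0, noise_std, size=t.shape)
    spikes = (noisy > threshold).astype(float)
    if spikes.std() == 0.0:              # detector never fires: no information
        return 0.0
    return float(np.corrcoef(spikes, signal)[0, 1])

for std in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"noise std {std:.1f} -> correlation {detection_correlation(std):.3f}")
```

The correlation is zero without noise, peaks at an intermediate noise level, and falls off as noise dominates - the classic stochastic-resonance curve.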

Open Architecture & Self-Organization.
One take on this idea is Andy Clark's 'Natural-Born Cyborgs', with his great line 'Everything Leaks'. I see it as going up the meta-algorithm hierarchy: arbitrary algorithms and the related brain architecture are induced via development. I had my own saying - 'the intelligence is in the data', or 'the intelligence is already out there'. More on this later.

Sameness & Difference
Both are necessary. However, since the environment gives us plenty of free deltas, and noise adds more, the brain can be biased toward 'integrating' and 'unifying' functions. This saves on workload and required resources.
With sensory deprivation we see a limited balancing response by the human brain, due to the lack of deltas. This also suggests the open architecture described above - the human brain is not a wholly-specified 'unifying machine'. The behaviour is induced, and there are higher meta-algorithms that determine the architecture.

Given the last two points: if your brain design has a diagram with boxes and arrows, then in my opinion it's suspect. If an AI designer tries to specify the structure of a brain, they run the risk of all sorts of pathologies. For such a highly adaptive system, they may end up being a necessary component of the machine itself - they will constantly have to intervene to re-establish the intended structure and functionality. A cog in their own machine. As a human being, the designer (or engineering team) becomes the bottleneck, the weakest link, for such an extremely complex, high-throughput machine.
I consider the necessary meta-algorithms to be primarily a function of the agent's embodiment. True, there is a downside: the cost of a less specified, more adaptive brain is the resource consumption required for self-organization.
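As a toy sketch of structure being induced rather than specified (my example, not a design proposal), here is a minimal self-organizing map in Python. There is no boxes-and-arrows diagram of the final layout; the grid's organization emerges from the data it is fed, at the cost of the training compute noted above.

```python
import numpy as np

# Toy self-organizing map: a grid of weight vectors organizes itself to
# mirror the structure of the input data. No module diagram is specified
# up front; the map's final layout is induced by the data it sees.
rng = np.random.default_rng(0)

def train_som(weights, data, epochs=20, lr0=0.5, radius0=5.0):
    side = weights.shape[0]
    coords = np.stack(np.meshgrid(np.arange(side), np.arange(side),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                # decaying learning rate
        radius = radius0 * (1.0 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit: the node whose weights are closest to x
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dist.argmin(), dist.shape)
            # pull the BMU and its grid neighbours toward the input
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2.0 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

colors = rng.random((200, 3))                # unstructured input: random RGB
som = train_som(rng.random((10, 10, 3)), colors)
# After training, neighbouring nodes hold similar colors: order from data.
```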

Aside note: a while back (2001) Jordan Pollack had an article calling software 'a cultural solvent', and I like that metaphor. Even more so, continued IT and broad technological advancement can be thought of as a material solvent. Briefly looking at the article again, he seems to allude to that. As above, I see that the brain in its 'integrating' role can also be seen as a 'solvent'.

More is Better
Peter Norvig has recently pointed out that the performance of several traditional AI algorithms goes up dramatically once a certain threshold of scale is crossed - with the training data set, training time, and machine size all increased together.
This lines up with my own work. Several years ago, while I was very eager and active on my project, I got that terrible, sinking feeling as I began to realize what resources would be needed to build something with non-trivial performance. The human brain is massive for a reason. And this scaling appears to be unbounded.
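A minimal learning-curve sketch of the 'more is better' effect (my toy setup, not Norvig's experiment): even a crude 1-nearest-neighbour classifier on synthetic data keeps improving as the training set grows.

```python
import numpy as np

# Learning-curve sketch: accuracy of a crude 1-nearest-neighbour
# classifier on synthetic two-cluster data, as the training set grows.
rng = np.random.default_rng(0)

def make_data(n):
    labels = rng.integers(0, 2, size=n)
    # two overlapping Gaussian clusters, centred at (0,0) and (1.5,1.5)
    points = rng.normal(loc=labels[:, None] * 1.5, scale=1.0, size=(n, 2))
    return points, labels

test_x, test_y = make_data(500)

def accuracy(n_train):
    train_x, train_y = make_data(n_train)
    # 1-NN: label each test point with its nearest training point's label
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=-1)
    pred = train_y[d.argmin(axis=1)]
    return (pred == test_y).mean()

for n in (10, 100, 1_000, 5_000):
    print(f"{n:>5} training points -> accuracy {accuracy(n):.3f}")
```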

That's it for now.
