Friday, December 6, 2013

Putting together GPU Workstation

I am setting up a GPGPU workstation for prototyping neural nets. It's an interesting time in history: this will likely be the last digital, von Neumann machine I use for this work. It is a good, fast number cruncher, but it is the wrong type of machine for ANNs; a bad fit. Four little islands of 4–6 GB of RAM spinning their wheels hyper-fast, with interconnects slow as mud in comparison. PCIe 3.0 doesn't come close, and honestly PCIe 4.0 won't make any real difference.
My ideal is one extremely large, homogeneous 'brain'. That would give me much less work to do. Instead I will have to come up with algorithms that cope with very fast sub-nets joined by slow connections. I will use compression techniques for communication between the GPUs and find ways to architect around the hardware. The 4–6 GB sub-nets will have to adapt with minimal information from the rest of the network. A rasterizing technique won't be efficient, since the buses and hard drives are far too slow for the GPUs, whose speed would go unutilized.
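To make the compression idea concrete, here is a minimal sketch of one such scheme - plain 8-bit quantization of the activation vectors that cross the bus between sub-nets. It cuts bus traffic 4x at the cost of a bounded reconstruction error. The names and sizes are illustrative, not part of any real pipeline:

```python
import numpy as np

def quantize(acts):
    """Compress float32 activations to int8 plus one scale factor (4x smaller)."""
    scale = float(np.max(np.abs(acts))) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero vector: any scale reconstructs it exactly
    q = np.round(acts / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct an approximation of the original activations on the far side."""
    return q.astype(np.float32) * scale

# A hypothetical border vector passed from one GPU sub-net to another.
acts = np.random.randn(4096).astype(np.float32)
q, scale = quantize(acts)
approx = dequantize(q, scale)

print(acts.nbytes, "->", q.nbytes)  # bytes on the wire: 16384 -> 4096
# Per-element error is bounded by half a quantization step (0.5 * scale).
print(float(np.max(np.abs(acts - approx))) <= scale)
```

In practice the scale factor travels with the payload, and coarser schemes (top-k sparsification, delta coding against the previous frame) can push the ratio further if the sub-nets tolerate it.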

We are now coming into the era of neuromorphic hardware: Qualcomm is releasing its "Zeroth" processors, and I imagine that before long other gear now in labs will become commercially available.
From now on the bulk of processing will be analog. Quite a change. When I was a kid I loved the 'Science & Invention' encyclopedia set my Dad got. Looking at the analog computer entry, I thought 'how old-fashioned'. Digital was new, modern, space-age and high-tech; analog was rusty and a kludge. Digital has had the spotlight for decades now. But with the work of Carver Mead and his students (Sarpeshkar, Boahen, etc.), analog is not just 'in vogue' but is inevitably the permanent winner for the majority of this type of low-power, high-throughput processing. To be clear, it's not just analog, but mixed analog/digital (mostly analog).

I'm also going to give litecoin mining a shot. If I can make back some money and pay off all this gear, great; I could use it. So I bought the GPUs and other parts early. I was going to bargain-shop next year, but the litecoin hype has got me going. I'm late getting into the game; we'll see what happens.

Wednesday, January 16, 2013

A.I. Design Notes 1

Here is a brief outline of some of my views on AGI design which I'll expand on in time.

Noise is Necessary.
Noise is an essential ingredient for any reasonably intelligent system. This is less controversial to say now than it was a few years ago, given recent findings in several fields. Maybe you can have limited success in limited domains with a noise-free brain, but that's about it. The deltas that come from a rich, complex environment can supply much of what an agent's brain needs, but again I suspect the limitations will show up and become more pronounced as time goes on.
We know, for example, that hearing implants suffer an immediate loss of fidelity if the implant is not a noise-sensitive circuit. And I believe the need for noise only grows as an agent becomes more advanced and its brain becomes larger.
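As a toy illustration of how noise can add capability rather than subtract it, here is a sketch of stochastic resonance - the kind of effect often cited in the context of noise-sensitive implant circuits. A sub-threshold signal that a bare threshold detector never sees becomes detectable once noise is injected; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A weak periodic signal: peak amplitude 0.4 never reaches the 1.0 threshold.
t = np.arange(2000)
signal = 0.4 * np.sin(2 * np.pi * t / 100.0)

def detector(x, threshold=1.0):
    """A crude threshold neuron: fires (1) whenever input exceeds threshold."""
    return (x > threshold).astype(int)

silent = detector(signal)                                  # no noise
noisy = detector(signal + rng.normal(0, 0.5, size=t.shape))  # noise injected

print(silent.sum())      # 0 - the signal alone is invisible to the detector
print(noisy.sum() > 0)   # noise lifts the peaks over threshold

# Spikes cluster at the signal's peaks, so the noisy spike train
# carries real information about the hidden signal.
corr = np.corrcoef(noisy, signal)[0, 1]
print(corr > 0.05)
```

The noise doesn't merely corrupt the channel; at the right level it is what makes the channel exist at all.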

Open Architecture & Self Organization.
One take on this idea is Andy Clark's 'Natural Born Cyborgs', with his great line 'everything leaks'. I see it as going up the meta-algorithm hierarchy: arbitrary algorithms and the related brain architecture are induced via development. I had my own sayings - 'the intelligence is in the data', or 'the intelligence is already out there'. More on this later.

Sameness & Difference
Both are necessary. However, since the environment gives us plenty of free deltas, and noise adds more, the brain can be biased toward 'integrating' and 'unifying' functions. This saves on workload and required resources.
With sensory deprivation we can see a limited balancing response by the human brain, due to the lack of deltas. This also suggests the open architecture of the human brain described above - it is not a wholly-specified 'unifying machine'. The behaviour is induced, and there are higher meta-algorithms that determine the architecture.

Given the last two points: if your brain design has a diagram with boxes and arrows, then in my opinion it's suspect. If AI designers try to specify the structure of a brain, they run the risk of all sorts of pathologies. For such a highly adaptive system, they may end up being a necessary component of the machine itself - constantly intervening to re-establish the intended structure and functionality. A cog in their own machine. As a human being, the designer (or engineering team) becomes the bottleneck, the weakest link, in such an extremely complex, high-throughput machine.
I consider the necessary meta-algorithms to be primarily a function of the agent's embodiment. True, there is a downside: the cost of a less specified, more adaptive brain is the resource consumption required for self-organization.

Aside: a while back (2001) Jordan Pollack wrote an article calling software 'a cultural solvent', and I like that metaphor. Even more so, continued IT and broad technological advancement can be thought of as a material solvent. Briefly looking at the article again, he seems to allude to that. As above, the brain in its 'integrating' role can also be seen as a 'solvent'.

More is Better
Peter Norvig has recently pointed out that the performance of several traditional AI algorithms goes up dramatically once a certain threshold of size is reached - training data set, training time, and machine size all increased together.
This lines up with my own work. Several years ago, while I was very eager and active on my project, I got that terrible, sinking feeling as I began to realize what resources would be needed to make something with non-trivial performance. The human brain is massive for a reason. And the scaling appears unbounded.

That's it for now.
