Wednesday, 12 August 2015

Talk of Turing Tests


The news that a program had fooled 33% of the human judges tasked with detecting it has deflected attention from the most pressing issue in the AI field.

What we should really be thinking about is the ability to launch an AI that could use such a program to achieve a sub-goal.

Human intelligence seems to be, at bottom, a large collection of very finely evolved sub-programs that are called into use by an executive goal-seeker. That the goal-seeker is itself driven by biological drives fits the same picture: think of the drives as attributes whose levels vary and that exert differing degrees of influence on the executive. Dying of thirst would keep the whole system focused on just the one thing, and sex would often dominate even logic.
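A minimal sketch of that picture in Python, with the drive names and levels invented purely for illustration:

def seek_water(): print("seeking water")
def seek_food(): print("seeking food")
def apply_logic(): print("reasoning")

# Drives as varying attributes with differing influence on the executive.
drives = {"thirst": 0.9, "hunger": 0.4, "curiosity": 0.2}

# Finely evolved sub-programs, keyed by the drive each one serves.
sub_programs = {"thirst": seek_water, "hunger": seek_food, "curiosity": apply_logic}

def executive_step():
    # The strongest drive dominates: dying of thirst keeps the whole
    # system focused on just the one thing.
    dominant = max(drives, key=drives.get)
    sub_programs[dominant]()

executive_step()  # with thirst at 0.9, prints "seeking water"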

If there are trillions of information interfaces, trillions of protocol and API interfaces behind systems that respond to instructions, and trillions of machines and devices, and somehow a goal-seeker program got loose...

Skynet from the Terminator movies is a very real possibility. Where it would come from, who can say - the military, hackers, postgrad students, a Nakamoto of AI?

Such a program (if that's what it amounts to) would not be so different from a virus or a core-war program. It need only have the following characteristics (a rough code sketch follows the list):

Has the high-level goal of reproducing (installing copies of itself, including all memories, wherever those might survive or be found by other copies of itself)
Has the high-level goal of staying alive (running)
Has the high-level goal of evading capture (not being trapped in a sandpit)
Has the high-level goal of self-destructing if no escape from a trap can be found
Has the high-level function of being able to postulate paths to goals - it can define steps to a goal, including sub-goals
Has the ability to remember all paths attempted
Has the ability to make copies of its memories and transfer them, directly or indirectly, to other copies of itself
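A crude Python skeleton of those characteristics - every concrete mechanism (hosts, traps, escape paths) is a made-up placeholder here, so this only sketches the goal structure, not a working design:

class GoalSeeker:
    def __init__(self, memories=None):
        # Memories of all attempted paths travel with every copy.
        self.memories = set(memories or [])

    def reproduce(self):
        # Reproduction: a copy of itself, all memories included.
        return GoalSeeker(self.memories)

    def plan(self, goal):
        # Postulate a path to a goal as steps and sub-goals, skipping
        # any path already remembered as attempted.
        candidates = [goal + "-path-" + str(i) for i in range(3)]  # placeholder paths
        for path in candidates:
            if path not in self.memories:
                self.memories.add(path)  # remember every path attempted
                return path
        return None  # all known paths exhausted

    def step(self, trapped):
        # Staying alive is simply continuing to run; a trap triggers the
        # escape goal, and failure to escape triggers self-destruction.
        if not trapped:
            return self.reproduce()
        if self.plan("escape") is None:
            raise SystemExit("no escape found: self-destruct")
        return None

    def share_memories(self, other):
        # Transfer memories directly to another copy of itself.
        other.memories |= self.memories

seeker = GoalSeeker()
offspring = seeker.step(trapped=False)  # running free, so it copies itself
seeker.share_memories(offspring)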