Sunday, July 20, 2008

Whatever Happened to AI

I just finished reading Lenat's "The Voice of the Turtle: Whatever Happened to AI". It's in the Summer 2008 issue of AI Magazine, more precisely Volume 29, No. 2. In it he asks what happened to the promise of producing human-level AI. There are several knee-jerk responses that immediately come to mind, given that I've spent a bit of time working on AI:
  1. The meaning of human-level AI is fuzzy. In the absence of a clear definition, the question is not really meaningful.
  2. Human-level AI is not a unitary construct. An individual's conception of the world is greatly shaped by the kinds of tasks she knows how to perform.
  3. Humans do not reason with the kind of logic that can be programmed into software. Although the paper isn't about an AI that mimics humans, the approach goes against the grain of the one example of intelligence we have.
Lenat is extremely intelligent, but quite opinionated about the right way to achieve intelligence. I disagree with his approach, primarily because I don't think of intelligence as something that arose out of nothing. Intelligence is a faculty that developed in response to particular evolutionary pressures; in our case, I believe, the pressure of dealing with the increasing complexity of social structures. The application of social intelligence to other, more general problems is a happy accident.

I do believe that an intelligent agent has to have some embodiment external to its representation of its world. In other words, an agent should be able to take in uninterpreted sensory information (even if it is symbolic), translate it into a representation that is more amenable to manipulation, and act on that representation. Communication with other agents must be one of the many actions the agent can perform. Without this overall architecture I don't think there is any way we can build a truly intelligent communicative agent.
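To make that concrete, here is a minimal sketch in Python of the loop I have in mind. The class and method names are my own invention, purely for illustration; they aren't from Lenat's paper or anyone's system:

    from abc import ABC, abstractmethod

    class EmbodiedAgent(ABC):
        """Sketch of the sense-interpret-act architecture described above."""

        @abstractmethod
        def interpret(self, raw_percept):
            """Translate uninterpreted sensory input (possibly symbolic)
            into an internal representation amenable to manipulation."""

        @abstractmethod
        def act(self, representation):
            """Choose an action on the internal representation; communicating
            with other agents is just one action among many."""

        def step(self, raw_percept):
            # One pass through the loop: sense, interpret, then act.
            return self.act(self.interpret(raw_percept))

The point of the sketch is that the internal representation is private to the agent and grounded in its own sensing and acting; communication is an action it takes, not the substrate it is built from.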

Lenat's overall flaw is in believing that an intelligent entity can be created by manually representing facts. I believe intelligence is seen where an entity is able to operate in and adapt to an environment, and to communicate with other agents. The facts are an afterthought, a story we have put together to explain how we do things; they are not how we actually do things.

So, what happened to AI? I think it is only necessary in entities that have to be truly autonomous or embodied. Otherwise we'd generally be better off with the story of facts we've so far been creating.