- The meaning of "human-level AI" is fuzzy. In the absence of a clear definition, the question of whether or when we will reach it is not really meaningful.
- Human-level AI is not a unitary construct. An individual's conception of the world is greatly shaped by the kinds of tasks she knows how to perform.
- Humans are not logical in the way that logic can be programmed into software. Although the paper isn't about an AI that mimics humans, building intelligence on formal logic goes against the grain of the one example of intelligence we have.
I do believe that an intelligent agent has to have some embodiment external to its representation of its world. In other words, an agent should be able to take in uninterpreted sensory information (even if that information is symbolic), translate it into a representation that is more amenable to manipulation, and act on that representation. Communication with other agents must be just one of the many actions the agent performs. Without this overall architecture I don't think there is any way we can build a truly intelligent communicative agent.
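To make that loop concrete, here is a minimal sketch of the architecture I have in mind: a sense-represent-act cycle in which communicating is just another action. The names (`EmbodiedAgent`, `internalize`, `policy`) are my own illustrative choices, not anything proposed in the paper.

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class EmbodiedAgent:
    """A sense -> represent -> act loop.

    The agent never manipulates raw input directly: it first translates
    uninterpreted percepts into an internal representation, then chooses
    an action over that representation. Communicating is one action among
    many, and what a peer receives is, from the peer's point of view,
    just another uninterpreted percept.
    """
    internalize: Callable[[Any], dict]   # raw percept -> internal form
    policy: Callable[[dict], str]        # internal form -> chosen action
    world_model: dict = field(default_factory=dict)

    def step(self, raw_percept: Any, peers: list["EmbodiedAgent"]) -> str:
        # Take in uninterpreted sensory information and translate it
        # into the agent's own representation.
        self.world_model.update(self.internalize(raw_percept))
        # Act on that representation, not on the raw input.
        action = self.policy(self.world_model)
        if action == "communicate":
            for peer in peers:
                # A message is raw, uninterpreted input to the receiver.
                peer.step(dict(self.world_model), peers=[])
        return action


if __name__ == "__main__":
    agent = EmbodiedAgent(
        internalize=lambda raw: {"last_percept": raw},
        policy=lambda model: "communicate" if model else "explore",
    )
    print(agent.step("bright light ahead", peers=[]))  # -> communicate
```

The point of the sketch is the separation of concerns: the world model is the only thing the policy ever sees, and communication has no privileged status over any other action.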
The overall flaw in Lenat's approach is the belief that an intelligent entity can be created by manually representing facts. I believe intelligence shows itself when an entity is able to operate in and adapt to an environment, and to communicate with other agents. The facts are an afterthought, a story we put together after the fact to explain how we do things; they are not how we actually do things.
So, what happened to human-level AI? I think it is only necessary in entities that have to be truly autonomous or embodied. Otherwise we'd generally be better off with the story of facts we've been creating so far.