The “return of Artificial Intelligence” is an impressive trend in the blogosphere. I spent quite some time, with great pleasure, reading the two posts from Tim Urban’s blog WaitButWhy entitled “The AI Revolution: The Road to Superintelligence” and “The AI Revolution: Immortality or Extinction” (parts 1 and 2). The core of these posts is the distinction between ANI (Artificial Narrow Intelligence, what used to be called weak AI), AGI (Artificial General Intelligence, or strong AI) and ASI (Artificial Superintelligence). These posts are strongly influenced by Ray Kurzweil and his numerous books, but do a great job of collecting and sorting out conflicting opinions. They show a consensus in favor of the emergence of AGI between 2040 and 2060. I strongly recommend reading them: they are entertaining, actually quite deep, and offer a very good introduction to the concepts that I will develop later on. On the other hand, they miss the importance of perception, emotions and consciousness, which I will address in this post.
- Speculating about today’s AI algorithms as a path to strong AI is hazardous, since those algorithms will be synthesized (grown) rather than designed by hand.
- True intelligence requires senses: it requires perceiving and experiencing the world. This is one of the key lessons of the last decades from biology in general and neuroscience in particular, and I do not see why computer AI would escape this fate.
- A similar case may be made for computer emotions. Contrary to what I have heard, artificial emotions are no harder to embed than computer reasoning.
- Self-consciousness may be hard to code, but it will likely emerge as a property of the next generation of complex systems. We are not talking about giving a “soul to a computer”, but about letting free will and consciousness of oneself, in relation to time and environment, become a key perceived feature of tomorrow’s smart autonomous systems, in the sense of the Turing test.
1. Artificial Intelligence is Grown, Not Designed
2. A Truly Smart Artificial Intelligence Must Experience the World
3. Learning and Decisions Require Emotions
To continue on what we can learn from biology and neuroscience, it seems clear that computers need to balance different types of thinking to reach decisions on a large range of topics, in a way that will appear “intelligent” to us humans. A lot of my thinking for this section has been influenced by Michio Kaku’s book “The Future of the Mind”, but many other references could be quoted here, starting with Damasio’s bestseller “Descartes’ Error”. The key insight from neuroscience is that we need both rational thinking from the cortex and emotional thinking to make decisions. Emotions seem mostly triggered by the “pattern-recognition” low-level circuitry of the brain and the nervous system. This distinction is also related to Kahneman’s System 1 / System 2 description. We seem to be designed to mix inductive and deductive logic.
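To make this dual-process idea concrete, here is a minimal sketch (my own illustration, not from any of the cited authors): a decision procedure that blends a fast, pattern-matching “System 1” score with a slower, deliberative “System 2” evaluation. All names, patterns and weights are hypothetical.

```python
# Illustrative sketch of a dual-process decision maker, in the spirit of
# Kahneman's System 1 / System 2 distinction. All values are made up.

def system1(option, learned_patterns):
    """Fast, emotion-like scoring: match the option against learned associations."""
    return max((w for p, w in learned_patterns.items() if p in option), default=0.0)

def system2(option, evaluate_consequences):
    """Slow, deliberate scoring: explicitly evaluate expected consequences."""
    return evaluate_consequences(option)

def decide(options, learned_patterns, evaluate_consequences, emotion_weight=0.4):
    """Pick the option with the best weighted blend of both signals."""
    def score(option):
        return (emotion_weight * system1(option, learned_patterns)
                + (1 - emotion_weight) * system2(option, evaluate_consequences))
    return max(options, key=score)

# Hypothetical usage: an agent choosing a route home.
patterns = {"familiar": 1.0, "dark": -1.0}          # learned emotional associations
consequences = {"familiar road": 0.6, "dark shortcut": 0.9}.get
best = decide(["familiar road", "dark shortcut"], patterns,
              lambda o: consequences(o, 0.0))
print(best)  # → familiar road
```

The point of the sketch is the balance: the deliberative score alone would prefer the shortcut, but the emotional signal vetoes it, which is exactly the kind of arbitration the neuroscience literature describes.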
4. Consciousness is an Emerging Property of Complex Thinking Systems
- Self versus environment: the robot, or autonomous AI, is able to understand its environment, to see and recognize itself as part of the world (the famous “mirror test”).
- Awareness of thoughts: the robot can tell what it is doing, why and how – it can explain its processing/reasoning steps.
- Time awareness: the robot can think about its past, its present and its future. It is able to formulate scenarios, to define goals, and to learn by comparing what actually happens with its predictions.
- Choice consciousness: the robot is aware of its capability to make choices and creates a narrative (about its goals, its aspirations, its emotions and its experiences) that is a foundation for these choices. “Narrative” (story) is a vague term, which I use to encompass deductive/inductive/causal reasoning.
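The four capabilities above can be sketched as an agent interface. This is a hypothetical illustration of my own: each method is a stub standing in for machinery that, as argued earlier, would have to be grown rather than designed.

```python
# Hypothetical sketch: the four facets of machine consciousness as an interface.

class ConsciousAgent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}           # self vs. environment
        self.reasoning_trace = []       # awareness of thoughts
        self.past, self.goals = [], []  # time awareness
        self.narrative = []             # choice consciousness

    def perceive(self, observation):
        """Update the world model, tagging observations that refer to the agent
        itself (a crude stand-in for the mirror test)."""
        self.world_model[observation] = (self.name in observation)

    def act(self, choice, reason):
        """Make a choice, record why, and weave it into an ongoing narrative."""
        self.reasoning_trace.append(reason)
        self.past.append(choice)
        self.narrative.append(f"I chose {choice} because {reason}")

    def explain(self):
        """Report the reasoning steps behind past actions."""
        return list(self.reasoning_trace)

agent = ConsciousAgent("robo1")
agent.perceive("robo1 seen in mirror")
agent.act("step back", "the mirror shows myself, not another agent")
print(agent.explain())  # → ['the mirror shows myself, not another agent']
```

Each attribute maps to one bullet above; the hard part, of course, is not the interface but the emergent machinery behind it.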
5. Concluding Thoughts
- First, it is clear by now that weak AI, or ANI, is already part of our lives, and has been progressing for the last twenty years, making those lives easier. The two articles from Tim Urban and Kevin Kelly that I mentioned in this post give a detailed account with plenty of evidence. I can also point to James Haight’s post “What’s next for artificial intelligence in the enterprise?”. Kevin Kelly emphasizes the advent of “AI as a service”, delivered from the cloud by a small set of world leaders. I think he has a fair point: there is clearly a first-mover/scale advantage that will favor IBM, Google and a few other large players.
- However, there are more opportunities than “smart thinking in the cloud”: (weak) AI is everywhere and will continue to be ubiquitous. Machine learning is already here in our smartphones, and the next decades of Moore’s Law mean that connected objects and smart devices will become really smart.
- The race towards strong (or at least stronger) AI is on, as illustrated by the massive investments made by large players in the field. The next target is NLP (natural language processing), which is within our reach because of the exponential progress of computing power, big data (storage capacity and availability of data) and deep learning algorithms.
- This is a very disruptive topic. I agree neither with Kelly’s optimistic vision in his paper nor with Ray Kurzweil’s. The disruption will start much earlier than the advent of the strong AI stage. For instance, the tidal wave of ANI may cause such havoc as to make AGI impossible for decades. This could happen either for ethical reasons (laws slowing down access to AGI resources because of concerns about what “weak” AI will already be able to do in a decade) or for political reasons (the turmoil created by massive job destruction due to automation).
- Emotions and senses are part of the roadmap towards strong AI (AGI). Today’s focus is on cortex simulation as a model for future AI, but everything, from cognitive science to biology, suggests that it is the complete nervous system, from brain to body, that will teach us how to grow efficient autonomous thinking. This is actually easier to state in a negative form: AI designed without emotions, through a narrow focus on growing cognitive and deductive thinking by emergent learning, will most probably be less effective than a more balanced “society of minds”, and almost certainly very hard to control.
- Consciousness will emerge along the way towards strong AI. It will happen faster than we think, but it will be more progressive (dog-level, child-level, adult-level, god-knows-what-level, …). Strong AI will not grow “in a box”, it will grow from constant and open interactions with a vast environment.