Sunday, December 17, 2006

Strong AI and Information Systems

The following is a translation of a discussion with three students from ESIEE: Philippe Glories, Erwan Le Guennec and Yoann Champeil.

- In your opinion, where does one mostly find AI in everyday life?

There are two types of answers, according to the scale of the systems. Large-scale systems require distributed AI (multi-agent systems, etc.), whereas smaller-scale systems may rely on the many technologies that have been developed so far for "local use" (such as expert systems, constraint solving, knowledge bases, inductive or case-based reasoning, fuzzy logic, etc.). Actually, AI is already everywhere! In cameras, cars, central heating… and equally in existing applications of current Information Systems. As far as embedded systems are concerned, fuzzy logic (and neural nets) are heavily used for intelligent control. Information Systems applications make use of rule-based systems (sales rules, monitoring rules, and so on).

Large-scale applications of distributed AI are scarcer, as far as I know.
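To make the "intelligent control" point concrete, here is a minimal sketch of a fuzzy heating controller of the kind found in appliances. The membership functions and rule outputs are invented for illustration, not taken from any real product:

```python
# A minimal sketch of fuzzy "intelligent control", as used in appliances
# such as central heating. All membership functions and rule outputs are
# illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_c):
    """Fuzzy rules: IF cold THEN high power; IF comfortable THEN low power;
    IF hot THEN no power. Defuzzified by weighted average."""
    cold = tri(temp_c, 5, 12, 19)    # hypothetical membership for "cold"
    comfy = tri(temp_c, 17, 20, 23)  # "comfortable"
    hot = tri(temp_c, 22, 28, 35)    # "hot"
    total = cold + comfy + hot
    if total == 0:
        return 100.0 if temp_c < 20 else 0.0  # fall back outside the range
    # Each rule votes for a power level (in %), weighted by its membership.
    return (cold * 100 + comfy * 30 + hot * 0) / total

print(heater_power(10))  # cold room -> high power
print(heater_power(20))  # comfortable -> low power
```

The interest of the fuzzy formulation is that behaviour changes smoothly between rules instead of jumping at a single threshold, which is exactly what one wants from a thermostat.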

- May one talk about "strong AI" (as opposed to "weak AI") to deal with autonomy? (Autonomy would require self-awareness to manage oneself.)

This is both a possible and useful distinction, although it is difficult to manage because of the continuous nature of autonomy. However, one may distinguish between AI that applies rules to a situation and AI that uses a model of the problem it tries to solve, together with a model of its own action capabilities (a meta-representation of oneself being a first step towards consciousness), so that it may adapt and devise different appropriate reactions.
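The second kind of AI can be sketched very simply: an agent that holds an explicit model of its own action capabilities and selects whichever action its model predicts will best serve the goal. All names and numbers here are illustrative assumptions:

```python
# A sketch of an agent with a meta-representation of its own action
# capabilities: it predicts the effect of each action it can take and
# picks the one closest to the goal. Values are purely illustrative.

# The agent's model of itself: expected temperature change per action.
self_model = {
    "heat": +2,
    "cool": -2,
    "wait":  0,
}

def choose_action(current, goal):
    """Pick the action whose predicted outcome lands closest to the goal."""
    return min(self_model, key=lambda a: abs((current + self_model[a]) - goal))

print(choose_action(18, 21))  # -> "heat"
print(choose_action(21, 21))  # -> "wait"
```

Contrast this with a pure rule-based agent, which would map situations to reactions directly; here the behaviour adapts because the self-model, not the reaction, is what is written down.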

I would present a different type of distinction: made (or "built") AI vs. emergent (or "born") AI. In the first case, a piece of software produces (intelligent) solutions that are predictable as a consequence of the original design. In the second case, the nature of the solutions is harder to foresee; they emerge from the components together with the reflexive nature of the application (its meta-model). It is another way to look at the weak/strong difference.

- Is the creation of autonomous AI already feasible? Do we have the technical means? If not, are we likely to get them in a few years? Are there any special theories that are required to develop these?

I am no longer enough of an expert to answer this question with any form of authority. I believe that the answer is positive, in the sense that we have all that we need, from a technology standpoint, to create autonomous AI. It is mostly a knowledge representation issue.

- Do you feel that the creation of autonomous AI is advisable and desirable? From an industrial perspective? From a societal perspective? From a scientific perspective?

This is a large and difficult question!

I would answer positively, since I believe that only a strong AI approach will enable us to break the complexity barrier and to attack distributed AI problems. This is especially true for Information Systems issues, but I believe it holds for a more general class of problems. To put it differently, successfully solving distributed problems may require relinquishing explicit control and adopting an autonomous strategy (this is obviously the topic of this blog and of Kelly's book).

There are associated risks, but one may hope that a rigorous definition of the meta-model, together with some form of certification, may help to master those risks.

Obviously, one of the risks, from both an industrial and a social perspective, is to see the emergence of systems with "too much autonomy". As a consequence, a research field that needs to be investigated is the qualification of the "degrees of freedom" that are granted to autonomous systems. A precise answer will collide with classical undecidability problems; however, abstract and "meta" answers may be reachable.

- From a philosophical point of view, do you see autonomous artificial intelligence as a threat to mankind?

No, from a philosophical point of view, autonomous AI is an opportunity. There is a danger, however, from both an ethical and a practical standpoint. Practically, the abuse of autonomy without control may have negative consequences. From an ethical point of view, there is a potential impact on society and the economy of work, as the delicate balance between production and consumption roles may be affected (which is true, by the way, of any method of automation).

- To summarize, would you qualify yourself as an opponent or an advocate of autonomous AI?

Without a doubt, I see myself as a proponent of AI! The reasons are, in part, those expressed in this blog: autonomous AI is the only approach to resolving complex problems for which a solution is really needed. I see delivering the appropriate level of quality of service in an information system as an example of such a worthy cause :)

A last remark: the scale issue is really key here. The same rules should not apply at both scales:

(1) On the small scale, components should be built with a "mechanical vision": proper specifications, (automated) testing and industrial quality using rigorous methods. When "intelligent" behaviour is needed, classical AI techniques such as rules or constraints should be used, for which the "behavioural space" may be inferred. Although this is just an intuition, I suspect that components should come with a certification of what they can and cannot do.
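The appeal of rules here is that the behavioural space can be read off the rules themselves. A toy sketch, with a hypothetical rule format and action names, of how such a "certificate" might be inferred:

```python
# Sketch: for a rule-based component, the set of actions it can ever
# emit (its "behavioural space") can be inferred directly from the rule
# base. Rule format and action names are hypothetical.

RULES = [
    # (condition on a sensor value, action)
    (lambda v: v < 0,   "shutdown"),
    (lambda v: v < 50,  "run_normal"),
    (lambda v: v < 100, "run_degraded"),
    (lambda v: True,    "shutdown"),  # default rule: fail safe
]

def decide(value):
    """First matching rule wins, as in a classical production system."""
    for cond, action in RULES:
        if cond(value):
            return action

def behavioural_space():
    """Every action the component can ever emit: a crude 'certificate'."""
    return sorted({action for _, action in RULES})

print(behavioural_space())
# -> ['run_degraded', 'run_normal', 'shutdown']
```

Whatever input `decide` receives, it can do nothing outside that enumerated set, which is exactly the kind of guarantee a certification could state.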

(2) On the other hand, large-scale systems, made of a distributed network of many components, should be assembled with "biomimetic" technology, where the overall behaviour emerges, as opposed to being designed. My intuition is that declarative, or policy-based, assembly rules should be used so that an "overall behavioural space" may be defined and preserved (which is why we need certified components to start with). The issue here is "intelligent control", which requires self-awareness and "freedom" (autonomy).
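Points (1) and (2) fit together: if every component declares its possible actions, a declarative assembly policy can check that the assembled system stays inside the desired overall behavioural space. A toy sketch, with invented component and action names:

```python
# Sketch: combining certified components under a declarative assembly
# policy. Each component declares the actions it can emit (see the
# "certificate" idea above); the policy lists what the overall system
# is allowed to do. All names are illustrative.

component_certificates = {
    "thermostat": {"heat_on", "heat_off"},
    "valve":      {"open", "close"},
    "alarm":      {"ring", "silent"},
}

# Policy: the allowed overall behavioural space of the assembly.
ALLOWED = {"heat_on", "heat_off", "open", "close", "silent"}

def check_assembly(certs, allowed):
    """Return, per component, the actions that escape the policy.
    An empty result means the assembly is acceptable."""
    return {name: acts - allowed
            for name, acts in certs.items()
            if acts - allowed}

print(check_assembly(component_certificates, ALLOWED))
# -> {'alarm': {'ring'}}  (this assembly would be rejected)
```

The check is static, on the certificates, not on the running system: the emergent behaviour is left free inside the declared envelope, which is the point of a policy-based assembly.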

1 comment:

brunoprexl said...

autonomous AI is the only approach to resolve complex problems, for which a solution is really needed

Well... is it?
What are these complex problems for which a solution is really needed?
What are we achieving with ever-increasing automation?
Why are we doing all this?

Setting up autonomic computing, AI, etc. is indeed a fantastic goal, a fascinating intellectual game. But what for?

Shouldn't we slow down a minute? Our history could perhaps be summarised as follows: "if I can imagine it, it must be feasible; if it is feasible, let's find a way, and let's do it! To hell with the consequences, we'll see!".

I guess my point is, again, about acceptance. What would be the global benefit of autonomic computing for the man in the street?

Another (provocative!) question: is the "quest for AI" only a new quest for the "philosopher's stone"? It sounds to me like "magic": problems are getting too complex, so let's invent a (magical) system that will solve them for me; I don't care how, as long as it works.

We all know that Liberty doesn't exist without (politically agreed) Limits. What are the Limits of Science?
Or, to say it differently, what is the process that sets Limits on Science?
