I am stepping in to defend AI from some of the attacks it has been suffering lately:
- the assault on the stagecoach (the rush to grab the funding)
- the attack by the theorists
- the attack on the lack of explanations for results.
In this post I deal with the first; the others will follow in later posts.
There are also other attacks, concerning ethical questions, for which there is a dedicated section, so I leave that defense to others.
I write in English so as to feed the debate at the international level as well.
A discussion arose on Tuesday in Brussels about what counts as an AI application. Somebody suggested that using drones in agriculture to optimize the use of pesticides, based on weather conditions or other sensory data, is AI.
To me that sounded more like an Operations Research problem, or at most a Machine Learning problem if one is given past data with the corresponding best choices.
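To make the distinction concrete, here is a minimal sketch (with entirely hypothetical numbers) of the drone-pesticide problem cast as Operations Research: a plain linear program picking doses per field zone, with no learning involved.

```python
# A hypothetical pesticide-dosing problem as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_zones = 5
cost_per_liter = np.full(n_zones, 3.0)       # objective: minimize total pesticide cost
efficacy = 0.6 + 0.3 * rng.random(n_zones)   # hypothetical efficacy per liter, from sensor data
required = np.full(n_zones, 1.0)             # required pest suppression per zone

# linprog minimizes c @ x subject to A_ub @ x <= b_ub; the constraint
# "efficacy[i] * dose[i] >= required[i]" becomes "-efficacy[i] * dose[i] <= -required[i]".
res = linprog(
    c=cost_per_liter,
    A_ub=-np.diag(efficacy),
    b_ub=-required,
    bounds=[(0.0, 10.0)] * n_zones,          # dose limits in liters per zone
)
print("optimal doses per zone:", np.round(res.x, 2))
```

Solving this well is valuable, but it is optimization, not intelligence.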
Similarly, an algorithm that learns when to turn off machines in a datacenter according to load predictions built from past data is ML, but it is not AI.
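The datacenter case, likewise, is ordinary supervised learning plus a fixed rule. A minimal sketch, where the data, the features, and the threshold are all hypothetical:

```python
# Plain supervised learning on historical load, followed by a fixed rule.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 30)                       # 30 days of hourly load samples
load = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

def features(h):
    # encode the daily cycle so a linear model can capture it
    angle = 2 * np.pi * (h % 24) / 24
    return np.column_stack([np.sin(angle), np.cos(angle)])

model = LinearRegression().fit(features(hours), load)   # learn from past data

predicted = model.predict(features(np.array([18])))[0]  # forecast for hour 18
if predicted < 40:                               # hypothetical shutdown threshold
    print(f"predicted load {predicted:.1f}: power some machines down")
```

The learning step is ordinary curve fitting; the decision is a hard-coded threshold.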
AI to me means the ability to perform tasks that require human intelligence. None of the tasks above can be done well by humans, so they do not require human intelligence and do not qualify as AI when done by machines.
Generalizing, most Big Data analytics tasks might not be considered AI, since by definition the human mind is limited and cannot absorb large amounts of data.
It is true that humans can be aided, for example in decision making, by statistical analysis of those data. Statistics can be quite an effective tool for identifying correlations, but there is no spark of intelligence in doing this. Sometimes humans, looking at some of these correlations, find them surprising or unexpected: in that case there is a human-intelligence value added. If a system could learn to pick out those unusual cases, that would qualify as intelligence. Remember though that Cristian Calude and Giuseppe Longo proved a theorem showing that, given a sufficient amount of data, one can find arbitrary correlations. Hence it is not the ability to find correlations that matters, but rather the ability to hand-pick those that make sense to us.
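This effect is easy to reproduce. A quick simulation (sizes are arbitrary): with many variables and few samples, strong correlations show up even in pure noise.

```python
# With enough variables, strong pairwise correlations appear in pure noise.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_vars = 20, 2000
data = rng.normal(size=(n_samples, n_vars))   # fully independent random variables

corr = np.corrcoef(data, rowvar=False)        # all pairwise correlations
np.fill_diagonal(corr, 0.0)                   # ignore trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'correlation' between noise variables {i} and {j}: {corr[i, j]:.2f}")
```

None of these correlations means anything; picking out the ones that make sense is exactly the part statistics cannot do for us.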
On the other hand, humans are quite good at absorbing huge amounts of sensory data (e.g. movies) and understanding what goes on, relating it to previous experiences, drawing conclusions, and answering questions about what they saw.
None of these tasks can currently be performed well by algorithms. They definitely qualify as tasks requiring intelligence.
So where is the border between true AI systems and merely statistical, or even plain ML, tasks?
This is not just an academic question, because it hinges on the issue of which research should be funded within an AI funding initiative. Since funding for AI is quite limited, one should avoid classifying as AI things that pertain to other research topics. Otherwise we risk funding going to activities that are already well funded elsewhere.
Today, for example, a representative of the Data Value project came to an AI project funding meeting praising the relevance to the initiative of extracting knowledge from data. While we do apply AI techniques for extracting knowledge from unstructured data and doing entity linking, I believe that AI should aim at extracting competence, not just knowledge.
Knowledge by itself is sterile; an AI system should be able to act on such knowledge.
I am observing a phenomenon of crowds of people and companies jumping on the AI bandwagon just because it is fashionable, while proposing to do things they have already been doing under a different title.
This, however, may cause money destined for AI to be wasted.
So I want to stress that Machine Learning is a technique used in AI, but not all ML applications are AI applications.
Yann LeCun said in a recent interview that intelligence requires learning: there is no intelligence without learning. Therefore AI requires Machine Learning, but the opposite is not necessarily true.
As a case in point, a recent article in Nature points to problems in the application of statistical methods:
Expert statisticians are aware of these problems, but researchers who apply statistics in other disciplines should learn not to overestimate the reliability of the techniques they use.
Most of the experts interviewed agree that our human mind is not well suited to interpreting statistical data. Our mind is wired to induce causal effects from our experiences, because that is what has allowed our species to survive: making reliable predictions about what is going to happen.
That is also at the origin of religions: our mind has difficulty accepting the idea that nothing caused our world to exist.
So we often fall into the fallacy of interpreting a statistical correlation as a cause-effect relation. In complex phenomena like those of intelligence, there is rarely such a link.
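A tiny simulation makes the fallacy tangible: two variables that never influence each other can still be strongly correlated through a hidden common cause (all numbers arbitrary).

```python
# Correlation without causation: a hidden confounder drives both variables.
import numpy as np

rng = np.random.default_rng(7)
confounder = rng.normal(size=5000)                    # hidden common cause
a = confounder + rng.normal(scale=0.5, size=5000)     # a never influences b
b = confounder + rng.normal(scale=0.5, size=5000)     # b never influences a

print(f"corr(a, b) = {np.corrcoef(a, b)[0, 1]:.2f}")  # strong, yet no causal link
```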