thank you so much, these are wonderful starting points on which to expand.
In our discussion we were addressing issues which are really broader than this, arriving at arguments of Foucauldian (but not only Foucauldian) inspiration, which are really important in today's technological scenarios.
If I had to expand on the points you list, I would include the principle of the citizen-partner: creating the prerequisites according to which the citizen is a partner to the public administration, rather than the object of a paternalistic role.
This is a fundamental approach in the ways in which a modern government should approach technologies.
It is complex. It requires education; open, accessible and usable knowledge; accessible infrastructures; adequate policies; cultural evolution pushed through adequate measures; political representatives and a cultural class who can set an example for people; and many more things.
Most of all it requires an intervention on imagination, to enable people to imagine a condition which is different from the present one. And, of course, this is a sort of "chicken and egg" problem. It can be resolved by recognizing that the famous "chicken and egg" question is badly posed: it is the wrong question, because both the chicken and the egg evolved from something different that was there before. For this we must drive evolution, instead of disruption. And evolution, in this sense, is as much cultural as technological. So we must not allow ourselves to confront these issues only from a technological point of view, but must include cultural, artistic, aesthetic and, in a word, humanistic approaches as a fundamental part of the process.
And, in this sense:
I think that this would also go in the wrong direction, leading to a paternalistic view of what a government should do. We should really address more serious, wider issues here: not think about these micro-solutions, but first confront the cultural and aesthetic (which means "perception", "sensibility") implications, which are the ones that will drive change. Only then will we know how to design a technology that could offer effective support for a condition as complex as dependence on alcohol. (There is, of course, wide evidence of the inefficacy of prohibition and of these blocking patterns, which only serve to expand informal, untracked markets.)
And about this:
<<This is an enormous limit about all of those theories which imagine that they can capture a human’s essence and reproduce it into some form of computational construct, imagining that what is in our skull defines it. Which is obviously not true>>
Yes sure! Let’s extend the discussion in private, so we don’t clog the discussion here.
In the meantime, one article worth sharing is this:
While I don't fully agree with Epstein (and if you want I can also explain why), the article achieves an important objective by bringing the psychological discourse into the discourse on AI.
Because there is no evidence that what we call "I" (including personality) actually resides in the box of your skull, or even within your body.
On the contrary, there is plenty of psychological and neurological evidence about the fact that phenomena such as personality, conscience and intelligence are networked, relational phenomena, also including non-human elements (animals, nature, objects).
This consideration alone would be sufficient to invalidate most of the tentative approaches towards AIs.
On this point I just love Gregory Bateson's expression "it takes two to know one". It points at the wider question: if, for example, "intelligence" is the system, we must at least become more certain about where the system ends — what are the minimum, necessary boundaries we want to take into account if we want to be able to comprehend it.