Artificial Intelligence and Robotics: Into the Next Gear

The Age of Artificial Intelligence, the age in which intelligent machines play a central role in our society and in our economy, is here. This is not science fiction or the prelude to a Hollywood movie; this will be the reality for most of us by the latter part of the twenty-first century. With search engines, bots, drones, and prototypes of self-driving cars, we can already catch first glimpses of these agents of artificial intelligence. I do not wish to call for general alarmism but, rather, to discuss, however briefly, what may be an important inflexion point in the history of human progress, as only a handful before it have been. The coming-together, with enough industrial maturity for large-scale production, of our semiconductor and digital technologies; our algorithmic and computational knowledge; and our mechanical technologies will usher in a new age. After the mastery of fire; agriculture and animal husbandry; the alphabet and abstract scripts; the forging and casting of bronze and then iron; and the birth of empirical science and the industrial revolution, self-adapting artificial intelligence is likely the next driver of great change to our way of life, our socio-political structures, and even our ethics. If this indeed turns out to be our next technological high-plateau, it will have, like those that preceded it, many deep implications for all human societies.

Some of these implications will bear on our professional endeavours and our socio-economic structures. Since the dawn of agriculture, some human beings have relied on the surplus production of others for material sufficiency. Agricultural surplus allowed for hierarchical social structures, but also for a greater possibility of leisure, and from there, more time for speculation, aesthetics, knowledge development, invention, and discovery. In fact, manpower surplus has shaped social structures and traditions across the globe for millennia. Yet in the age of smart and adaptive robotics en masse, this economic way of being will be challenged, as the surplus on which we depend will increasingly come from technological agents. This will not happen overnight; it will be a gradual process. Nonetheless, a critical mass will be reached in the not-so-distant future, even in the services sector, and I fail to see how this will not precipitate important socio-political changes and reorganisations. Possibilities include more deflation stemming from gains in productivity (e.g. we can already see numerous products and services that have become cheaper and more accessible over the past decade with digitisation and the Internet); the question of ownership of the distributed productive capacities (i.e. who will own these widely available smart robots, a few monopolies or all of us?); the problem of subsistence for low-income populations, those who would be deprived of such smart agents and whose livelihood depends on providing services replaced by robots; and even changes to the traditional ways of exchanging goods and services, including the notion of money itself. Furthermore, as is unfortunately often the case with new technologies, intelligent robots will have direct consequences for the conduct of human warfare, as we already see with the increasing use of drones.

Other implications will be philosophical and ethical, given the increasing dissociation of intelligence and consciousness, on one side, from biological life, on the other, extending even to the transfer and continued functioning of human consciousness and memory after biological death. In addition to the problems of ‘immortality’ it might generate, developed and self-recognising artificial intelligence will expose us afresh to ethical questions that have preoccupied some thinkers for millennia but have remained fringe specialist subjects until now. The problem of what constitutes personhood will take on a more general importance in society with the coming of enduring artificial consciousness and greater self-learning and self-adapting artificial intelligence. Equally, the problem of what makes someone human will resurface with the emergence of an alternative developed consciousness; and this problem will, in theory, be as vivid as when Homo sapiens co-existed with its cousins of the Homo genus, such as Homo neanderthalensis, who equally possessed developed consciousness (not that Homo sapiens bothered then with what makes them human; they mostly cared about staying alive). This time around, we will be facing another co-existing consciousness, only of the ‘artificial’ kind. Again, these are not fanciful scenarios today but lie within the remit of where artificial intelligence can take us. The ethical and legal implications are evidently tremendous; the only equivalent ‘ethical earthquake’ would be to come face-to-face with a sophisticated and conscious alien civilisation. Ironically, it is quite likely that we will create artificial consciousness before meeting any such outside civilisation.

In another respect, the frontier between the virtual and the real will become blurrier with the expansion of an intelligent digital world. ‘Virtual reality’ and reality as we have envisaged it so far will be harder to distinguish. The body does not easily differentiate sensations triggered by virtual drivers from those triggered by real ones, especially without prior awareness of their origin. Furthermore, one of the main ways by which we distinguish the virtual from the real lies in the prevalence of the latter. All of this is subject to change with widespread artificial intelligence. We need only imagine ourselves living constantly in a self-adapting virtual environment of our own making; or think about what a robot finely imitating a baby would do to our parenting instincts. And if we already have a tendency to anthropomorphise biological animals, it will prove difficult indeed for our conscious control to constantly alert us against sensations caused by a well-engineered virtual reality or by human-like robots.

These are only some of the deep implications that an age of mass-scale, developed, and conscious artificial intelligence will likely bring.

JHTF


Criticising the Hard Normative Stance in Philosophy

A historical idea has long existed that there is, or needs to be, a thought discipline holding a monopoly over saying how various things of the world, of Existence, and of Reality, including Existence and Reality themselves, ought to be, and over which types of questions, problems, and desires ought to be treated by which particular thought or practical discipline. Naturally, many philosophers gave Philosophy this crown jewel of thought; philosophy was, and is still, seen by them as possessing the monopoly of the ‘ought-to-be’. This is what is called in the discipline the ‘normative aspect’ of philosophy. Kant, for example, is famous for his contribution to Normative Ethics, Aristotle for seeking normativity in both Logic and Metaphysics, and Descartes for seeking normativity in Epistemology. In all cases, despite great, and in many cases indispensable, contributions, all these renowned thinkers fell short of their initial ambitions. Nor is the urge towards normative approaches in Philosophy only a historical, long-gone practice; it persists in more modern times with, for example, Habermas and his idealised thresholds (for legitimacy, for instance) in political theory and discursive agreement, or Popper and his several failed attempts at formalising his philosophy satisfactorily.

The dream of making Philosophy the driving normative discipline of Reality, Existence, and Knowledge has failed on multiple intellectual and practical accounts; we shall briefly discuss a few below.

On the intellectual front, David Hume, the eminent Scottish philosopher whose ideas continue to find validity centuries later, was the first to formulate the difficulty with normative approaches in a clear manner, in his famous ‘Is-Ought Problem’. Hume’s basic idea on the subject is simple: we observe the world around us as it is, while when we seek to create norms (for that is what normative approaches seek), we are actually looking to talk about the world as it ought to be, whether from a moral, scientific, religious, political, or other perspective. For Hume, all of our ideas are based on our observations of the world as it is, and hence there is no clear basis for the jump we make from observing the world to saying how it ought to be; pretty simple, and perplexing indeed.

Now for a practical account: historically, it is the other ‘lesser’ disciplines, as some might be inclined to call them, rather than Philosophy itself, that have contributed most to what lies at the core of the scope of normative approaches. For example, Physics has contributed more than Philosophy to understanding Cosmogony, Formal Logic more to understanding Epistemology, and the Cognitive Sciences more to understanding Phenomenology. Yet many philosophers still believe that Philosophy as a discipline is superior to the Natural Sciences or Mathematics, or that it has the right to define what these other disciplines should treat and in what manner.

And thirdly, for an ontological account: since everything is interrelated and of the same fundamental nature, as I hold, and since all things are ultimately circular, all events are ultimately at par, and hence the idea of any discipline holding a monopoly over norms is a flawed one. There is no clear premise for why a series of events whose role is to analyse other events, as is the case with any discipline of thought, should have some sort of fundamental superiority or, for that matter, be more influential. It is also likely that, because of the circularity not only of the world of thought but also of the material world, all efforts towards developing detailed norms will remain ontologically inadequate (although existentially necessary for humans in some areas, such as Ethics).

I am not trying to attack Philosophy in general in this short article; rather, I am directing criticism towards a particular form of arrogance in a few philosophical circles that claim a monopoly over the norms of the world of Thought, of Reality, and of Existence. Philosophy does remain the most adequate generalist discipline for asking the right questions and for pointing specialists to some of the areas on which it may be more worthy to focus. Philosophers can, and should, still attempt to hold together all the various thought disciplines in a whole that makes sense, no matter how much more difficult this has become of late with the advances of many specialised disciplines. The philosopher is a man or a woman who holds all the strings of human activity together; is torn between them; makes continuous efforts not to succumb one way or another; endures constant accusations of being wrong in what she or he does or says; and yet perseveres. But centralising Thought does not mean monopolising it, and in a spherical geometry there is no peak and no summit, only interlinked parts.

On a final note for the specialists: the only normative ‘power’ out there, if there is one, is in Mathematics, in particular in Set Theory, which is reducible to second-order logic. And even there, we still face the difficulty of ontological commitment, which we need not detail here.
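To give a flavour of the sort of reduction at stake (a sketch of my own, assuming the standard observation is what is meant; it is not spelled out in the note above): in first-order ZF, Separation is an infinite schema, one axiom per formula, whereas second-order logic can state it as a single axiom quantifying over properties:

```latex
\forall F\,\forall x\,\exists y\,\forall z\,\bigl(z \in y \leftrightarrow (z \in x \wedge F(z))\bigr)
```

The difficulty of ontological commitment mentioned above then resurfaces as the question of what the property variable F is taken to range over.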

JHTF

The Power of Mental Shortcuts

When we attempt to explain what makes human cognitive abilities special, or try to shed light on what makes our mental capabilities different from, or ‘better’ than, the other cognitive capabilities around us (be it in other animals or in artificial machines), we can think of many elements: a higher level of consciousness, a developed memory, or advanced analytical and logical capabilities of wide scope. Yet it is the power of our mental shortcuts that is of crucial importance and is often overlooked. These mental shortcuts have been given many names: intuition, heuristics, problem-solving tools, etc. In fact, what still makes humans capable of producing some things that artificial machines cannot achieve today is not necessarily their ‘intelligence’ or their ‘memory’; it is rather their ability to approach mental data efficiently through cognitive shortcuts, to translate problems quickly and efficiently into equations (equations that machines are more capable of solving than we are), and to represent cognitively a wide variety of the things they experience.

This does not mean that our mental shortcuts are always right; many shortcuts do lead to errors in judgment and behaviour in some situations. The origins of these shortcuts are partly instinctual, partly developed with age, and largely altered by experience and by the environment. This makes mental shortcuts a difficult subject to understand, and one of very wide scope. Mental shortcuts are key constituents of what we call cognitive models, on which much of our knowledge and conception of Reality and Existence depends.

Historically, Henri Bergson envisaged two types of intelligence: one analytical, which operates by reducing a problem into smaller pieces, analysing each piece, and proceeding by conjunction; and another, more intuitive intelligence, inscribed in duration (la durée réelle or la durée créatrice, real or creative duration), where everything is considered as a flow rather than as a sum of parts. Curiously, Bergson saw analytical intelligence as a hallmark of human intelligence (alongside the intuitive intelligence we share with animals) and made it the cause of many of our fallacies and weaknesses of understanding. Our recent knowledge in the cognitive sciences points to the contrary: we have very powerful mental shortcuts, very powerful heuristics; we approach our experiences in a nimble and short-circuited manner more than in an analytical one. In our mental shortcuts lies a good deal of our greatness, but also of our faults.
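To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not Bergson’s or the essay’s; the travelling-salesman framing, the city names, and the coordinates are all assumptions made for the example). The ‘analytical’ route decomposes the problem into every possible ordering and examines each piece; the heuristic ‘shortcut’ simply hops to the nearest unvisited city, which is vastly cheaper but, as noted above, can err:

```python
# Analytical decomposition vs. a heuristic shortcut, on a toy route-finding problem.
from itertools import permutations

CITIES = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4), "E": (2, 2)}

def dist(a, b):
    """Euclidean distance between two named cities."""
    (x1, y1), (x2, y2) = CITIES[a], CITIES[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def tour_length(tour):
    """Total length of a tour visiting the cities in order."""
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def analytical(start="A"):
    """Exhaustive 'analytical' search: examine every ordering of the remaining cities."""
    rest = [c for c in CITIES if c != start]
    return min(((start,) + p for p in permutations(rest)), key=tour_length)

def heuristic(start="A"):
    """Greedy 'shortcut': always hop to the nearest unvisited city."""
    tour, unvisited = [start], set(CITIES) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

best, quick = analytical(), heuristic()
print(best, round(tour_length(best), 2))    # optimal, but examines n! orderings
print(quick, round(tour_length(quick), 2))  # fast (about n^2 steps), usually good, sometimes wrong
```

The exhaustive search costs O(n!) while the greedy shortcut costs O(n²); the gap between the two is, in miniature, the efficiency this essay attributes to our mental shortcuts, errors included.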

JHTF