Artificial Intelligence and Robotics: Into the Next Gear

The Age of Artificial Intelligence, the age in which intelligent machines play a central role in our society and in our economy, is here. This is not science fiction or the prelude to a Hollywood movie; this will be the reality for most of us starting in the latter part of the twenty-first century. With search engines, bots, drones, and prototypes of self-driving cars, we can already perceive the first glimpses of these agents of artificial intelligence. I do not wish to call for general alarmism but, rather, to briefly discuss what may be an important inflexion point in the history of human progress, like a handful before it. The coming-together, with enough industrial maturity for large-scale production, of our semiconductor and digital technologies; our algorithmic and computational knowledge; and our mechanical technologies will usher in a new age. After the mastery of fire; agriculture and animal husbandry; the alphabet and abstract scripts; the forging and casting of bronze and then iron; and the birth of the empirical sciences and the industrial revolution, self-adapting artificial intelligence is likely the next driver of a great change to our way of life, our socio-political structures, and even our ethics. If this indeed turns out to be our next technological high plateau, it will have, like those that preceded it, many deep implications for all human societies.

Some of these implications will bear on our professional endeavours and our socio-economic structures. Since the dawn of agriculture, some human beings have relied on the surplus production of others for material sufficiency. Agricultural surplus allowed for hierarchical social structures, but also for a greater possibility of leisure; and from there, more time for speculation, aesthetics, knowledge development, invention, and discovery. In fact, manpower surplus has shaped social structures and traditions across the globe for millennia. Only, in the age of smart and adaptive robotics en masse, this economic way of being will be challenged, as the surplus on which we depend shall increasingly come from technological agents. This will not happen overnight; it will be a gradual process. Nonetheless, a critical mass will be reached in the not-so-distant future, even in the services sector, and I fail to see how this will not precipitate important socio-political changes and reorganisations. Possibilities include more deflation stemming from gains in productivity (e.g. we can already notice numerous products and services that have become cheaper and more accessible over the past decade with digitisation and the Internet); the question of ownership of the distributed productive capacities (i.e. who will own these widely available smart robots: certain monopolies or all of us?); the problem of subsistence for low-income populations, those who would be deprived of such smart agents and whose livelihood depends on providing services replaced by robots; and even changes to the traditional ways of exchanging goods and services, including the notion of money itself. Furthermore, as is unfortunately often the case with new technologies, intelligent robots will have direct consequences for the conduct of human warfare, as we already see with the increased usage of drones.

Other implications will be philosophical and ethical, with the increasing dissociation between intelligence and consciousness on one side and biological life on the other, extending even as far as the transfer and continued functioning of human consciousness and memory after biological death. In addition to the problems of ‘immortality’ it might generate, developed and self-recognising artificial intelligence will expose us afresh to ethical questions that have preoccupied some thinkers for millennia but have remained fringe specialist subjects until now. The problem of what makes personhood will become of more general importance in society with the coming of enduring artificial consciousness and of greater self-learning and self-adapting artificial intelligence. Equally, the problem of what makes someone human will emerge again with the emergence of an alternative developed consciousness; and this problem will, in theory, be as vivid as when Homo sapiens co-existed with its cousins of the Homo genus, such as Homo neanderthalensis, who equally had developed consciousness (not that Homo sapiens bothered then with what made them human; they mostly cared about staying alive). This time around, we will be facing another co-existing consciousness, only of the ‘artificial’ kind. Again, these are not fanciful scenarios today but within the remit of where artificial intelligence can take us. The ethical and legal implications are evidently tremendous; the only comparable ‘ethical earthquake’ would be to come face-to-face with a sophisticated and conscious alien civilisation. Ironically, it is quite likely that we will create artificial consciousness before meeting any such outside civilisation.

In another respect, the frontier between the virtual and the real will become blurrier with the expansion of an intelligent digital world. ‘Virtual reality’ and reality as we have envisaged it so far will be harder to distinguish. The body does not easily differentiate, especially without prior awareness of their origin, between sensations triggered by virtual and by real drivers. Furthermore, one of the main ways by which we distinguish the virtual from the real is the prevalence of the latter. All of this is subject to change with widespread artificial intelligence. We only have to think of ourselves living constantly in a self-adapting virtual environment of our own making; or of what a robot finely imitating a baby would do to our parenting instincts. And given our tendency to anthropomorphise even biological animals, it will prove difficult for our conscious control to constantly alert us against sensations caused by a well-engineered virtual reality or by human-like robots.

These are only some of the deep implications that an age with mass-scale developed and conscious intelligence will likely bring.



Words, Languages, and Disagreements

“We do not, in general, use language according to strict rules – we commonly don’t think of rules of usage while talking, and we usually cannot produce any rules if we are asked to quote them.”

“But what we are destroying are only houses of cards, and we are clearing up the ground of language on which they stood.”

Both quotes are from Ludwig Wittgenstein.

I suppose one cannot talk about Language and its uses without being reminded of Wittgenstein and without giving him due credit.

Whenever I am about to start a serious conversation with somebody on a particular subject, such as the existence of God, whether there is such a thing as Fate, which country is more democratic than another, or whether I am more right-wing or left-wing, I often begin by asking my counterparty what she or he means by God, by Fate, by more democratic, or by right- and left-wing. I do so not because I am a fan of rhetoric or as a way of tricking my counterparty in the discussion; I do it simply to avoid useless and protracted discussions that lead nowhere because each party holds a different definition of the same word as a starting point, while not admitting the possibility that another definition might be used by a different person. People often jump into such discussions, argue for hours, and then ‘agree to disagree’ in the best-case scenario; while in reality they have been talking most of the time about slightly or largely different things using the same words, and hence their discussion has been futile all along. It is therefore important to know a bit about the genesis and the various uses of Language – this great enabler of our cognition.

Languages, as we commonly attribute them to human beings, evolved organically and in an unorganised manner, much as human beings themselves did. They started by taking rudimentary forms and then evolved with our general cultural, intellectual, and technological evolution. No one person or group sat down and defined any natural language as we know it today. Languages contain definitions of words and verbs and rules of grammar; but there can be many definitions of one word or one verb, and these definitions themselves rely on other definitions of words and verbs (to say nothing of the circularity of definitions), and there are almost always exceptions to the rules of grammar. As such, there always seems to be some degree of vagueness in the meaning of words, verbs, and sentences when we probe into them. Unlike logical languages, natural languages are as much living and evolving as our cultures and are an integral aspect of them. Words can point to objects, to phenomena we observe around us, to emotions, to concrete ideas, or to very abstract ideas and musings. Words can have their source in observation, in feeling, in thinking, in intuition, or in pragmatic needs. We can employ words for very definite, basic uses and objects. And we can employ words to try to relate to confused and undefined things, not knowing ourselves exactly what we are looking to express by them, or simply to fill a temporary hole or weakness in our current understanding of the world. Words can be borrowed from other languages and cultures, used according to their original use in the language from which they were taken, or used in a different manner, sometimes quite foreign to the origins of the word. And all of this evolves as we evolve, over time and across geographies, such that the same word can mean very different things in one place and time compared with another.
Even in the same place and time, there can be confusion, inexactness, and differences of meaning, not only in common social life but also in academic circles, which are supposedly more rigorous. God, Fate, Democracy, Right-wing, Left-wing: all are examples of this that we have given above.

Many instances of disagreement and misconception stem from the fact that we do not think through the words we use as much as we should, or that we assume that what we associate with a certain word is exactly what others associate with that same word. Worse, we sometimes divide into parties based on a certain position vis-à-vis a word; one would think at first that it is a division vis-à-vis what the word represents, but, in reality, when we ask for more details about the representation of the word, we quickly find so many differences in that representation that the only matter of substance left is a blind division vis-à-vis the word itself. Take Capitalism or Free Will; these are notable examples of words that have divided us into two camps for long periods of time, while the meanings of these words were evolving all the same. It seems that sometimes we like opposing each other more than understanding what it is we are opposed over, and words are another tool in this game.


Criticising the Hard Normative Stand in Philosophy

A historical idea has long existed that there is, or needs to be, a thought discipline that holds a monopoly on saying how the various things of the world, of Existence, and of Reality (including Existence and Reality themselves) ought to be, and on deciding which types of questions, problems, and desires ought to be treated by which particular thought or practical discipline. Naturally, many philosophers gave Philosophy this crown jewel of thought; philosophy was, and is still, seen by them as possessing the monopoly of the ‘ought-to-be’. This is what is called within the discipline the ‘normative aspect’ of philosophy. Kant, for example, is famous for his contribution to Normative Ethics, Aristotle for seeking normativity in both Logic and Metaphysics, and Descartes for seeking normativity in Epistemology. In all cases, despite great, and in many cases indispensable, contributions, all these renowned thinkers fell short of their initial ambitions. The urge towards normative approaches in Philosophy is not, however, only a historical, long-gone practice; it continues in more modern times with, for example, Habermas and his idealised thresholds (for legitimacy, for instance) in political theory and discursive agreement, or Popper and his several failed attempts at formalising his philosophy satisfactorily.

The dream of making Philosophy the driving normative discipline of Reality, Existence, and Knowledge has failed on multiple intellectual and practical accounts – we shall briefly talk about a few below.

On the intellectual front, David Hume, the eminent Scottish philosopher, whose ideas continue to find validity centuries later, was the first to formulate clearly the difficulty with normative approaches, in his famous ‘Is-Ought Problem’. Hume’s basic idea on the subject is simple: we observe the world around us as it is, while when we seek to create norms (which is what normative approaches seek to do), we are actually looking to talk about the world as it ought to be, whether from a moral, scientific, religious, political, or other perspective. For Hume, all of our ideas are based on our observations of the world as it is, and hence there is really no clear basis for this jump we make from observing the world to saying how it ought to be; pretty simple, and perplexing indeed.

Now for a practical account: historically, it is the other ‘lesser’ disciplines, as some might be inclined to call them, rather than Philosophy itself, that have contributed more to what lies at the core of the scope of normative approaches. For example, Physics has contributed more than Philosophy to understanding Cosmogony, Formal Logic more to understanding Epistemology, and the Cognitive Sciences more to understanding Phenomenology. Yet many philosophers still believe that Philosophy as a discipline is superior to the Natural Sciences or Mathematics, or that it has the right to define what these other disciplines should treat and in what manner.

And thirdly, an ontological account: since everything is interrelated and of the same fundamental nature, as I hold, and since all things are ultimately circular, all events are ultimately at par, and hence the idea of any discipline holding a monopoly over norms is a flawed one. There is no clear premise for why a series of events whose role is to analyse other events, as is the case with any discipline of thought, should have some sort of fundamental superiority or, for that matter, be more influential. It is also likely that, because of the circularity not only of the world of thought but also of the material world, all efforts towards developing detailed norms will remain ontologically inadequate (although existentially necessary for humans in some areas, such as Ethics).

I am not trying to attack Philosophy in general in this short article; rather, I am directing criticism towards a particular form of arrogance in a few philosophical circles that claim a monopoly over the norms of the world of Thought, of Reality, and of Existence. Philosophy does remain the most adequate generalist discipline for asking the right questions and for pointing specialists to some of the areas most worth focusing on. Philosophers can, and should, still attempt to hold together all the various thought disciplines in a whole that makes sense, no matter how much more difficult this has become of late with the advances of many specialised disciplines. The philosopher is a man or a woman who holds all the strings of human activity together; is torn between them; makes continuous efforts not to succumb one way or another; endures constant accusations of being wrong in what she or he does or says; and yet continuously perseveres. But centralising Thought does not mean monopolising it; in a spherical geometry there is no peak and no summit, only interlinked parts.

On a final note for the specialists: the only normative ‘power’ out there, if there is one, is in Mathematics, in particular in Set Theory, which is reducible to second-order logic. And even there, we still face the difficulty of ontological commitment, which we need not detail here.
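For readers curious what this reduction amounts to, here is a standard illustration (a textbook observation offered as a sketch, not part of the argument above): in second-order logic one may quantify over arbitrary properties, so whole axiom schemas of first-order set theory collapse into single axioms. The Separation schema, for instance, becomes the single sentence:

```latex
% Second-order Separation: a single axiom replacing the
% first-order schema, with F ranging over arbitrary properties.
\forall F \,\forall a \,\exists b \,\forall x \,
  \bigl( x \in b \;\leftrightarrow\; ( x \in a \wedge F(x) ) \bigr)
```

Formulated this way, set theory gains considerable determinacy (Zermelo showed that the models of second-order ZFC are fixed up to the height of their ordinal hierarchy), which is one source of the normative ‘power’ alluded to here; the cost, as noted, is the heavier ontological commitment carried by second-order quantification.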


How Important Concepts Can Disappear With Knowledge

There are many important concepts that we need to hold in order to make sense of the world around us; we give specific names to these concepts and we engage in important, deep discussions about them. These names refer in many cases to abstract concepts, and they serve as much needed ‘gap-fillers’ in our prevalent wording, reasoning, interpretation, and understanding of the world around us. These names therefore fill a pragmatic function in the context of us talking about, understanding, and consequently acting on the world around us.

But given the abstract nature of the concepts that hide behind the names, a little examination of the meaning these names hold for different people shows a great diversity of opinions and views. It is as if everybody agrees to use the same conceptual names while most in reality disagree on their exact meaning. The origins of these conceptual words are often uncertain, and the current meanings of these words are often a far cry from the original historical intentions behind them; in other words, the meanings behind conceptual words do evolve, and this evolution largely explains the differences in the actual thoughts behind the words (or, we could say, the differences in noema). Some conceptual names continue to be culturally transmitted over long periods of time. Such names become a cultural reality, a lingual and cultural tool, more so than the actual concepts first intended behind them. Let us be specific: Causality, God, Free Will, Mind, and Essence are all examples of such important conceptual words that are very widely used and ‘believed in’, but with great ongoing disagreement on their details.

As our knowledge advances in many of the details that hide behind such conceptual words, we continuously discover how shallow our definitions and uses of these words have been, and how unimportant and irrelevant some of these words become in our new state of knowledge. These names become cultural and historical artefacts, which we may continue to employ for convenience, but without substantial belief in them and without real epistemological value. The word Essence is an example: it held great philosophical and religious value for many centuries. Essence was an important concept in Greek philosophy and was perpetuated under different forms through the scholastic period and up until early modern times. Saint Thomas Aquinas, for example, was particularly preoccupied with the problem of essence in what concerns cannibalism. Today, the word Essence has ceased to occupy any serious position in modern philosophy. What is the reason behind this? Our advances in knowledge of the details of the world around us have made this word an unnecessary gap-filler to maintain. Almost all the key constituents of the word Essence, across its different possible meanings, were stripped from it with time and attached to other words that became more culturally predominant; in the process, the word Essence became void of any special meaning from an epistemological point of view. This dynamic is also happening with Causality, God, Free Will, and Mind. Much of what was encompassed by these words is being stripped out and given a more solid footing in other, newer concepts, both scientifically and culturally. The importance and meaning of particular concepts change with the evolution of our complicated web of knowledge.

The unfortunate part is that many still refuse to admit the historical fact of this evolution of important abstract words and concepts. Holding to old and obsolete views of the world, refusing knowledge and refined understanding, they continue to cling to these historical artefacts as if they were immutable and deified concepts; they try in any way possible to keep these words alive and relevant. There is no harm in continuing to use old conceptual words, as long as one is clearly aware of the actual epistemological value behind them and does not succumb to the illusion of some mystical reality that has no convincing basis. If I use the word Zeus in a fictional manner, it does not mean that I believe in Zeus or that the name and concept of Zeus are essential to my understanding of the world around me or to the validity of my knowledge system. The same goes for many old conceptual words.


Not Knowing Disturbs Us Deeply

We human beings seem to hate many things; but we venture to guess that, across cultures and ages, two things rank high on our list: we hate uncertainty, not being able to form some idea of the future; and we hate not having an explanation for why things happened the way they happened. We hate these two things so much that we are willing to accept mediocre interpretations and rationalisations, wrong theories, and even conspiracy theories, rather than admit that we have no idea which course events will take in the future, or that some dramatic events of the past happened for rather random reasons. This is all the more the case when such events touch us in deeply emotional ways (e.g. events of death, sickness, social crisis, or conflicts and wars). Not knowing everything, or at least not having some idea of the causal chain of events, disturbs us deeply; so deeply that many false theories, interpretations, and speculations continue to stubbornly infest our general reasoning despite scientific proof to the contrary. Moreover, many vocal individuals continue to take advantage of our longing to have answers to everything around us in order to promote and sustain false theories and speculations.

Here is what we are not saying in the above: we are not saying that everything is uncertain; we are not saying that there are no conspiracies in the world; we are not saying that we do not possess the rational power to understand and interpret most events around us successfully; we are not saying that some interpretations, despite being imperfect, cannot constitute a practical basis for some action; and we are not saying that we should hold an attitude of complete scepticism and paralysis. What we are rather saying is that our deep psychological need for answers and interpretations can often blind us to the objective merits or limitations of many of the theories we hold. Let us take an example: why are we all set to die, and what happens after we die? This type of question is important and hotly debated – we find in human history, across all of its cultures, an abundance of soothing theories and interpretations to answer it. The question is understandably important to any living being; and it is because we refuse to be satisfied with answers of the type “we simply do not know” or “death is an integral part of the living process” that we have come up, through history, with an enormous number of different (and often contradictory) interpretations. And while there is nothing wrong with having interpretations (while admitting that we may possibly be wrong about them), we find many cultures that hold so tightly and so blindly to their particular interpretations that they refuse to leave any room for possible negation. It is exactly this that we condemn.

Knowledge comes with varying degrees of certainty, and Science has a fair idea of the degree of certainty of each piece of knowledge it holds. Science also knows that some things can and do happen for random (or purely circumstantial) reasons. And Science has shown many questions to be simply false or invalid ones. There are also many things to which we have no answers today, even with a minimal degree of certainty; all reasonable people should be willing to admit this fact and not rush towards soothing but wrong interpretations just for the sake of ‘feeling better’.


Are We All Naive At Some Level?

“[…] a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectation […] a spontaneous urge to action rather than inaction […]” John Maynard Keynes.

We live in a very intricate and continuously surprising world. We constantly seek answers to our questions, and we often discover new things; when we do so, our old views of the world and of ourselves seem naive and silly. We often look at other animals and find them simple in their behaviour and reasoning, and we may even pity them for it. In our minds, we are superior because we are less naive. But when we make major discoveries or undergo a major revision in our thinking, we feel the same way towards ourselves; we see our past selves with almost the same aura of naivety with which we perceive seemingly simpler creatures.

Naivety is, most of the time, a definite weakness of cognition. We reject a romantic view that praises natural naivety in the name of some bedevilling of knowledge and progress. But does naivety lead to bad or sub-optimal behaviour one hundred per cent of the time? Ironically (and unfortunately), we have to say no… We all operate with some form of naivety, at some level and concerning some subjects. It can be in our political, social, or religious views; it can be in our human relationships with each other; for specialists, it can be in thinking their area of speciality superior to others; for business executives or investors, it can be in their economic reasoning or in a liking for seeing indefinite trends where there are none. Curiously, it is some of this spontaneous naivety that makes us do things we otherwise would not have done; and while, most of the time, little positive comes of these spontaneous actions, knowledge and progress can, in a few cases, ensue unintentionally. Did we not discover the New World through such a dynamic?

When it comes to the subject of naivety, we can think of two great dangers (at least): (1) not knowing or refusing to admit that each of us has some propensity to be naive in some particular area; and (2) letting ourselves be manipulated by others who know how to exploit particular naiveties we have.

Aren’t we all a bit as was said about the famous Don Quixote de la Mancha:

“But is it not a strange thing to see how readily this unhappy gentleman believes all these figments and lies, simply because they are in the style and manner of the absurdities of his books? […] apart from the silly things which this worthy gentleman says in connection with his craze, when other subjects are dealt with, he can discuss them in a perfectly rational manner, showing that his mind is quite clear and composed; so that, provided his chivalry is not touched upon, no one would take him to be anything but a man of thoroughly sound understanding.” Cervantes, Don Quixote


Individualism vs. Selflessness

In the realm of political and social affairs, we can likely regroup almost all inclinations across times and cultures into three categories: Individualism, Elitism, and Selflessness.

For the individualist tendency, every human being is an end in itself; for the elitist one, there is a certain class of individuals (the definition of which varies from one particular system to another) which has priority or privileges over the others and requires the others to labour for its benefit; and for the selfless tendency, it is actually the broader collectivity of humans, some state, or some institution that counts, more so than particular individuals. We find many examples of each in history:

  • Individualism includes thinkers like Descartes, Kant, Fichte, Berkeley, the Existentialists, Liberalism, Anarchism, and Protestantism.
  • Elitism includes aristocratic thinkers such as Aristotle and Nietzsche, Feudalism, some closed sects and religions, plutocracies, some forms of Capitalism, Scientism, and all forms of ethnic and racial segregation.
  • Selflessness (or holism) includes Pantheism, Buddhism, Spinoza, Hegel, Schopenhauer, Nationalism, Hobbes, Rousseau (politically though not morally; Rousseau is notorious for his contradictions), Catholicism, Islam, Marxism and Communism, and the Russian, Chinese, and many Asian cultures generally.

Today, we continue to witness a war of the titans between Individualism and Selflessness. Elitism continues to exist, but in more hidden and less declared manners, as it is commonly no longer ‘politically correct’ to assert in many developed societies. We can particularly observe a divide of ‘style’ between the West (individualism, particularly after the Reformation and especially after the failure of European nationalism and the growth of globalisation) and the East (a long history of collectivity, whether in the religious or the social sphere).

We are not passing judgement here on either genre as, candidly, neither is necessarily superior to the other; typically it all comes down to the details. There have been excesses and beauties in both Individualism and Selflessness.