2016-12-16

Meaning, quantum process and inscrutability

The analogy between meaning and measurement in quantum mechanics has been on my mind for quite a while, as attested by a couple of posts from the early years of this blog. I'm therefore walking an old path here, but with a couple of new things in mind, including a quite radical shift in my viewpoint on signification since 2005, and the current lively debate around the inscrutability of machine learning algorithms. The following points sum up where I stand today.

Meaning is a process

The Web has been a wide-scale experiment in applied semantics and, more and more, in applied semiotics. Our interaction with the Web relies on signs, the primordial and main ones being those weird identifiers called URIs. For years I have, with many others, struggled with the thorny issue of what those URIs actually identify, or denote, or mean, or represent, and spent hours in endless debates with Topic Maps and Semantic Web people trying to figure out the difference or similarity between the subjects and topics of the former and the resources of the latter. Eventually fed up with those intractable ontological issues, I decided to remain resolutely agnostic about them and to focus on the dynamic aspects.

To the question What does it mean? I now answer It means what it does. In other words, the meaning of a URI on the Web is a process: whatever happens when you use it. This process can be technically described and tracked. It includes query processing, client-server dialogue, content negotiation and federation, distributed computing, and more and more artificial intelligence. But from the end-user viewpoint, URIs are now hidden under the hood; the interface with the Web uses natural language signs like words and sentences, written or spoken, and more and more those application icons on the touchscreens of our mobile devices, simple signs bringing us back to hieroglyphs and magic symbols. Meaning on the Web is the (more and more complex) processing of (more and more simple) signs.
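As a toy illustration of this processual view of meaning, consider content negotiation: the same sign (a URI) resolves to different representations depending on the request context. The sketch below is a deliberate caricature, with an invented URI and a registry reduced to a Python dictionary; real content negotiation on the Web involves quality factors, language, encoding and much more.

```python
# Toy sketch: the "meaning" of a URI as a process, not a static referent.
# The same identifier yields different outcomes depending on context,
# here reduced to the Accept header of the request.

REPRESENTATIONS = {
    "http://example.org/resource/42": {
        "text/html": "<h1>Resource 42</h1>",
        "application/json": '{"id": 42}',
    }
}

def resolve(uri, accept="text/html"):
    """Return (media_type, body) negotiated for this request context."""
    available = REPRESENTATIONS.get(uri, {})
    if not available:
        return None, "404 Not Found"
    if accept in available:
        return accept, available[accept]
    # Fall back to the first representation the server offers
    return next(iter(available.items()))

# Two uses of the same sign, two different "meanings":
print(resolve("http://example.org/resource/42", "application/json"))
print(resolve("http://example.org/resource/42", "text/html"))
```

The point of the sketch is that what the URI "means" is not stored anywhere as such; it is whatever the resolution process produces at the moment of use.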

Is this conception of meaning specific to the Web? If one looks closely, the answer is no. The meaning of (often simple) signs outside the Web is also the result of an (often complex) process. Whatever its nature, a sign means nothing outside a process of signification. The Web has simply given us an opportunity to explore this reality in depth, because we have engineered those processes, whereas outside the Web those processes are given: we use them without question on a daily basis, and we are not aware of their complexity. The more complex the Web becomes, and the simpler the signs we use to interact with it, the closer it seems to our "natural" (read: pre-Web) semiotic activity.

Meaning process is similar to quantum process

The evolution of the Web also tackles the difficult issue of meaning in context. The process triggered by the use of a sign is almost never the same twice. The time of the query, the nature and state of your client device, the state of the network, your user preferences, interaction history and access rights, the content negotiation ... all make each URI resolution a unique event. Among all possible meanings, only one is realized.

Here comes the analogy with quantum mechanics. Among all possible states of a system, whose probability distribution might be known with great accuracy, only one is realized in any quantum event. Before the event, the system is described as a superposition of all its possible states. The reduction of this pack of possibles to a single realization is technically called the collapse of the wave function.

In the same way, before you send a query using a sign, before you click your email application icon, everything is possible. You might have mail or not. Your spam filter might have trashed an important contract. Whatever happens collapses all possible states but one. This collapse process defines the meaning of the sign at the moment you use it.
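The analogy can be caricatured in a few lines of Python. The states and probabilities below are of course invented for illustration: before the "measurement" all outcomes coexist as a probability distribution; the event samples exactly one of them.

```python
import random

# A "superposition" of what clicking the email icon might mean,
# with made-up probabilities for each possible outcome.
STATES = {
    "empty inbox": 0.50,
    "new mail": 0.45,
    "important contract trashed by the spam filter": 0.05,
}

def collapse(states, rng=None):
    """Realize exactly one outcome out of all possible ones."""
    rng = rng or random.Random()
    outcomes = list(states)
    weights = [states[o] for o in outcomes]
    # Before this call, every outcome is possible; after it, one remains.
    return rng.choices(outcomes, weights=weights, k=1)[0]

meaning = collapse(STATES)
print(meaning)  # one of the three outcomes above
```

Needless to say, this is only a metaphor: quantum collapse is not mere probabilistic sampling, but the sketch captures the one point the analogy rests on, the reduction of many possibles to a single realized event.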

Likewise, in natural conversation you might say "Will you pass me the bowl?", and of all the possible meanings of "bowl" in your interlocutor's mind, all collapse to zero but the one indicating the only bowl sitting on the kitchen table in front of you.

Both meaning and quantum process are inscrutable, and it's OK

The inscrutability of reference was discussed in depth by Quine in Word and Object (1960). Quine wrote mostly before our world of pervasive information networks, before the Web, and although he died on the eve of the 21st century, he did not, unless I have missed something, write anything about the Web. Which is too bad, because "Word and Object" in the framework of the Web, and singularly of the Semantic Web, translates easily into "URI and Resource"; but maybe Quine was a bit too old in the early days of the Web to apply his theories to this new and exciting field.

At any rate, Quine did not address reference in the dynamic aspect we discuss here. Reference is inscrutable because each time a sign is used it triggers a very complex and (either in theory or in practice) inscrutable process. In the natural human interpretation of signs, this meaning process involves several parts of our brains and perception/action systems in ways we barely begin to understand. The signs we send to the network are and will be processed in ever more complex and practically inscrutable ways, such as the machine learning algorithms we already see implemented in chatbots.

Quantum processes have been known for about a century to be inscrutable, although some of the famous founders of quantum mechanics did not like this frontal attack on determinism at the very heart of the hardest of all sciences. Albert Einstein, among others, was a fierce opponent of the probabilistic view of the world defended by the orthodox interpretation of quantum mechanics, and spent much time and energy defending, without success, some "hidden variable" theory. Inscrutability was here to stay in physics. It seems also here to stay in semiotics, and in information systems. This is a singular convergence, which certainly deserves to be further considered and explored.

[Further reading might include works by Professor Peter Bruza (Queensland University of Technology, Brisbane, Australia) such as Quantum models of cognition and decision or Quantum collapse in semantic space: interpreting natural language argumentation.]

2016-11-18

The right tension of links

In 1990, at the dawn of the Web, Michel Serres published Le Contrat Naturel (later translated into English as The Natural Contract). In this book the philosopher makes a strong and poetic evocation of those collective ventures where contracts are materialized by cords, lines and ropes, such as sailing and climbing. Those lines link people not only to each other, but to their apparatus (sails, winches, harnesses and spikes) and to the harsh natural elements with which they are engaged (wind and waves, ice and rocks). On the high seas as in the high mountains, to ensure the cohesion and security of the team, the lines need to be tightened. And, adds Serres, this tightening is not only a safeguard; it's also a condition for the line to convey information, in a way more immediately efficient than language in situations where you can't afford delays in assessing the situation and deciding. If the line is too slack, you do not feel the sail and the wind, you lose connection with your climbing mate. On the other hand, excessive tension means opposition and a risk of breaking the line, and being tightly connected must not impede movement. Michel Serres does not mention martial arts, but in his excellent "guide for beginners" Aikido From the Inside Out, Howard Bornstein has similar thoughts in the chapter dedicated to connection. Connection has to be maintained at just the right level of tension, by feeling what he calls the point of first resistance.
When you connect like this, you become one with your partner in a very real, experiential way. When you move, your partner moves, at the same time and in the same direction. You are really one, in terms of movement. Your experience of movement is basically the same as if you were moving entirely by yourself.
Of course, understanding those general principles in theory will not make you an experienced sailor, climber or martial artist. You will have to practice and practice to acquire the quality of touch that lets you keep the lines at the right tension, keeping everyone safe and giving you this wonderful feeling of being one with your teammates, partners, and the world around you.

Our online experience should abide by the same rules. All the links we are weaving should be of the same quality as those of sailors, climbers and martial artists, enabling us to move together. In the stormy events we are facing, we need more than ever to reduce the slack in our connections.

2016-10-19

More things in heaven and earth

Horatio : 
O day and night, but this is wondrous strange!

Hamlet : 
And therefore as a stranger give it welcome.
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

Horatio would certainly be as bewildered as we are today by the ever-growing number and diversity of things that modern scientific investigation keeps discovering at a steady pace. A recurrent motto in the science papers and articles I have stumbled upon lately is more than expected, as the following short review illustrates, traveling outwards from earth to heaven.

New species, both living and fossil, are discovered almost daily in every corner of our planet, from the soil of our backyards to the most unlikely and remote places, and more and more studies suggest there are far more to discover than we have found already. But the number of living things might be dangerously challenged by the growing number of artificial ones, products of our frantic industry cluttering our homes, backyards, cities and, eventually, landfills.

Populated as it is, our small planet is itself just a tiny thing in the universe, among a growing number of siblings. The number and variety of bodies in the Solar System, as well as the distances at which we can expect to find them, have been growing beyond expectations. Closer to us, a seven-year survey of impacts on the Moon has yielded more events than expected from previous models of the distribution of small bodies in the inner Solar System. Images of the solar atmosphere by the SOHO coronagraph have yielded an impressive number of spectacular sungrazing comets. And missions to the planets have unveiled a wealth of amazing landscapes, bolstering hopes of discovering life on some of them.

Beyond the exploration of our home stellar system, the discovery of thousands of exoplanets did not come as a real surprise (our star being an exception would have been a big one), but there again we begin to discover more than expected, from an earth-sized planet around the star next door to improbable configurations such as planets orbiting binary stars. Moreover, free-floating or so-called rogue planets, not tied to any specific star, are certainly cruising throughout our galaxy, and although very few of them have so far been actually detected, due to the extreme difficulty of such observations, some studies suggest they may outnumber the "regular" planets, those orbiting a star. Regarding stars themselves, the most recent catalog contains over one billion of them, which is less than 1% of the estimated total star population of our Milky Way galaxy, while new studies indicate that the number of galaxies in the observable universe is at least one order of magnitude higher than previously thought. Even exotic things such as merging black holes, whose detection is now possible thanks to the transient ripples they create in space-time (aka gravitational waves), appear to be more frequent than expected. And the universe certainly has more in store, including the infamous missing mass, the dark matter whose nature remains unknown.

The sheer number of objects unfolding in the depths of space and time is well beyond the grasp of human imagination and cataloguing power, not to mention philosophy. But fortunately the modern Horatio gets a little help from his friends, the machines. The overwhelming tasks of data acquisition, gathering and consolidation, identification, classification and cataloguing are now more and more delegated to machines. Artificial intelligence, and singularly machine learning technology, is beginning to be applied to tasks such as classifying galaxies or transient events. Using such black-box systems for scientific tasks stumbles again on the issues of inscrutability which we addressed in the previous post. Scientific enquiry is a very singular endeavour where "whatever works" is not easily accepted, and the use of inscrutable information systems can arguably be considered a non-starter.

There are indeed more and more things in heaven and earth that we know of, and we are more and more eager to accept the unknown ones we discover every day. But the ones our poor imagination might be forever unable to fathom are those new ghosts haunting our intelligent machines. Are we ready to welcome those strangers?

[Edited, following +carey g. butler's comments, to strike through intelligent above. Let me remain agnostic about whether machine learning systems (or whatever systems come next) are intelligent or not, because I don't know what intelligent means exactly, be it natural or artificial. The "ghostly" point here is inscrutability.]

2016-10-13

I trust you because I don't know why

The rapid and widespread development of neural networks and deep learning systems is triggering many debates and interrogations, both practical and conceptual. Among the various features of such systems, the most debated are certainly inscrutability and fallibility. A deep learning system builds up knowledge and expertise, as natural intelligence does, by accumulating experience over a great number of situations. It does better and better with time. But the drawback of this approach is that you can't open the box to understand how it achieves its expertise, as you would with a classical step-by-step algorithm (inscrutability), and the expertise is not foolproof: it's bound to fail from time to time (fallibility). I've written on some philosophical aspects of those issues, and how they relate to ancient Chinese philosophy (in French here).

A recent article in Nature entitled "Can we open the black box of AI?" presents a very good review of those issues. And the bottom line of this article confirms my opinion that either all this debate is moot, or that it is not linked to this specific technology, nor even to any kind of technology. The whole debate is about whether we can trust something we don't understand and which is, moreover, bound to fail at some point. This seems to fly in the face of centuries of science and technology development, all based on understanding and control.

Do we control and understand everything we trust? Or more exactly, do we need to understand and control before we trust? Most of the time, no. As children, we trust our parents and the adult world to behave properly without understanding the whys and hows of this behavior. And if, growing up, we start questioning those whys and hows, it might happen that for some reason we lose that trust. When I trust a friend to achieve what she promised, I won't, or at least I should not, try to control and check whether she will do it, and how. Trust, in fact, is exactly the opposite of control. You trust because you can't afford to check, or lack the technical or conceptual tools to do so, or simply believe it's useless, counter-productive or plain rude to understand and control.
That line of thought applies to simpler things than people. If I cross a bridge over a river, I don't check, and don't understand, most of the time, how it's built. I begin to check only if for some reason it seems poorly built, or rotten, looking like no one has used it for ages. You trust the food you eat because you trust your provider; you generally don't check the food chain again and again. You start checking when you suspect this chain presents some serious point of failure. It's not check before trusting; it's check because for some reason you don't trust anymore. The other way round is called paranoia.
Most of the time, you trust things to work safely as expected because so far they mostly did. Based on experience, not on a logical analysis of how they work. This includes, and actually begins with, your own body and brain. Looking further at the world around you, you discover black boxes everywhere, and it's all right. Starting to check and control how they work is likely to lead you into some infinite recursion of effects and causes, and you will either reasonably stop at some point, saying "well, it's gonna be all right", or spend the rest of your life lost in metaphysical and ontological mist, and in fear of any action.
Let's face it: we trust before and without understanding and controlling. Every second of every day. And most of the time it's OK. Until it fails, at some point. We know that it will. We trust our body and brain in order to live, although we know they are bound to break down some day. We are aware that the things and people we trust are bound to fail once in a while. That's just how life goes. Parents have a second of distraction and a child dies crossing the street. Friends are stuck in a traffic jam, don't show up on time and miss their flight; bridges collapse in sudden earthquakes, hard drives break down, light bulbs explode, lovers betray each other ...
Despite our awareness of such risks of failure, we keep trusting, and call this hope. Without trust we lose hope, and fall into depression and despair. This is a basic existential choice: trust and live, or try to control and understand everything, demand total security, and despair because you can't find it. We trust each other although, and actually because, we don't know why. And knowing that each of us will eventually fail some day, if only once at that ultimate individual failure point called death, should make each of us more prone to forgiveness.

Let me borrow these final words from the brand-new and unexpected Nobel laureate in Literature

Trust yourself
Trust yourself to do the things that only you know best
...
Trust yourself
And look not for answers where no answers can be found
...

2016-08-30

Immortality, a false good idea

Immortality is trendy. According to some so-called "transhumanists", it is the promise of artificial intelligence in the short or medium term, at the very latest before the end of the 21st century. Considering the current advances in this field, we are bound to see amazing achievements which will shake our very notions of identity (what I am) and humanity (what we are). If I can transfer, one piece after another, neuron after neuron, organ after organ, each and every element which makes up my identity into a human or machine clone of myself, supposing this is sound in theory and doable in practice, will this duplicate of myself still be myself? The same one? Another one? And if I make several clones, which one will be the "true" one? Do such questions make any sense at all? All this really looks like just another high-tech version of the Ship of Theseus, and our transhumanists provide no better answers than the ancient philosophers to the difficult questions about permanence and identity this old story has been posing for more than two thousand years.
None of those dreamers seems to provide a clear idea of how this immortality is supposed to be lived in practice, if ever we achieve it. A never-ending old age? Not really a happy prospect! No, to be sure, immortality is only worth it if it comes with eternal youth! And even so, being alone in this condition, and seeing everyone else grow old and die, friends, family, my children and their children, does that not amount to buying an eternity of sorrow? Not sure how long one could stand it. But wait, don't worry, our transhumanists will claim, this is no problem, because just everybody will be immortal! Everybody? You mean every single one of the 10 billion people expected to be living by 2100? Or only a very small minority of wealthy happy few? But let's assume the (highly unlikely) prospect of generalized immortality by 2050. In that case it will not be 10 but 15 billion immortal people at the end of the century, if natality does not abate. That's clearly not sustainable. But maybe when everyone is immortal there will be no need to have children anymore, and maybe at some point it will even be forbidden, due to shrinking resources. Instead of seeing your children die, as in the first scenario, you will not see children at all anymore. Not sure which prospect is the worse!
Either way, alone or all together, immortality is definitely not a good idea. And if it were, life would certainly have invented and adopted it long ago. But for billions of years, the evolution and resilience of life on this planet despite all kinds of cataclysms (the latest being humanity itself) has been based on a completely different strategy. For a species to survive and evolve, individual beings have to die and be replaced by fresh ones, and for life itself to continue, species have to evolve and eventually disappear, replaced by ones more fit to changing conditions.
So let's forget about actual immortality. We have many technical means to record and keep alive for as long as possible the memory of those who are gone, if they deserved it. To our transhumanists I would suggest simply making their lives something worth remembering. It's a proven recipe for the only kind of immortality worth having, the one living in our memories.

[This post is available in French here]

2016-03-15

Handwriting questions and answers

Why stick to handwriting? 
It's so painful and slow! 


There are so many efficient technical ways to write and communicate now.


It might be good for the museum and art school, for poetry and diaries. 
But why should I bother?


My handwriting is so ugly anyway. 
Why should I show it at all and why should I make others suffer from deciphering it?


But it shows too much about me ... I don't want it to be analyzed by graphologists.

In other words ...

2016-03-13

In praise of handwriting

This started with a post shared by +Teodora Petkova suggesting that we share handwriting. I found the idea cool, so I started a Google+ collection. For the record, here is today's contribution - complete with a spelling mistake (thousands of times)


2016-01-14

Otherwise said in French

With the new year, I have started a kind of mirror of this blog in French, a long overdue return to my native language. I hope some readers of in other words will be fluent enough in French to also make sense of, and hopefully enjoy, those choses autrement dites. The first posts are listed and linked below, each with a short abstract.
  • Toute chose commence par un trait on Shitao, the unity of painting, calligraphy and poetry in classical Chinese culture, and how the continuum of nature is divided into things by the single brushstroke. 
  • Cosmographie en orange et bleu a "just so story" about the separation of heavens and earth as seen and described by the first ontologist in the first days, and what happened to him on the seventh day.
  • L'ontologiste sur le rivage des choses on the illusion of so-called ontologies (in the modern sense of the term) thinking they have said what things are, when they only define how things differ from each other.
To be continued ... stay tuned!

2016-01-05

Desperately seeking the next scientific revolution

If you still believe the ambient narrative about the accelerating pace of scientific and technological progress, it's time to read You Call this Progress? on Tom Murphy's excellent blog Do the Math. I'm a bit older than the author, just enough to have seen a few last but not least scientific achievements of the past century happen between my birth and his. The paper by Watson & Crick on the structure of DNA was published in Nature a few days after I was born. My childhood saw the discovery of the cosmic microwave background and the general acceptance of the Big Bang theory, and the experimental confirmation and acceptance of plate tectonics. While I was a student, the standard model of microphysics was completed. Meanwhile chaos theory, whose mathematical premises had been discovered by Poincaré at the very beginning of the century, was setting limits on the predictability of natural systems, even under deterministic laws.

This set of discoveries was somehow the grand finale of a golden age of scientific revolutions which contributed to our current vision of the world, starting in the 19th century with thermodynamics, the theory of the evolution of species, the foundation of microbiology and the unification of electromagnetism, followed at the beginning of the 20th century by relativity and quantum mechanics, the two pillars of our current understanding of microphysics and cosmology, from energy production and nucleosynthesis in stars to the structure of galaxies and the visible universe at large. Put together, those revolutions, spanning about 150 years from 1825 to 1975, set the basis for the mainstream scientific narrative, giving an awesome but broadly consistent (if you don't drill too much into the details, see below) account of the history of our universe, from the Big Bang to the formation and evolution of galaxies, stars and planets, our small Earth and life at its surface, bacteria, dinosaurs and you and me. A narrative we've come to like and make ours thanks to excellent popularization. We like being children of the stars, and wondering, looking at the night sky, if we are the only ones.

As Tom Murphy clearly argues, this narrative has not substantially changed in 40 years, and has not been seriously challenged by further discoveries. Many details of the story have been clarified, thanks to improved computing power, data acquisition and space exploration. We discovered thousands of exoplanets as soon as we had the technical ability to detect them, but that did not come as a surprise; in fact, what would have been really disturbing would have been not to discover any. The same lack of surprise greeted gravitational lenses, first observed in 1979 but predicted by general relativity. And no new unexpected particle has been discovered, despite the billions of dollars dedicated to the Large Hadron Collider, the largest experimental infrastructure ever built.

Could that mean that the golden age of scientific revolutions is really behind us, and that all we have to do in the future is keep building on top of it an apparently unbounded number of technological applications? In other words, that no new radical paradigm shift, similar to those of the 1825-1975 period, is likely to happen? Before making such a bold prediction, it would be wise to remember those who famously proved wrong in the past when claiming that there was nothing new left to be discovered.

Actually, major issues already known by 1975 are still open. In physics, the unification of interactions needs to resolve strong inconsistencies between relativity and quantum theory, an issue with which Albert Einstein himself struggled until his death, not to mention the mysterious dark matter and dark energy needed by theory to account for the accelerated expansion of the universe. The latter is actually one of the rare important and unexpected discoveries of the end of the 20th century. In the natural sciences, the process by which life appeared on Earth has yet to be clarified, as does the correlative issue of the existence of extraterrestrial life.

The number of scientists and scientific publications since 1975 has kept growing exponentially, as has the power of data acquisition, storage and computing technology. Yet no result has emerged comparable in importance, for our understanding of the universe, to what Galileo discovered in the single year 1610 simply by turning the first telescope towards the Moon, Venus and Jupiter. The general pattern of science and technology evolution in the past has been that improved technology and instrumentation yield new results pushing towards theoretical revolutions and paradigm shifts. But strangely enough, the unprecedented explosion of technologies over the past half-century has produced nothing of the kind.

Is it really so? Some scientists claim that there actually is a revolution going on, but that, as usual, the mainstream science establishment is rejecting it. This is for example the position of Rupert Sheldrake in his 2012 article The New Scientific Revolution. Indeed, the theories Sheldrake defends, such as Morphic Resonance and Morphic Fields, are truly disruptive and alluring, but refuted as non-scientific by the majority of his peers. I'm not a biologist, so I won't venture into this debate, and will let readers make up their own minds about it.