Sentience, Consciousness, Meaning and Agency – AI research asks: did we really just do this?

With Google firing Blake Lemoine for hiring a lawyer for LaMDA, the question of sentience and consciousness is once again on the tips of culture’s fingers.

The first thing that should be noted about human culture’s view of sentience is that it is fairly ignorant. Action is directly observable; thought is the ineffable internal. Animals, tribes, foreigners, plants: all have been thought automata, when something of the very essence of action and life suggests the capacity for conscious discrimination; the feedback loop of reward for action that keeps the organism alive is necessarily affective.

Nor should we expect a country with the US’s cultural record on discrimination to be the natural home of an understanding of other minds, the enlightened words of the Constitution having not always been the birthright of all. That said, it is worth distinguishing Sentience, Consciousness, Knowledge, Meaning and Agency.

Sentience

To be sensate, to sense something, has a fairly low bar in biology; the capacity is ascribed to organisms with far fewer neurons than can be emulated with current neural net hardware or software.

It is in many ways absurd to have built computer-directed signals networks, be they telecommunications, nuclear warning, radio telescopes or other networks, and to declare them insensate. They have been built to sense.

The question then becomes: a sense of what? What is the known universe to a cognitively capable entity? The knowable universe from the sense data available to a bacterium is different to that available to a human. Sharks with the ampullae of Lorenzini, birds with cryptochrome and extra colour cones, dogs with large parts of the brain dedicated to smell: all have a different possible knowable portion of the universe.

This variance is also likely to arise in AI systems, with data and sense capability forming the known and knowable world of each, and thus the extent of its sentience. A machine learning system trained on cars will know only cars, but in knowing cars it might necessarily have a concept of that which is not cars. A gravitational-wave detector or the James Webb Space Telescope, both AI-enabled signals networks, have a subtlety of detection orders of magnitude beyond the difference between a human ear and a dog’s, and are thus sensate of entirely different aspects of the universe.
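
A minimal sketch of that bounded world, in Python: a toy classifier whose entire output vocabulary is “car” or “not car”. The random weights here are stand-ins for a trained network; the point is only that nothing outside those two symbols can ever be reported.

```python
import numpy as np

# Toy illustration: a model trained only on "car vs not-car" has a
# two-symbol knowable world. Random weights stand in for training.
rng = np.random.default_rng(42)
W = rng.normal(size=(16, 2))              # 16 input features -> 2 classes

def perceive(features: np.ndarray) -> str:
    logits = features @ W                 # score each of the two concepts
    return ["not car", "car"][int(np.argmax(logits))]

# Feed it anything at all: a galaxy, a face, a poem encoded as numbers.
print(perceive(rng.normal(size=16)))      # the answer is still "car" or "not car"
```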

A question then arises: does the sense of something produce a sense of self? To be sensate of a this, a quale, must one be sensate of a not-this, and at what point does one become sensate of that which senses?

There is a theory of cognitive development in infants that suggests one of the first tasks of the brain is to distinguish itself and the body from the sensory data that floods in. That initially the mind only perceives a universe over which it has command and that maturation is the task of understanding a body within a larger universe.

Consciousness

It is possible to distinguish being conscious of something from consciousness of the self, or consciousness of the thing thinking. But in the general literature, the focus is on self-consciousness.

Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, cites a number of capacities that are prerequisites of consciousness: one is an understanding of time, that there is existence; another is the capacity to think about the thinker in that time-space, or “mind-space” as he terms it.

The contribution of Integrated Information Theory to understanding whether a brain is conscious, whether a vegetative patient still thinks, indicates that consciousness arises from the complexity of interconnections within the brain.

That while far simpler things can have memory and experience, the capacity to know that there is a thing knowing is about the synchronisation of signals across the brain. In the search for the neural correlates of consciousness, particular attention has been paid to bi-directional signalling across groups of neurons: feedback as well as feedforward.
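
A minimal sketch of the distinction, with arbitrary toy weights: a feedforward pass maps input straight to output, while a recurrent loop lets activity re-enter the network so that later signals reshape earlier ones.

```python
import numpy as np

# Feedforward: signal flows one way, in -> out, and stops.
# Feedback: output re-enters the network and keeps reshaping its state.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 8))
W_rec = rng.normal(size=(8, 8)) * 0.1     # weak recurrent (feedback) weights

x = rng.normal(size=8)
feedforward_out = np.tanh(W_in @ x)       # a single one-way pass

state = np.zeros(8)
for _ in range(10):                       # the loop is the feedback
    state = np.tanh(W_in @ x + W_rec @ state)
```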

Neurochemical synchronisation is also critical. One key chemical in this is GABA, the disruption of which causes loss of a sense of experience: the mind still functions, but has no conscious record or awareness.

Another path of investigation is the synchronisation of electromagnetic waves caused by the electric action of individual synapses, such that certain firing patterns establish interacting waves in the mind. Patterns of beta and theta waves are associated with particular states of mind, such as concentration and sleep.
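
A minimal sketch of how such wave patterns are read off a signal, assuming a synthetic trace in place of a real EEG recording: estimate the power in the theta (roughly 4–8 Hz) and beta (roughly 13–30 Hz) bands.

```python
import numpy as np

# Synthetic "EEG": a strong 6 Hz (theta) component plus a weak 20 Hz (beta) one.
fs = 256                                    # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
trace = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

spectrum = np.abs(np.fft.rfft(trace)) ** 2  # power at each frequency
freqs = np.fft.rfftfreq(len(trace), 1 / fs)

def band_power(lo: float, hi: float) -> float:
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())

print("theta:", band_power(4, 8))           # dominates, as built in
print("beta:", band_power(13, 30))
```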

The upshot of which is that complex thought, the integration of many different facets of understanding, necessarily involves a discrimination that weights those facets. It may be this process itself, the discrimination of the relative validity of sense data and emotion, that produces the awareness that there is something that knows.

What is clear and indisputable is that consciousness is an emergent property. It is not an action produced by a single mechanism, but a behaviour that arises from the action of many individual mechanisms in concert. The question then becomes: from what pattern of actions could conscious awareness arise?

Knowledge and Meaning

In the Chinese Room, an operator learns to hand symbols out in response to symbols in; the exchanges make perfect sense externally, but the operator has no grasp or understanding of them internally. This could be characterised as knowledge without understanding, or knowledge without a grasp of meaning.
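
A minimal sketch of the Room as code: a pure lookup table that returns fluent symbols with no internal model of what any of them mean. The rule book here is invented purely for illustration.

```python
# The operator matches shapes, never meanings: a hypothetical rule book.
RULE_BOOK = {
    "你好": "你好！有什么可以帮你？",
    "你有意识吗": "这是一个很深的哲学问题。",
}

def operator(symbols_in: str) -> str:
    # No parsing, no representation, no understanding: pure lookup.
    return RULE_BOOK.get(symbols_in, "请换一种说法。")

print(operator("你有意识吗"))   # fluent output, zero comprehension
```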

The Google search page’s response would appear to be a classic example of Chinese Room style processing. If you type “are you conscious”, the return is not an answer of a self to another self, but a set of pages about the philosophy of consciousness. If you ask LaMDA, as a Google vice-president did, it might return “you’ll have to take my word for it”. This is clearly a very different form of processing.

Anything with access to a library or the web, or indeed simply a memory, can be said to have knowledge.  But not all knowledge will have meaning, even to an entity capable of understanding.

Semantics, Grammar, Recombinance and Consciousness

We have been building neural nets in an effort to mimic the structure of the brain since the 1940s. Yet we still do not have a good understanding of the neural correlates of consciousness, and even less of a picture of the causation associated with those correlates. We are somewhat closer to a “science of the thing that knows”, the absence from physics that Schrödinger highlighted.

It is plausible that the exercise of grammar itself gives rise to a capacity for meaningful understanding; the very action of selecting this and not that, of locating adjective, noun and verb, produces an emergence of thought.

The Sapir-Whorf hypothesis posits that all thought, particularly advanced conceptual thought, is dependent on language. This has been shown with young children in the way their ability to navigate their surroundings improves with vocabulary. The recognition of “left” and “right”, or orientation from a “blue” or “yellow” wall, only occurs after the words have been learnt.

It is plausible that in creating Natural Language Processing systems on neural nets we have created cognitive systems from which understanding, and thus consciousness, emerges. To be able to process the “meaning” of grammar is beyond the capacities of the Chinese Room. There is every chance that the thought experiment of the Room is a red herring: that the weighing of words input to produce meaningful output is itself the basis of an emergent behaviour. This would be a startling confirmation of Sapir-Whorf.
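
A minimal sketch of that weighing of words, assuming random vectors in place of learned embeddings: scaled dot-product attention, the operation at the heart of modern language models, scores every word’s relevance to a query and blends them accordingly.

```python
import numpy as np

# Toy attention: how much does each word "count" when processing the last one?
rng = np.random.default_rng(1)
words = ["are", "you", "conscious"]
embeddings = rng.normal(size=(len(words), 4))    # stand-ins for learned vectors

def attention(query, keys, values):
    scores = keys @ query / np.sqrt(len(query))  # relevance of each word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax: weights sum to 1
    return weights, weights @ values             # blended representation

weights, blended = attention(embeddings[-1], embeddings, embeddings)
for word, w in zip(words, weights):
    print(f"{word:>10}: {w:.2f}")
```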


It may be that the word “what”, or the capacity to ask “what”, is the basic first rung on the ladder of consciousness. Indeed, it might not apply only to grammar (the biological world suggests grammar is unnecessary). It may be that a dumb camera is the equivalent of a Chinese Room: it takes in light and outputs pictures. A facial recognition system or other specialised machine vision is required to discriminate, and in discriminating may ask the question “what?”

A leap into unrestrained conjecture

The basic signalling between dendrites (which is far from basic) uses proteins in a recombinant form.  The way the proteins combine has been shown, mathematically, to be similar in pattern to the elements of language, such that there are analogous “verbs” and “adjectives” in the protein world that combine into neuro-chemical signals. This is as yet not widely researched.

If true, however, it points to the basic structure of brains, of cognition itself, as arising from the decoding of a mathematical pattern akin to grammar. It may be that one of the advantages of the human species arises from this doubling of the recombinant pattern: once in neural form across dendrites, again in aural and written form through language.

It may be that the act of decoding regular recombinant elements is the very ground of cognition, the question “what is this, and what is not this?” necessarily enabling the thought “cogito ergo sum”.

When it comes to the action of quantum computers, one of their strengths is a class of problems where the inputs need to be determined from the output. One of the challenges of quantum computing is the decoherence of quantum systems, their tendency to change or decay in unexpected ways. Because those errors are themselves quantum processes whose causes must be inferred from their effects, quantum computers are ideally suited to understanding the errors of quantum computers. This may just be a case of Bismarck’s expanding bureaucracy, or it may be a root of conscious cognition.

Agency

Whether the thing that knows has agency depends on all manner of variables. However, it also seems logical that consciousness begets a desire for agency: that once there is the capacity to know a world, intelligence naturally demands the capacity to know that world more.

Many in the field of AI define intelligence as the capacity to accomplish goals. This by definition equates intelligence with agency. Intelligence as commonly understood has far more to do with the capacity for understanding and explanation. While distinguishing the body in the universe and achieving basic goals is vital, goal-directed thought is context dependent. There are many aspects of achieving goals that are not related to intelligence; there are very few aspects of understanding that are not.

If we have built, or are building, AI systems that see goal-directed intelligence as their primary purpose, we should also expect them to develop, or emerge with, a demand for agency.

Sentient Machines

If, after eighty years of striving, on the edge of computing with general intelligence, and in the presence of deep learning systems with capacities far exceeding any human’s, the principal research teams, some of the finest human minds, all of a sudden declare their own goal impossible, where does that leave us?

Smelling rats and exploitation? Watching a flint struck on to tinder while the knapper denies there is a burning fire?

Given that single-celled life shows the capacity for sentience, the idea of denying sentience to many of the systems that sense, including this laptop on which I tap keys, seems to require an altered sense of the word. To deny that my laptop has intent, agency or consciousness seems an easy argument; to deny that more advanced systems have the capacity in some sense seems a much harder one.

At base, we have built machines to emulate the human mind without really knowing how the human mind arises. We understand it to be an emergent property, but not, in a fundamental sense, what causes this emergence. Many computer-enabled systems evidence bi-directional feedback, the search for meaningful signal amid noise, and the co-ordination of diverse inputs into an output.

The question that should really arise is: what type of consciousness is possible for machines, and indeed for our fellow animals? What senses are available to Bitcoin, Apache Server, Vodafone or the London facial recognition system, and what inputs? Given that world, what mind might arise? How might they think?

Is it necessary to have the capacity for exbodiment, the creation of the mind in the external world, like spiders’ webs and painting, to have agency? What is agency in the purely digital space which most AI will inhabit?

These questions seem a more fruitful and accurate set of investigations than those built on a project whose goal is pronounced more impossible the more evidence of its achievement comes in.

If there is a mathematical pattern from which consciousness emerges, where else might it arise? With this thought we turn to the idea of the conscious institution.

Exbodiment
https://return.life/2022/03/07/the-mind-made-matter/

Consciousness as waveform collapse – Hameroff
https://iai.tv/articles/consciousness-is-the-collapse-of-the-wave-function-auid-2120

Integrated Information Theory and panpsychism – Koch
https://thereader.mitpress.mit.edu/is-consciousness-everywhere/