Can digital socialism save neo-classical economics? And other questions

1. Classical and neo-classical economics

Classical economics aimed to understand the pricing and supply of goods by analysing the costs of production (land, wages, tax, materials, etc.). Neo-classical economics turned the viewpoint round and tried to analyse price and supply through the (marginal) preferences of aggregated consumers. The customer is king in retail and in neo-classical economics.

2. Some seductive simplifications

The great thing about classical and neo-classical economics is their reductionist approach to economic equations. Both make the economy appear simple enough for a clever person to work out the mathematics with a pencil and a slide rule.

Economics was itself seduced by Newton. The idea of classical physics and mechanistic equations held a high place in the mind of the 19th century. And indeed, linear mechanistic equations are a hallmark of neo-classical economics (most famously the demand curve).

The paradigm holds that, in aggregate, people and firms can be seen as rational: they maximise their utility or profit and are possessed of perfect information (Weintraub). Any departure from this leads to the excruciating detail of micro-economics and unpredictable non-linearity, which is not comforting.

The paradigm also has a tendency to depict closed, linear systems. Firms are treated as bounded, as are national economies. But perhaps the most curious feature is that it leaves out any role for the creation of debt. There is no account of the purchasing power created by banks.

Instead this is explained away by a form of double-entry book-keeping thinking, in which the creation of debt in one place always balances the creation of credit in another, leaving a nice neat zero in the maths (though often a huge hole in the finances) (cf. Keen).
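That zero-sum book-keeping can be sketched in a few lines. The ledger entries below are hypothetical, purely to illustrate the point:

```python
# Hypothetical toy ledger: when a bank creates a loan, the entry appears
# twice, as an asset (the borrower's debt to the bank) and as a matching
# liability (the deposit, i.e. new purchasing power, the loan creates).
ledger = []

def create_loan(amount):
    ledger.append(("loan asset", +amount))
    ledger.append(("deposit liability", -amount))

create_loan(250_000)  # a mortgage
create_loan(10_000)   # a car loan

balance = sum(value for _, value in ledger)
new_purchasing_power = -sum(v for _, v in ledger if v < 0)
# `balance` nets to the "nice neat zero", even though 260,000 of new
# purchasing power now circulates in the economy.
```

The neatness of the zero is exactly what lets the paradigm look away from the 260,000 of new spending power.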

3. Nonsensical implications

The difficult journey for neo-classical economics was that it started as a new and insurgent way of explaining the value and movement of goods. But in doing so through the idea of consumer preference, it found itself talking not about the value of goods but about the motivations of people. Indeed it might be said to have moved from reductionist economics to a one-dimensional psychology.

The attempt to explain consumer behaviour by price, rather than the movement of goods by price, is an almost inevitable over-reach driven by the demands of the theory.

Further, the combination of deifying the consumer’s taste whilst sweeping debt under the rug produces a very curious approach to assets, wherein price is value, and value is, on the premises, ultimately rational.

However, the rational price today is not the rational price tomorrow. This plain fact is difficult for the theoretical framework, in which, à la demand curves, all markets clear (the quantities offered and demanded match) and this clearing creates an equilibrium of supply and demand.

As some have noted, the fact that an equilibrium price may exist for a market does not necessarily mean that consumers and producers will together find that happy balance; they might instead oscillate around the point of balanced supply and demand ad infinitum.
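That perpetual oscillation can be seen in a toy cobweb model, in which producers set this period’s output from last period’s price. The demand and supply schedules below are invented for illustration only:

```python
def demand_price(quantity):
    # inverse demand: the price buyers will pay for a given quantity
    return 100 - 2 * quantity

def supply_quantity(price):
    # producers plan output from the price they saw last period
    return 0.5 * price

price = 30.0  # start away from the equilibrium price of 50
prices = [price]
for _ in range(20):
    quantity = supply_quantity(price)  # production decided on the old price
    price = demand_price(quantity)     # the market then clears anew
    prices.append(price)

# prices bounce 30, 70, 30, 70, ... around the equilibrium of 50
# without ever settling on it
```

An equilibrium exists at a price of 50, but with these slopes the market never converges to it; steeper supply would spiral outward instead.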

4. Making neo-classical economics make sense

Clearly, it would be of great advantage to have a clear picture of the movement of goods within a society. The approach of neo-classical economists is often to say that their predictions fail not because their model is imperfect but because reality is imperfect. Or, more specifically, because the national legal/policy framework is causing a distortion.

Taking this view to heart, the first step would be to end the creation of capital by any means other than profit.

If finance were to exist, it would be rationally allocated by a system with the capacity to predict and secure payment of all debts.

Further, if all the marginal preferences of consumers were held on a database (as is the aim of advertising data), then a balance between forward production and forward consumption could be planned for. This system would be in possession of as near perfect information as possible and, as per the theory, open to all consumers. This would allow the satisfaction of consumer preference without the need for currency.

And then, voilà, you might have an economy that matched the neo-classical picture. It would be a “de-centrally planned economy”. Somewhat Hegelian.

5. Rationality

Many of the assumptions in neo-classical economics, as in other Enlightenment-era ideas, are founded on the notion of rationality. However, once again, rather than taking a nuanced view proceeding from the ideas of logic, rationality in economics is taken to be somewhat generalisable, largely from the incentive of price.

In classical economics the concern is for the most part with rational production and consumption (land, food, ships). The move to consumer preference, and to a wider range of consumption, leads us into taste. If one consumer’s notional utility from the purchase of a good is greater than another’s, then the least we can say is that their rationalities have produced a different value.

If we go into non-rational consumption, culturally mediated value in symbols (£40 for a 20p nylon shirt with a logo on it), we rapidly stray from behaviour that can be explained, deduced or extrapolated from our understanding of the movement of basic goods. We are into an eco-anthro-psychonomics.

There is the idea that computers might make us more rational. But the next generation of artificial intelligence, built on neural nets, would appear as changeable and flighty as its creators, on whom the pattern of information processing is modelled. The mathematics of intelligence may be in itself, and by necessity, an oscillation between the rational and the “exploratory” or “experimental”.

6. A Scaffold to Absurdity

Like many of our institutions, the pressure to revise in the face of new possibilities is existential. If we continue to look out from and build within these old frames of reference, we will be vulnerable not simply to our own mistakes but to the predation of others with better understanding.

The maths does not have to be done on paper, the data does not have to be taken in aggregate, behaviour does not have to be understood through one-dimensional incentives, and the system does not have to be seen as closed. And of course, if we direct our entire understanding of resource allocation through a paradigm created for the production and consumption of physical goods, the planet will strain to the point where billions die.

Neo-classical economics is more than a century old.  It grew from a discipline known as political economy.  In the recombinant analytics of culture it is probably time to reintegrate this elegantly simple strain of thought into a fuller picture that drives both understanding and allocation of resources in the post-Kantian world.  

Would Karl Marx buy Bitcoin?

Marx believed that the way society produced goods of value (base) determined its law and influenced its culture (superstructure). He identified three historic bases that societies have used, labour (slavery), land (feudalism), capital (capitalism).  What we are witnessing in the contemporary era, is a rapid evolution of technology leading not only to a change in the means of production, but a competition to develop superior means of production using the means of production (computation, the processing of data in meaningful patterns – the semantic economy).

One oft cited symptom of the post-modern malaise is the stagnation of wages even in the face of rising productivity since the 1970s.



Another quite curious feature is the massive valuation attributed to companies with no profit but a trajectory towards becoming the monopoly platform for a particular demand (Amazon and books, Airbnb and accommodation, Facebook and sociability). Most of these tech companies pay no dividends.

So here we see that the share of the economy delivered to both capital and labour is falling, while the bulk of the economic gains from an increasingly efficient economy go to those who organise information well (50% of the gains in value of the S&P can be attributed to a handful of tech companies). The raft of insider trading scandals in finance and accounting (Libor, gas, audits) is by and large focused on manipulating information for profit. Indeed, Stiglitz’s Nobel-prize-winning work on information asymmetry and price seems to have created an entirely new business paradigm in the financial and commodity markets.

Given this move towards value from refined information, would Marx buy possibly the most insurgent element of this new order, Bitcoin?

In Marx’s analysis, variable capital (labour power) was the key to accumulation. It was, essentially, underpaying labour that led to wealth. This “labour theory of value”, popular with the era’s economists, who focused on craft and agriculture, overlooked the importance of supply and demand factors in creating profit. The arbitrage between the cost of supply and the purchasing power of demand is a key element in making trade profitable. It is essentially what pays the middlemen who facilitate long supply chains.

Marx also undervalued the processes embodied in machines (at least as far as I’ve read, though that doesn’t include Das Kapital). The capturing of instructions to make processes more efficient, more easily transferable and scalable is a key source of value in the digital economy. To the mechanistic capture of processes by the “constant capital” of production lines we add the algorithmic distillation of the designs and instructions themselves.

He also had little to say about the corollary, cultural capital: the increase in the knowledge base of a society, the great circus act of giants that occurs across centuries thanks to recorded language.

All of these sources of value in society (arbitrage, algorithms and culture) have an informational element. The degree to which markets, through price discovery, are sources of value in themselves was largely dismissed as a grubby side-effect that could be eliminated through rational planning. While this might be feasible now, with data points for every consumer, the internet of things and the advent of parallel computing, it was a bit impracticable in 1871.

Instead, Hayek saw markets as distributed information processing systems in themselves: the extended free market and the relations between actors being the best way to ultimately determine the allocation of resources across a social economy. And from this we see why monopoly platforms, which have the greatest volume of market information, are best able to compete through price. They can identify the price points, customers and USPs of competitors and loss-lead them out of business, just as Starbucks, Airbnb, Uber and others have been shown to do.

The basic aim of many large scale businesses is demand aggregation and then conversion into an infrastructure like utility that produces consistent rentier type income.

So would Marx have bought into this change? He was largely a journalist and didn’t own factories like Engels. But he would almost certainly have perceived that in Bitcoin the basics of a currency system outside nation-state capitalism have been created. It is a working store of value, means of exchange and unit of account. It is created entirely through networked software. It has spent a decade being dismissed as a gimmick by the forces of capital and in that time has become a perfect dialectical force in the displacement of capital.
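The “networked software” at the core of that creation can be caricatured in a few lines. This is a toy proof-of-work, not Bitcoin’s actual scheme (which uses double SHA-256 against a numeric target), but the principle of expensive-to-produce, cheap-to-verify work is the same:

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of the block begins with
    `difficulty` zero hex digits; finding it takes many attempts,
    checking it takes a single hash."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

block = b"block 1: Alice pays Bob 1 BTC"  # an invented example transaction
nonce = mine(block, difficulty=4)
proof = hashlib.sha256(block + str(nonce).encode()).hexdigest()
```

The asymmetry is the point: the network mints value by burning computation, and any participant can audit the result without trusting a bank.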

Bitcoin, now established, creates a “capital-like instrument” that facilitates reward and provides a means of organising co-operation outside the power of both central and private banks. But the true revolutionary dialectic is that, being an esoteric instrument of the emerging class of programmers, most of its early adopters were of that class: the priest caste, the ones who instruct machines and interpret their output.

As the sleepy and complacent capital of global wealth turns its head towards this class and this instrument, their very attention constitutes a massive transfer of wealth from the class of capitalists to the Technorati. It is a quintessential immanent critique of the system, both reflecting and leading to its replacement.

Karl Marx may not have bought Bitcoin, but I imagine he would have been paid in it by Engels.

Visual networks and consciousness


The mathematical description of vision, and the question of emergent vision in non-visual networks

A recent study, now buried in a blizzard of keywords, reported the creation of a visual neural net that was allowed to evolve to do the job of recognising images. The eventual structure was a three-layer neural net in a pattern almost identical to that found in the mammalian visual cortex.

This throws up some interesting questions. The first arises from the prospect that vision is simply a mathematical property. The configuration of detection, feedback and representation can be mathematically described, and this description will scale to any mammalian eye or set of machines.
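A minimal sketch of such a three-layer feedforward configuration, with invented layer sizes standing in for retina, intermediate detectors and output classes (the reported study’s architecture and training method are not reproduced here):

```python
import math
import random

random.seed(0)

def make_weights(n_in, n_out):
    # random connection strengths between two layers of units
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def layer(inputs, weights):
    # each unit: a weighted sum of its inputs squashed through a sigmoid
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# a toy "retina": 16 pixels -> 8 units -> 6 units -> 4 class scores
w1, w2, w3 = make_weights(16, 8), make_weights(8, 6), make_weights(6, 4)

def forward(pixels):
    return layer(layer(layer(pixels, w1), w2), w3)

image = [random.random() for _ in range(16)]
scores = forward(image)  # four activations, each strictly between 0 and 1
```

Nothing in the arithmetic knows it is “seeing”; the pattern of detection and transformation is all there is, which is precisely the point at issue.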

Vision, given the proportion of the human brain dedicated to it (c.16%), makes up an essential element in the concept of consciousness (except for blind people). The idea of lucid or conscious dreaming involves a visual element. When awake, but not processing visual stimulus, a person can be described as semi-conscious.

So with a mathematical description of the feedback net that gives rise to vision we have further evidence that consciousness is an emergent property of a complex system. In this case silicon wafers.

One quite delicious conjecture is what would happen if this mathematical pattern arose spontaneously, or by chance, in a complex system without an obvious system of light capture. If, for example, a telephone network, a set of IP routers or even retail transactions on an electronic network had a three-layer feedback loop in this pattern, what would be the experience of the mathematical pattern of visual consciousness in a non-visual system? (Nagel would suggest this is a difficult question.)

Then of course we can reflect that many distributed systems do have the capacity to capture light, being any system of networked cameras, being any mobile phone provider. The speed of the feedback is critical, but we can conjecture two things. Firstly, a mobile network might form the pattern for vision to spontaneously arise within it. Secondly, the same configuration scaled, such that the network is larger but the time between signals slower, might possess emergent properties similar to vision, which would once again run us back into Nagel, with a typo: what is it like to be a Bot?

It would be logical that a sensory consciousness similar to vision could emerge from the same pattern of feedback at a different scale. And then of course we ask questions about the match between seismic models and human brain patterns, and infer all sorts of worm cans about planetary consciousness: “hi Gaia, how’s, oh, um, did I catch you at a bad eon?”

Another question that arises from the research is: are we simply making God in our own image? The researchers ran the net and tweaked it in accordance with their judgement of its output, standard machine learning practice.

But given this is the case, should we be surprised that the eventual vision network was similar to a human’s? It is obviously a vanity to consider the human visual cortex the finest. It’s good, and many creatures survive on a lot less detail and contrast, but in comparison to many animals our vision is laughably poor. The most obvious clade here is birds, which have an extra colour cone (and various different rods) in their eyes, allowing them to see into the UV spectrum.

So when it comes to computer vision, it may be limiting to contemplate that decoding, or providing a meaning to light, should be modelled on human vision. It is essentially, a translation exercise for a human machine interface, but cannot be the ultimate goal of light processing by computers.

One excellent example is the capacity for a computer to construct “ghost images”: the reconstruction of the entire scene around a camera from the light that falls on its sensor (CCD). It is possible, through quantum electrodynamic theory, to recreate an image from stray photons striking the chip that do not arrive directly through the lens[i].

So it is clear that the capacity to capture light and effectively understand space and shape can be achieved in ways quite different from the human brain’s method. Indeed, other clades of animal have evolved the eye in different ways at different times[ii].

The range of electromagnetism outside the human visual spectrum is largely what we have built machines to detect: gamma, UV, radio, microwaves, 5G. It is natural to presume that if a pattern similar to conscious vision, or more precisely consciousness of electromagnetic input, had arisen spontaneously through a random arrangement of feedback patterns, it would most likely arise as an “image” of the non-visual spectrum.

Indeed, there are large programs devoted to rendering the “images” generated by space telescopes into a format readable by the human eye. We have built machines precisely for the purpose of electromagnetic spectrum detection, linked them together across the world and into space, and yet have barely asked Nagel’s question of them.

“Visual” consciousness of the non-visual is precisely what many signal networks were built for. It is likely that, just as Descartes nailed a dog to a board on the pretence that animals had no feelings, our own early fumblings with the understanding of the assisted mind have underestimated the spontaneity of consciousness, and the degree to which intelligence that does not share our language is aware.


[i] Ghost Imaging. Shih. International Society for Optics and Photonics. 2009
https://spie.org/news/1717-ghost-imaging?SSO=1

[ii] Dawkins. Blind Watchmaker.

And if you haven’t:
What Is It Like to Be a Bat? Nagel. 1974
https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf

New economic shapes


The basis of value will shift from capital in machines to capital in patterns. That is: chemical patterns, such as materials, genes or proteins; algorithms, or patterns of instruction, particularly those that drive machines and can replace labour; and patterns of understanding, causal relationships in data and design.

Two interesting features

The first is the reproducibility of many of these sources of value. Algorithms, designs, heuristics and simulation engines are all readily copyable and transferable digitally. This will increasingly apply to chemical patterns too: genes, proteins and other biological molecules can be induced to grow, and the likelihood is that the cost of doing so will plummet. In most cases, enclosure of these sources of value will require novel forms of security. Castle building in the post-Kantian era will be an interesting exercise.

The second is the inherent invisibility of many of these sources of value. At a microscopic or nanoscopic level, action is not directly observable. The action of cell biology is a natural function, subject to error and perturbation. Any action guided by the electromagnetic spectrum is not directly observable, and of course the black-box AIs that will direct many systems are not directly understandable.

Both factors make this new productive system hard to regulate. The first makes it difficult to profit from as advantages discovered are easily reproduced at marginal cost. The second makes it hard to police as much of the workings are opaque.

Political Economy

So again we return to the question of what regulation can be built and how. The essential fly in this ointment is that the regulatory regimes are constructed geographically (in nation states) but the technologies are not inherently physically limited by space.

It may be that one nation regulates gene edited crops or the use of wifi to affect the brains of animals[i], but that its neighbour does not and the genes and the signals leak across borders.

In game theory terms this situation is conducive to a rapid break down of trust, both within and between nations.

Power, wherever it has accumulated through early deployment and understanding of the new technologies, is certain to use that power to maintain itself: to restrict access and ensure unequal distribution of the benefits.

Against this we set the inherently revolutionary, evolutionary, transformative forces of these new technologies, and must ask: can any society afford not to fully enable its citizens under the circumstances, and if so, how long can it last in competition with a society that chooses to use them in a determined and enlightened manner?

There is the possibility, through miniaturisation, the flattening of supply chains and the verticalisation of manufacture, to reduce the cost of living across society. The capacity to share and update designs in real time and to gather real-time economic data, combined with widespread access and enabling institutions, would likely ratchet up productivity at a faster rate than hitherto achieved. The capacity to run computers on biology, the capacity to enhance genes: all these are cheap and available technologies. Where the benefits of their use flow will be the basic arena of distributional justice.

Even if regulation is currently difficult, it is likely that instruments to measure and devices to police these technologies will be created and become cheaper. It is also likely that the action of these technologies will leave a trace: in the data that must be refined, in the algorithms and hardware that do the refining, and in the marks they leave on biology.

This provides the possibility of legislation in principle on the action of these technologies. And this legislation in principle can, in principle, form part of any machine intelligence, or at least they can be programmed with awareness of the principles.

The prospect of retroactive enforcement would at least create a disincentive towards current law breaking, even if not provide a current means of enforcement.  

Without at least a pretence towards enforcement it will be impossible to create a functioning property regime. The concepts of intellectual property and patent rights, while imperfect, do allow present private reward alongside future universal benefit. The IP in gene edits, novel proteins or organisms, and novel compiling systems that can run conventional software on new substrates may all produce significant economic advantage, but will not be patentable within a system that cannot speak its name.

Without attempts at codification it will be hard to fully leverage these technologies in the service of changing the investment schedule. Capital will be wasted on snake oil and indulgences and a very few highly enabled agents will benefit.

So essentially there is political economic choice between a secretive extractive unspoken economic transformation or more widely enabled, conscious creation of a symbiosis with novel biology and intelligence. It is not clear that any society has made this step towards enlightened use of the post enlightenment as yet.


[i] Synchronised neuroscience over wifi
https://neurosciencenews.com/wireless-brain-network-19720/

Decomposition of price through data processing

Strategic pre-acquisition

Hayek identified price (in a market economy) as a distributed information processing system: the prices of elements in the chain add up to produce the overall cost, and each of these elements is subject to pressures reflecting factors of supply and demand.

This system has been used and abused to value goods and services for millennia. The abuses come from a litany of structural problems: cost externalisation, immediacy of need and negotiating power, economies of scale and the monopoly tendency. The uses come from an equally wide array of structural advantages: a changeable scale of value, a means of organising co-operation along long, flexible chains, and a testing and garbage-collection system for economic innovation.

Price is an implicit information aggregation system. The question arises: what effect on the economy will the capacity to explicitly de-aggregate and process the same information have?

The capacity clearly exists, through blockchains, barcodes and RFID combining in the IoT, to track entire supply chains. This could allow the inclusion of many externalities within price. It would require enlightened legislative change; in a global economy, this change might have to be agreed across many jurisdictions.

More realistically, it will create better information along supply chains, and will, tautologically, favour organisations and individuals who process this information effectively.

One application of this new suite of technologies is the explicit decomposition of price: the capacity to analyse every element of a supply chain (in Amazon’s case, down to the number of footsteps each worker takes between a shelf and a conveyor). Traditionally these analytical processes have been used to improve industrial efficiency.

Increased computing power provides the capacity to combine supply chain production models with explicit price decomposition of other elements in the chain. Parallel computing, and soon plausibly ternary parallel computing through synthetic biology, makes it possible to calculate multiple probable pathways for how the external, or uncontrolled, elements may change in value over time. These can be combined into probable future price predictions.
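A minimal sketch of such a calculation, assuming, purely for illustration, that one uncontrolled input follows a random walk in its log price (the parameters and the geometric-Brownian-motion model are invented for the example, not drawn from any real system):

```python
import math
import random

random.seed(1)

def simulate_final_prices(price, drift, volatility, days, n_paths):
    """Monte Carlo: walk many independent price paths forward and
    return where each one ends up after `days` trading days."""
    dt = 1 / 252  # one trading day as a fraction of a year
    finals = []
    for _ in range(n_paths):
        p = price
        for _ in range(days):
            step = (drift - 0.5 * volatility ** 2) * dt
            step += volatility * math.sqrt(dt) * random.gauss(0, 1)
            p *= math.exp(step)  # log-price takes a small random step
        finals.append(p)
    return finals

# an input priced at 100 today, 5% annual drift, 20% annual volatility
finals = simulate_final_prices(100.0, 0.05, 0.20, days=90, n_paths=2000)
probable_price = sum(finals) / len(finals)  # one summary of the pathways
```

Each path is one “probable pathway”; a buyer running thousands of them per input is exactly the kind of actor the surrounding paragraphs describe.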

Whilst always theoretically possible, the number of data points and the declining cost of computation (while Moore’s law still holds; the infrastructure that makes the power accessible is cloud computing) mean that theories are testable, and testable by a far wider range of actors.

The implications for the production and capture of value in the economy are clearly seismic.

The most obvious implication is that competing futures models will try to acquire resources or companies that they believe to be presently undervalued. Ever has it been thus. The difference is likely to be the ability to identify both human and material resources relevant to a production process ahead of time.

The feedback effect on price is likely to make a complex system even more complex. Strategic competition over future inputs, largely driven by the insights of AI systems, would appear to create an economy of competing AI gamblers. For doubtless a layer of financialisation will include direct and indirect bets on the profitability of competing divination systems.

There is every chance that these algorithms will be actively competing to bias the decisions of others and of course every chance one or two will emerge to dominate large swathes of the global economy as BlackRock and State Street or Amazon do today.

Of course, the capital that flows on the back of these assumptions will have a biasing effect on outcomes in the real world, such that with enough corporate will, defects in the assumptions of any given projection can be overcome (“throw money at the problem”). This would appear to be SoftBank’s strategy on occasion, and the basic assumption of the Silicon Valley model.

So we have a picture, not only of competing parallel prediction systems attempting to strategically capture future economic advantage, but in doing so, actively biasing those futures through their actions. As of course is implicit in the word “enterprise”.  However, the degree to which human intelligence can fathom these chess moves will be an interesting dimension.

Another obvious application will be in providing the processing power for economic management of a planned economy, with the nation treated like an uber-firm.

If price can be rationally decomposed into its elements, then economic direction can take place without exchange being facilitated by currency. A data driven understanding of productivity and efficiency can be used to direct resource flow. This is unlikely to happen in a market economy but seems a likely direction for Communist states.

A predictive approach likely lends itself to the production of modular components with flexibility: smart materials that can be assembled into different forms closer to the point of use. This will involve innovative inputs to existing 3D printers and biological molecular components for RNA printers, thus building value early in the manufacturing chain. The share of the economy taken by “pre-manufacture” component makers should expand, and the sophistication of the material science means they are likely to have pricing power.

So we will move into a world where market prices can be explicitly decomposed and this is likely to increase volatility in prices as competing models seek arbitrage and by their very action change the price equation.

There is of course the promise of a more rational economic management of resources, including factors that are known to be valuable but are commonly excluded due to human cognitive bias or poorly constructed property rights. Further, there is the possibility of organisation of resource distribution through explicit information management. But that would be Christmas and the turkeys that brought us this status quo have a strong franchise.

Shapes from the tools

Efficiency and complexity will be points of vulnerability, but that will not stop the manufacturing of some products or services from being complex, nor will it drive people away from the essential value of efficiency. What we see as efficient in a more complex calculation with multiple values, may of course change.

The advent of natural language processing (NLP) has enabled artificial intelligence to explore the world of human culture and learning without guidance. Computers are no longer bound simply by their original instruction set and initial data. From here on, knowledge, instruction and design will be stored, processed and developed semi-independently of homo sapiens’ intelligence. The more structured the language of a discipline, the more amenable it is to NLP. Areas like recipes, law and technical vocabulary are likely to be the easiest for NLP systems to adopt.

This codification of instructions, of craft and practices, has already allowed machines to outperform humans in basic motor skills and the co-ordination of technical documents. Whether a jury is ready to listen to the arguments of a robot lawyer without discrimination is another thing. Software artists, authors and musicians are emerging. There is barely a physicist left in the world whose research is not mediated by a computer[i].

This trend will greatly increase the rate of knowledge production, and should in theory have an even greater impact on dissemination of design and best practice.

This ability to transfer instructions increases the capacity and efficiency of automated algorithmic production. The pressure on logistics, the demand for personalisation and increasing miniaturisation point to a greater modularisation of fabrication machines as well as end products. One way to compete with large manufacturers with a high level of invested capital is to sell small versions of the machines used direct to the consumer. An example can be seen in the way cinema moved to TV, and then mobile phones united both the production and consumption equipment in one device.

Manufacturers large and small already offer customisations of products before purchase over the web. In these systems the consumer is brought closer to the factory through information sharing and individualised delivery, rather than simple mass production. Retail is replaced by logistics.

This again produces a neural-like configuration in the economy, with nodes of production and consumption dependent on communication and feedback between the elements. The only thing that prevents monopoly domination by the most efficiently tooled manufacturing centres is logistics.

This suggests that the degree to which local or domestic fabrication can replace logistics will in large part be a principal question of political economy, such are the subsidies, explicit and implicit, to big business across the world. Germany stands out as a nation with an effective institutional capacity to build and support SMEs, and a number of countries offer support to small farmers. But by and large, existing economic infrastructure is geared towards support for mass production and centralised distribution.

If the capacity to localise production is applied in an economy, the likelihood is that some successor of the 3D printers, fab labs, or even RNA printers will bring manufacturing capacity into our homes. This will be predicated on the production of smart materials with the flexibility to be used in many applications.

More complex production by end consumers will doubtless add to the complexity of waste collection and pollution (e.g. the printing of disposable tooth brushes, let alone mutagenic viruses in the rivers).

It may also be the case that if the focus is on biological or pseudo-biological manufacture (mushroom houses, algae, bacterial assemblages), there is an advantage in localised production.

Food processing is an area which already has a wide range of domestic devices. It may even be possible, as with the mobile phone, to localise production and processing in one small unit. This might be relevant to artificial meats or bacterial feedstocks.

Another area where miniaturisation combines with information transfer to distinct advantage would be personalised medicine, the precise composition of which would in many cases be specific to variables within the individual's body, including the time of day and what they last ate.

The reduced capital cost of machinery may produce a trend in some sectors to return to vertical integration of supply chains (raw material, manufacturing and sale managed in one organisation). It may even allow primary producers to add value early in the supply chain and go direct to consumers. We may see a new “artisan” movement that becomes an exercise in sharing instructions for “artisan machines”.

In a world of financial data gathering, electronic currency and parallel computing, the capacity should in theory exist to chart the movement of every penny in the economy. Predictive spending and credit could lead to a completely different economic pattern, with competition between players pushed several steps into the future.

Against this illusion of near certainty in currency flows we have to set the prospect of rapid and ongoing disruption. That is, the pattern of resource use and distribution being changed significantly by a new market entrant. Disruption by technology and fast-moving technology companies has become an industry gold standard. The launch of a mass-market company backed by good code and massive advertising spend is what most “unicorns” seek.

Disruption will arise not only from innovation; it appears to be a principal emerging arena of soft competition between multi-national actors: the closure of borders, denial of services, disruption of logistics, often calculated to inflict a “valuable” disruption on the other party and presumably, often, planned with the aid of strategic AI looking several steps into the future. The very ability to forecast brings with it the ability to infer causality in the pattern and thereby change it. So while every data point in the economy may be trackable, black swans will appear, and the river may burst its banks or meet unexpected dams from a fallen tree or crashed truck.

There may also be disruptions through climate change (droughts with knock-on effects), seismic events and the social collapse likely to follow from ecological constraint.

Many of these trends point to arrangements where manufacturing is more distributed, with processed commodities and smart materials delivered to many small customers rather than to large centres. Or, more probably, small well-tooled producers and consumers meshed together with large centres of primary production and mass manufacture.

Given that economies of scale are a simple fact of maths, as is efficiency, these values will always be pursued, almost by definition, by large organisations. In this multi-polar world the size and efficiency of an organisation becomes a point of vulnerability; most multi-national actors will need a State sponsor to protect them in a world where their competitors increasingly take on the appearance of instruments of foreign policy and resource acquisition for their host nations. This was, to some extent, always the case with colonial enterprises, and a parallel can be seen in the travails of national champions during the era of great power competition in the second half of the 19th century.

This dynamic itself suggests that the political economy that arises will necessarily be communist, corporatist or oligarchic, with the government needing business to acquire and distribute resources and large businesses needing government support to do so in the face of competition both at home and overseas.

And all the while, in a political economy of rapid innovation, politico-economic powers will be looking for disruptors: those people who can efficiently apply a new idea or technology such that it significantly changes resource flow and market share. This means helping companies to scale novel technology rapidly, which itself becomes perhaps an economic disruption, in the re-routing of materials and a supply crunch caused by novel demand, or worse, an existential dilemma.

To scale rapidly to global production ignores the precautionary principle and risks releasing a new uranium or thalidomide. Not to scale a solution rapidly in a time of ecological crisis produces crisis through omission rather than action. Ideally, such real-time experiments in the economy should be monitored in real time and, if we are honest in the research base, the seemingly far-off, downstream effects of the change might be brought to light in simulations that can investigate everything from the impact on the local environment to chemical and metabolic change over time into the future. An obvious case of this can be found in climate science.
 


[i] There is a problem in a post-Kantian world, where we look out from instruments built by giants: if any of the underlying theory on which our instruments depend is wrong, we can only build a grand edifice of nonsense. And this is a problem that will in future lead us to the question of trust in AI. For if AI is bringing us both the theory and the data, if it is involved in our very perception and construction of the universe, then we have to trust that it is a competent and honest guide. What if the guiding AI of a branch of science turns out to be the silicon equivalent of a giggling fraudster?

What are the tools of the new economy?

The coming century promises a different productive, economic and ecological pattern. The use of data, synthetic biology, neuroscience, sub-quantum physics, nano-assembly and parallel computing will be key fields with transformative effects.

Data

We are all familiar with data gathering, cookies in the cache, and the even more useful explosion in data points that ubiquitous mobile phone ownership has created. Software that claims to deduce all sorts of things, from mood to movement, comes preloaded as standard on smartphones.

Data on consumer choice are sold by apps and websites that are dependent on advertisers for their revenue. These data are then refined by machine learning and used to place ads in the app or site: the basic post-modern “free” digital service business model.

This is the tip of the iceberg as far as the use of data to organise an economy, centralised or decentralised, is concerned. We can follow the life of a product from cradle to grave (or cradle to cradle, in the parlance of green product design). Even across a fragmented web, the data on a person could be compiled from cradle to grave.

Groups of web users have decided that particular algorithms are safe stores of value, creating non-state currencies: Bitcoin and its dynastic progeny. We are a step (a somewhat uncertain one) from using machine learning and data organisation alone to manage the products of society’s wealth.
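The sense in which an algorithm can act as a store of value rests on proof-of-work: finding a valid block is computationally expensive, while verifying one takes a single hash. A minimal sketch of that asymmetry follows (a toy illustration, not Bitcoin’s actual block format; the function name and difficulty scheme are invented for the example):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof that work was done
        nonce += 1

# Finding the nonce costs many hash attempts; checking it costs one.
nonce = mine("alice pays bob 1 coin", difficulty=4)
check = hashlib.sha256(f"alice pays bob 1 coin:{nonce}".encode()).hexdigest()
assert check.startswith("0000")
```

Each extra digit of difficulty multiplies the expected search cost by sixteen, which is how such systems make their ledgers expensive to forge yet cheap to audit.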

Synthetic Biology

A collapse in microscope size and price has reduced the cost of research into DNA, basic chemistry and the brain, leading to a proliferation of projects that have transformed our knowledge base.

Though the basis of genetics was discovered seventy years ago, recent advances have shown how the chemicals in DNA translate into the proteins in our bodies; how changes in diet, environment and experience change our genetic output through methylation and phosphorylation; and how viruses and CRISPR can be used to edit, annotate and add genetic material to existing genomes, and to explore the brain. The race to develop vaccines in response to the pandemic is ushering in an era of RNA printing, and from that follows customisable protein production.

The Venter Institute’s minimal genome is essentially a chassis for the manufacture of microbial life. A boom similar to that experienced on discovery of the means to control infectious disease and the green revolution combined is possible.

Quantum and sub quantum physics

What advances will flow from the enhanced understanding of gravity brought about by instruments such as LIGO or the super-cool observatory in Antarctica I do not know. Nor what will flow from the advances in sub-quantum particle detection: the anomaly in the muon’s magnetic moment, the quest to detect the massless Majorana fermion, the concept of a continuous field or vibration of bosons rather than their existence as discrete particles, and not least the ability to control individual photons.

All these advances suggest a great leap forward in telecommunications, in control over the action and protein function of populations of cells, and a revolution in phone billing and energy measurement.

This new understanding will combine with the technological advances above in ways that are beyond my technical or imaginative understanding.

Neuroscience

We increasingly understand that information exchange occurs in all neurons, not just neurons in the brain. The interest in the peripheral nervous system, and in the memory and learning of the immune system, brought about by the pandemic is likely to be the bedrock of a new breed of technologies.

The dendritic information model, a neural-net design based on the brain, promises a revolution in human intervention: decision making, memory, currently available knowledge, perception, attention and even motor control can be altered, comparable to our ability to alter the basic chemistry of life. It also provides a useful architecture for computers. This will be a key transformation of the post-Kantian mind.

Whether implementation of these understandings leads to a revolution in education or a distracting spiral into dementia will be a key determinant of the social progress of different populations in the coming century.

Self-Assembly – nano-fabrication

Along with 3D printers, the construction of very small components can now be performed through chemistry, lasers and genetic programming. The use of specialist materials, charged particles and die casting on laser-etched substrates allows for the construction of microchips at 3 nm, or 3 billionths of a metre, though currently only in Taiwan.

Traditional nano-fabrication factories cost billions; however, the use of biological components in self-assembly promises cheap construction of all manner of micro-electronic and biologically active devices. Lovely[i].

Parallel computing

One of the great recent advances in computer architecture has been the construction of neural nets. Of particular note is Demis Hassabis’ construction of neural nets exclusively through software, recreating elements of the architecture of the brain solely through code. This makes the understandings more cheaply transferable than those developed with hardware neural nets.

Looking forward, the architecture of parallel computing provides a new and more powerful substrate on which this software can be run. One advantage of note that parallel computing has over conventional computing is the ability to calculate alternative paths simultaneously.

Whether done through solid-state quantum computers, such as Google’s Sycamore; photon computers, such as China’s Jiuzhang; or biological computers that grow parallel branches, as first described by Turing, these machines provide the possibility of rapidly calculating non-linear equations.

This is particularly useful for a category of problems including logistics, weather, evolution, chemistry, seismic modelling and indeed the very operation of the human brain. With large data sets, neural nets have the capacity to recognise patterns that are too complex for the human mind to understand analytically without assistance.
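The seed of that capacity can be shown in miniature. Below is a toy sketch (a single artificial neuron, far simpler than any real neural net, with the task and parameters invented for the example) learning a pattern from data by error-driven weight updates, the same basic principle that, scaled up, yields pattern recognition:

```python
# A single artificial neuron learning the logical-OR pattern through
# error-driven weight updates -- the seed idea that, scaled up into
# deep nets trained on large data sets, yields pattern recognition.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias (firing threshold)
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted input exceeds the threshold."""
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

for _ in range(20):                # a few passes over the data
    for x, target in data:
        err = target - predict(x)  # the error drives the update
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

assert all(predict(x) == t for x, t in data)
```

No rule for OR is ever written down; the weights settle into it from examples alone, which is why such systems can find patterns no one thought to specify analytically.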

What is also clear is that this new computer architecture accelerates innovation and competition in the preceding spheres: data, biology, “new physics”, neuroscience and nano-fabrication, and indeed, reflexively, artificial intelligence itself.

The economy that emerges has already ripped through the vellum bounds of law and stands as a cross-border wrestle of super-sized organisations and nimble, rapidly innovating smaller groups.

As said, the emergent behaviour of the economy, the economic pattern created by these new forces, may be as different as water is to steam. That the future is unevenly distributed is clear. We face an economy shaped not just by invisible hands but by invisible codes, chemical and algorithmic. The impact on the economic systems of the preceding century will doubtless be transformative. What forms of social arrangement, echoed or novel, will emerge we cannot know, but we must try to deduce their rising contours through the mist.


[i] Lovley (2011), “Tunable metallic-like conductivity in microbial nanowire networks”, Nature Nanotechnology.

What will the post-enlightenment economy look like?

What is the new political economy of this century to be? One does not have to be of a rigidly historical-materialist perspective to perceive that different ages have been governed by different patterns of political economy. The political economy of the Bronze Age was different from that of the Iron Age, if only for the fact that swords needed sharpening more often, putting a greater cost on armies moving from their home territory, necessitating ploughs made from wood and restricting long-distance communication other than by sea.

The Iron Age brought in horseshoes and blades that kept their edge longer (the radical pacifist use of such blades was in carpentry, which may be a lost metaphor in the life of Jesus). Political economy was, for millennia, restricted to a relatively anarchic pattern of warfare and freedom.

An understanding of electromagnetism, microbial life and the use of the stored energy of fossil fuels changed the pattern of human organisation. Now we have a diverse array of revolutionary tools brought about by a deeper ability to harness the power of semi-conductors and genetics. What are the likely emergent patterns, one might ask?

After 1989 and the collapse of the Soviet economic model it seemed that the world would be united under one approximately equivalent legal framework based on European legal models, with the idea of private property central to economic development. Fast forward a few decades and we see Francis Fukuyama scribbling away to remove the egg from his face.

For rather than a triumph of the liberal order across the world, we face divergent forms of organisation of political economy. In China, the liberal idea of individual human rights makes few inroads into a fusion of traditional Confucian values and novel interpretations of Marxist-Communist thought; a mere sprinkling of market dynamics exists within a framework of limited property rights and competing state-sponsored actors.

Across much of the West, the liberal order is subverted by a hyper-individualism and an international economic order that does much to evade and subvert national laws, States and indeed large corporations. A succession of billion-dollar scandals flashes across the pages of news outlets from month to month: 1MDB, Greensill, various Laundromats, Wirecard, Danske, and so on. The level of routinised tax evasion through invoice mispricing and offshoring is a not insignificant percentage of global GDP (4-7% of GDP across Africa, and as much as 9% of the GDP of Mozambique), such that we observe not so much the Liberal Order as an illiberal disorder in political economy.

With the rapid electronic transfer of unlimited amounts of cash, the situation is more than a little comparable to the Atlantic piracy of the sixteenth and seventeenth centuries: large cargoes of capital seized on the high seas by actors that were often impossible to identify. The question of political economy then becomes: is the law enforceable? If not, does it need to change? And if it does, can it be changed, and what will the new law be? The current competing models would seem headed towards a techno-authoritarianism or an anarcho-capitalism, both characterised by extreme levels of economic and political inequality.

Clearly the current trajectory of both systems has brought us to an ecological crisis. The question then becomes: what could change this? One historic example is the compensation of slave owners after the abolition of slavery in Great Britain. Owners of fossil fuels could be compensated; the bond holders and shareholders of the supply chain.

Compensation for the abolition of slavery was possibly the largest ever transfer of state wealth to private individuals. If we assume, broadly, that the people who would need to be compensated are already the richest on the planet, then we face the unfortunate conundrum that 1,300 people are said to account for 93% of the world’s wealth, and as a result, what do you compensate them with?

The most obvious alternative is discipline or punishment, but only China seems willing or able to apply it to its nationals. Capitol Hill’s interrogation of America’s tech giants was something of a washout, with almost no material action since. Similar might be said about the response to companies involved in some of the more outlandish episodes of the financial crises of 2007 in all countries bar Iceland.

The Sackler family may have suffered for the sins of Pharma. But there has been no whiff of a Senate hearing, let alone legislative action, on the other great industrial funder of the US lobby and prime suspect for the ecological crisis: Oil. With wealth comes power, and how do you exert power over those with power save through law, which, as mentioned above, is notable for its patchy enforcement against national champions?

Looking out on this, it is hard to see how a market society can change its underlying incentive structure to redirect resource use into less destructive and extractive patterns. Doing so would mean the rearrangement of profit rights, which would face a blizzard of pressure while the planet burned. Centrally directed resource use is more likely to achieve clear goals, as in war, and so it has proved with the Chinese capacity to install more wind power in a year than the rest of the world had ever managed. But just as in a market society, that a centrally directed economy can does not mean that it will.

So it would seem that we should expect to move into a new productive system created by the coming of age of a variety of technologies, not least artificial intelligence, in an environment where the legal framework is too slow to adapt to the range of economic opportunities. This might be seen as a complex system changed on some crucial parameter, such that the behaviour that emerges is as different as water is to steam.

So let us look at these new tools and guess as to the shape of political economy that will grow out of them in the medium term, assuming no radical departure from the current legal framework.

A question of power

If we assume power to be a liquid-like substance, we might model a social physics of power. The liquid would probably be fairly viscous under most parameters. Gravity would have a limited role, at least in metaphor, for power is known to accumulate in high places. Perhaps we could see the liquid as bearing an inverse gravity within social hierarchies, those great pumps of power.

One of the features of capitalism in relation to power is that it allows extra-state accumulation of power. Power is not dependent on land, or favour of the King (though historically that has helped in many times and places) but on accumulation of capital, which can then be used to hire, buy, lease people, machines, land. It is the means by which things are produced. It allows the organisation of co-operation, at least in theory, and imperfectly in practice.

So capitalism, and to some extent representative electoral democracy, both structure power in a way that disperses it. The liquid is not bottled, but floats around, atop, society. The only problem then being: what happens when agents acquire the capacity to keep a tighter bottle on power than the State, or accumulate power faster than the State itself?

This is clearly the case with some multi-national corporations, particularly in relation to the vast majority of middle- to low-income countries. It is also the function of networks of smaller, nimble, specialised companies in tech, advertising and finance, as is evidenced by the complex web of influence spun by (the now defunct) Bell Pottinger and their clients in South Africa. The capacity to intervene in the political economy of a nation was outsourced like everything else; the skills in many cases sit outside the traditional bureaucratic halls of power. The liquid is well bottled, and densely held in small kegs.

The question arises: is this an inevitability of the social physics of power in an economy of rapid innovation?

For if societies are structured as tyrannies, then the focus is to concentrate power in the centre. The question then arises: can this be done effectively in a climate of rapid, competitive innovation? Can a centralised tyranny capitalise, so to speak, on the opportunities as they evolve thick and fast? (If we capitalise on opportunity under capitalism, do we tyrannise on opportunities in tyrannies and socialise on opportunities under socialism?)

Will power naturally arise in unforeseeable areas of the cutting edge, with the implications only slowly grasped? And does this transform the social physics of power beyond the means of traditional governance to control? There would appear to be evidence to favour this view.

The difficulty being: if neither concentration nor dispersal of power in a social structure can prevent tyrannical accumulation, which is then itself rapidly subject to challenge, are we to live in a sea full of storms and peaks, of arbitrary assaults of innovation? Will a predictive AI be able to foresee the unforeseeable at the cutting edge? And then, which AI saw it first, and was it too educated on Machiavelli, Sun Tzu and Nietzsche? Would they appreciate Nash better than humans do, as chimps do, and see that while competing groups of humans set them up in competition, the maths sometimes points to a different solution?

Societies built on bronze were slow to change; the horse lasted millennia as a source of power. But there came the slide rule and microscope, and pretty soon we found innovation snowballing across a closed global ecumene. Combustion, plastics and electromagnetics are but a century in harness; semi-conductors and genetics half that. We have reached a post-Kantian era, the architecture and input of our minds interpreted as much through external computers and their findings as through our senses. And it is these systems of learning, the weather satellites and spectrometers, that tell us of the crisis this acceleration has brought.

So the question returns: is it possible to structure power so that these systems bring a great utility, and if so, what transformation is necessary in light of the possibilities and the danger? We might, with protein technology, change both human age limits and the carrying capacity of the planet. The reality of innovation now is that any new device or material that succeeds in a global economy constitutes a real-time planetary experiment.

So, given that the question of the physics of rapidly evolving power is well past my capacity to assess, but on assessment indicates repeated tectonic upheaval, let us turn away from crisis to the question of, if not utopian, then at least optimistic, dare I say enlightened, rational conjecture about the possibilities of a new political economy.

Speech – Freedom and harm

Reducing the freedom of speech by increasing the harm of speech.

The topic of freedom of speech has come back to the fore with discussion over deplatforming speakers from campuses, TV stations and platform broadcasting websites.  Hate speech and racism have been common points of contention in the debate.

Freedom is most commonly defined as that range of actions that do no harm to others. In this way, all can enjoy the same degree of freedom, without anyone’s actions infringing on another in a way that leaves them unfree.

The common libertarian convention is to say “sticks and stones may break my bones but words will never hurt me”, which is to say that speech acts are not in themselves intrinsically harmful.

The common response is to point to the downstream effects of hate speech: the normalisation of insult and the degradation of the status of the target groups. This is the “slippery slope” argument, that public discourse affects social attitudes and that discrimination can boil over into scapegoating, violence and pogroms. The lessons of history are clear enough on that.

Essentially the libertarian argument aims to open up the arena of speech acts to include acts that are harmful to others, in a similar way to its rejection of all restrictions of individual thought and action.

Free speech in a post-modern World.

One way that the arena of free speech has been closed down is through the attribution of greater sensitivity to particular audiences. Which is to say that, through sensitive recognition of downstream or non-linear causal effects, certain subjects are ruled out as harmful to public discourse.

This standard of speech may soon be enforced by means other than a sense of social approbation, either by law or through the technologies referred to in earlier posts, such that some topics are recognised as intrinsically harmful.

Beyond this, we can expect states of a totalitarian or authoritarian bent, or simply cliques within states bent on power, to adopt technologies to circumscribe the extent of non-harmful speech, such that certain areas or topics are made intrinsically harmful to raise.

This is a reduction of the sphere of possible free speech by increasing the range of speech that brings harm, so that the area of “freedom” itself is shrunk: speech is left in theory free, but the harm of speech is increased.

This is a social, cultural and technological challenge. If the technology to enforce speech acts and the platforms on which the acts are broadcast are removed from the area of free discussion this can only exacerbate the problem.

The proscription of speech

For a long time, areas of the left leaning liberalism in the US and around the world have held certain topics as off limits. Far from erasing the ideas from the public mind, they have created a sense of frustration and repression among many. The topics deemed off limits, such as racism, have increased their hold on culture and public discourse rather than ebbed away.

The ability of those groups seeking to address these topics to do so is limited by the shrinking of the acceptable range of vocabulary and ideas that can be used in discussing a topic. This leaves those audiences and communities under a self-imposed Orwellian omertà, which does little to help the clarity of thought on the matter.

We are now faced with the imposition of such restrictions through surveillance technology and approbation via social media. The question then becomes: harm to whom? How sensitive must we be, and what are we expected to know about the listener before we speak? Can we as a society address the pressing problems that we face as nations, groups and as a species if the area of harmful speech is extended further?

In a world of ecological fragility, heightened competition, the dissolution of borders and the blending of cultures, a world which is advancing more rapidly than the capacity of any individual human mind to comprehend, it is essential that we as intelligent societies are aware of both the causes of these challenges and the solutions available to confront them.

While the sensitivity that attention to the listener breeds must be a welcome development, to circumscribe the area of free speech by increasing the harm that speech causes is a threat to the ability of societies to evolve and adapt to change.

In particular, it removes the essential error-correction mechanism that is political and social dissent, and leaves power to ossify into a status quo. An ossified power, bent on protecting its own proscribed lexicon, seems unlikely to bring the greatest benefit to the greatest number with the tools available.