Anthropic Capitalism And The New Gimmick Economy

Consider a thought experiment: if market capitalism were the brief product of happy coincidences, confined in space and time to the developed world of the 19th and 20th centuries, and those coincidences no longer held under 21st-century technology, what would our world look like if there were no system to take its place? I have been reluctantly forced to the conclusion that if technology had killed capitalism, economic news would be indistinguishable from today’s feed.

Economic theory, like the physics on which it is based, is in essence an extended exercise in perturbation theory. Solvable, simplified, frictionless markets are populated by rational agents, which are then subjected to perturbations in an effort to recover economic realism. Thus, while it is false that economists believe idealized models to be exactly accurate, as outsiders contend, it is fair to say that they implicitly assume deviations from the ideal are manageably small. Let us list a few such heuristics that may until recently have been approximately accurate, but which are not enforced by any known law:

  • Wages set to the marginal product of labor are roughly sufficient for consumption at a societally acceptable level.
  • Price is nearly equal to value except in rare edge cases of market failure.
  • Prices and outputs fluctuate coherently, so that it is meaningful to speak of scalar rates of inflation and growth (rather than of spatially varying field concepts like temperature or humidity).
  • Growth can be both high and stable with minimal interference by central banks.

The anthropic viewpoint on such heuristics, more common in physics than economics, would lead us to ask “Is society now focused on market capitalism because it is a fundamental theory, or because we have just lived through the era in which it was possible due to remarkable coincidences?”

To begin to see the problem, recall that in previous eras innovations created high-value occupations by automating or obviating those of lower value. This led to a heuristic that those who fear innovation do so because of a failure to appreciate newer opportunities. Software, however, is different in this regard, and the basic issue is familiar to any programmer who has used a debugger. Computer programs, like life itself, can be decomposed into two types of components:

  1. Loops which repeat with small variations.
  2. Rube Goldberg-like processes which happen once.

If you randomly pause a computer program, you will almost certainly land in the former, because the repetitive elements are what give software its power, dominating the running time of almost all programs. Unfortunately, our skilled labor and professions currently look more like the former than the latter, which puts our educational system in the crosshairs of what software does brilliantly.
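The debugger intuition above can be made concrete with a small simulation (the step counts for the one-off setup phase and the loop are invented for illustration, not measured from any real program): even when the Rube Goldberg-like setup is substantial, random pauses overwhelmingly land inside the loop.

```python
import random

# A toy "program": a one-off Rube Goldberg setup phase followed by a
# repetitive inner loop. All step counts are illustrative assumptions.
SETUP_STEPS = 100          # happens exactly once
LOOP_BODY_STEPS = 10       # repeated many times
ITERATIONS = 100_000

total_steps = SETUP_STEPS + LOOP_BODY_STEPS * ITERATIONS

def paused_in_loop(t: int) -> bool:
    """True if pausing the debugger at step t lands inside the loop."""
    return t >= SETUP_STEPS

# Randomly "pause the debugger" many times and see where we land.
samples = 10_000
hits = sum(paused_in_loop(random.randrange(total_steps)) for _ in range(samples))
print(f"fraction of pauses landing in the loop: {hits / samples:.4f}")
```

With these numbers the loop accounts for over 99.99% of the running time, so essentially every pause lands in it, which is the sense in which repetition dominates.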

In short, what today’s flexible software threatens is to “free” us from the drudgery of all repetitive tasks, rather than only those of lowest value, pushing us away from expertise (A), which we know how to impart, toward ingenious Rube Goldberg-like opportunities (B), unsupported by any proven educational model. This shift in emphasis from jobs to opportunities is great news for a tiny number of today’s creatives, but deeply troubling for the majority, who depend on stable and cyclical work to feed their families. The opportunities of the future should be many and lavishly rewarded, but it is unlikely that they will ever return in the form of stable jobs.

A next problem is that software replaces physical objects by small computer files. Such files have the twin attributes of what economists call public goods:

  1. The good must be inexhaustible (my use doesn’t preclude your use or reuse).
  2. The good must be non-excludable (the existence of the good means that everyone can benefit from it even if they do not pay for it).

Even die-hard proponents of market capitalism will concede that this public sector represents “market failure,” where price and value become disconnected. Why should one elect to pay for an army when one will benefit equally by free riding on the payments of others? Thus, in a traditional market economy, payment must be secured by threat of force in the form of compulsory taxes.

So long as public goods make up a minority of a market economy, taxes on non-public goods can be used to pay for the exceptions where price and value diverge. But in the modern era, things made of atoms (e.g. vinyl albums) are being replaced by things made of bits (e.g. MP3 files). While 3D printing is still immature, it vividly showcases how the plans for an object will allow us to disintermediate its manufacturer. Hence, the previous edge case of market failure should be expected to claim an increasingly dominant share of the pie.

Assuming that a suite of such anthropic arguments can be made rigorous, what will this mean? In the first place, we should expect that because there is as yet no known alternative to market capitalism, central banks and government agencies publishing official statistics will be under increased pressure to keep up the illusion that market capitalism is recovering by manipulating whatever dials can be turned by law or fiat, giving birth to an interim “gimmick economy”.

If you look at your news feed, you will notice that the economic news already no longer makes much sense in traditional terms. We have strong growth without wage increases. Using Orwellian terms like “Quantitative Easing” or “Troubled Asset Relief,” central banks print money and transfer wealth to avoid the market’s verdict. Advertising and privacy transfer (rather than user fees) have become the business model of last resort for the Internet corporate giants. Highly trained doctors, squeezed between expert systems and no-frills providers, are moving from secure professionals toward service-sector workers.

Capitalism and Communism, which briefly resembled victor and vanquished, increasingly look more like Thelma and Louise: a tragic couple sent over the edge by forces beyond their control. What comes next is anyone’s guess, and the world hangs in the balance.

This essay was originally published in 2016 at


M-theory / String Theory is the Only Game in Town

If one views science as an economist would, it stands to reason that the first scientific theory to be retired should be the one that offers the greatest opportunity for arbitrage in the marketplace of ideas. Thus it is not sufficient to look for ideas which are merely wrong; we should instead look for troubled scientific ideas that block progress by inspiring zeal, devotion, and what biologists politely term ‘interference competition’, all out of proportion to their history of achievement. Here it is hard to find a better candidate for an intellectual bubble than the one that has formed around the quest for a consistent theory of everything physical, reinterpreted as if it were synonymous with ‘quantum gravity’. If nature were trying to send a polite message that there is other preliminary work to be done before we quantize gravity, it is hard to see how she could send a clearer one than dashing the Nobel dreams of two successive generations of Bohr’s brilliant descendants.

To recall, modern physics rests on a stool with three classical geometric legs, first fashioned individually by Einstein, Maxwell, and Dirac. The last two of those legs can together be retrofitted into a quantum theory of force and matter known as the ‘standard model’, while the first stubbornly resists any such attempt at an upgrade, rendering the semi-quantum stool unstable and useless. It is from this that the children of Bohr have derived the need to convert the children of Einstein to the quantum religion at all costs, so that the stool can balance.

But, to be fair to those who insist that Einstein must now be made to bow to Bohr, the most strident of those enthusiasts have offered a fair challenge. Quantum exceptionalists claim, despite an unparalleled history of non-success, that string theory (now rebranded as M-theory, for matrix, magic, or membrane) remains literally ‘the only game in town’ because fundamental physics has gotten so hard that no one can think of a credible alternative unification program. If we are to dispel this as a canard, we must make a good-faith effort to answer the challenge by providing interesting alternatives, lest we be left with nothing at all.

My reason for believing that there is a better route to the truth is that we have, out of what seems to be misplaced love for our beloved Einstein, been too reverential to the exact form of general relativity. For example, if before retrofitting we look closely at the curvature and geometry of the legs, we can see something striking: they are subtly incompatible at a classical geometric level, before any notion of a quantum is introduced. Einstein’s leg seems the sparest and sturdiest, as it clearly shows the attention to function found in the school of ‘intrinsic geometry’ founded by the German Bernhard Riemann. The Maxwell and Dirac legs are somewhat more festive and ornamented, as they explore the freedom of form which is the raison d’être of a more whimsical school of ‘auxiliary geometry’ pioneered by the Alsatian Charles Ehresmann. This leads one naturally to a very different question: what if the quantum incompatibility of the existing theories is really a red herring with respect to unification, and the real sticking point is a geometric conflict between the mathematicians Ehresmann and Riemann rather than an incompatibility between the physicists Einstein and Bohr? Even worse, it could be that none of the foundations is ready to be quantized. What if all three theories are subtly incomplete at a geometric level, and the quantum will follow once, and only once, all three are retired and replaced with a unified geometry?

If such an answer exists, it cannot be expected to be a generic geometric theory as all three of the existing theories are each, in some sense, the simplest possible in their respective domains. Such a unified approach might instead involve a new kind of mathematical toolkit combining elements of the two major geometric schools, which would only be relevant to physics if the observed world can be shown to be of a very particular subtype. Happily, with the discoveries of neutrino mass, non-trivial dark energy, and dark matter, the world we see looks increasingly to be of the special class that could accommodate such a hybrid theory.

One could go on in this way, but it is not the only interesting line of thinking. While, ultimately, there may be a single unified theory to summit, there are few such intellectual peaks that can only be climbed from one face. We thus need to return physics to its natural state of individualism, so that independent researchers need not fear large research communities which, in the quest for mindshare and resources, would crowd out isolated rivals pursuing genuinely interesting, inchoate ideas that head in new directions. Unfortunately, it is difficult to responsibly encourage theorists without independent wealth to develop truly speculative theories in a community which has come to apply artificially strict standards to new programs and voices while letting M-theory stand, year after year, for mulligan and mañana.

Established string theorists may, with a twinkle in the eye, shout ‘predictions!’, ‘falsifiability!’, or ‘peer review!’ at younger competitors in jest. Yet potentially rival ‘infant industry’ research programs, as the saying goes, die not in jest but in earnest. Given the history of scientific exceptionalism surrounding quantum gravity research, it is neither desirable nor necessary to retire M-theory explicitly, as it contains many fascinating ideas. Instead, one need only insist that the training wheels once customarily circulated to new entrants to reinvigorate the community be transferred to emerging candidates from those who have now monopolized them for decades at a time. We can then wait, at long last, to see whether ‘the only game in town’, when denied the luxury of special pleading by senior boosters, has the support from nature to stay upright.

This essay was originally published in 2014 at



Over the past two decades I have been involved with the war on excellence.

I know that those few of us actively involved in the struggle are deeply worried about the epidemic of excellence precisely because excellence compels its hosts to facilitate its spread by altering their perception of its costs and benefits. Most educated people have come to revere the spending of the fabled ‘10,000 hours’ in training to become respected jacks of one trade. Large numbers of Americans push their inquisitive children away from creative play so that they can excel in their studies in hopes they will become excellent candidates for admission to a center of excellence, to join the pursuit of excellence upon graduation.

The problem with all this is that we cannot excel our way out of modern problems. Within the same century, we have unlocked the twin nuclei of both cell and atom and created the conditions for synthetic biological and even digital life with computer programs that can spawn with both descent and variation on which selection can now act. We are in genuinely novel territory which we have little reason to think we can control; only the excellent would compare these recent achievements to harmless variations on the invention of the compass or steam engine. So surviving our newfound god-like powers will require modes that lie well outside expertise, excellence, and mastery.

Going back to Sewall Wright’s theory of adaptive landscapes of fitness, we see four modes of human achievement paired with what might be considered their more familiar accompanying archetypes:

A)   Climbing—Expertise: Moving up the path of steepest ascent towards excellence for admission into a community that holds and defends a local maximum of fitness.

B)   Crossing—Genius: Crossing the ‘Adaptive Valley’ to an unknown, unoccupied, and even higher maximum of fitness.

C)   Moving—Heroism: Moving ‘mountains of fitness’ for one’s group.

D)   Shaking—Rebellion: Leveling peaks and filling valleys for the purpose of changing the landscape to be more even.

The essence of genius as a modality is that it seems to reverse the logic of excellence.

The reason for this is that sometimes we must, at least initially, move away from apparent success and headlong into seeming failure to achieve outcomes few understand are even possible. This is the essence of the so-called ‘Adaptive Valley,’ which separates local hills from true summits of higher fitness. Genius, at a technical level, is the modality combining the farsightedness needed to deduce the existence of a higher peak with the character and ability to survive the punishing journey to higher ground. Needless to say, the spectacle of an individual moving against his or her expert community away from carrots and towards sticks is generally viewed as a cause for alarm independently of whether that individual is a malfunctioning fool or a genius about to invalidate community groupthink.
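The contrast between climbing and crossing can be illustrated on a toy one-dimensional fitness landscape (the landscape function, starting points, and step size below are all invented for illustration): greedy hill climbing, the modality of excellence, halts at the nearest local peak, while the higher summit is only reachable from the far side of the adaptive valley.

```python
import math

def fitness(x: float) -> float:
    # Two peaks: a modest local hill near x = 1 and a higher summit near x = 4,
    # separated by an adaptive valley. Purely illustrative.
    return math.exp(-(x - 1) ** 2) + 2.0 * math.exp(-(x - 4) ** 2)

def hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    """Greedy 'excellence': only ever move uphill, never descend."""
    for _ in range(iters):
        up, down = x + step, x - step
        best = max((fitness(up), up), (fitness(down), down), (fitness(x), x))
        if best[1] == x:       # no uphill neighbor: stuck at a local maximum
            break
        x = best[1]
    return x

local_peak = hill_climb(0.0)   # starts on the near slope, stalls near x = 1
far_peak = hill_climb(3.0)     # starts past the valley floor, reaches x = 4
print(local_peak, fitness(local_peak))
print(far_peak, fitness(far_peak))
```

The climber starting at 0 can never reach the higher peak, because every route to it initially decreases fitness; only a process willing to traverse the valley, which is what the essay calls genius, can connect the two regimes.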

The heroes and rebels don’t even accept the landscape as immovable, but see dunes of fitness to be shifted by a sculpting or leveling of the landscape with an eye towards altering the fitness of chosen populations.

Of course, none of these modes is intrinsically good or bad. Successful individuals generally maintain a portfolio of such modalities, dominated by a leading chosen modality. But the first mode, excellence-driven expertise, has transformed with institutional support into something unexpected: a eusocial, networked super-competitor crowding out genius and heroism within the research enterprise. A single obviously stupid idea like ‘self-regulating financial markets’ now spreads frictionlessly among fungible experts inhabiting the now-interoperable centers of excellence within newspapers, government, academe, think-tanks, broadcasting, and professional associations. Before long, the highest levels of government are spouting nonsense about ‘the great moderation’ on the eve of financial disaster.

I have searched almost the entire landscape of research and been shocked to find excellence achieving nearly universal penetration, with the possible exceptions of Silicon Valley and hedge funds. Excellence, as the cult of history’s second string, brooks no argument. As a pathogen, it spreads quickly, like a virus preferring to lie in wait on Ivy League sheepskin or other vectors of infection.

In the past, many scientists lived on or even over the edge of respectability, with reputations as skirt-chasing, hard-drinking, bigoted, misogynistic, childish, slutty, lazy, politically treacherous, incompetent, murderous, meddlesome, monstrous, and mentally unstable individuals such as von Neumann, Gamow, Shockley, Watson, Einstein, Curie, Smale, Oppenheimer, Crick, Ehrenfest, Lang, Teller, and Grothendieck (respectively), who fueled such epithets with behaviors indicating that they cared little for what even other scientists thought of their choices.

But such disregard, bordering on deviance and delinquency, was often outweighed by feats of genius and heroism. We have spent the last decades inhibiting such socially marginal individuals or chasing them out of our research enterprise and into startups and hedge funds. As a result, our universities, increasingly populated by the over-vetted specialist, have become the dreaded centers of excellence that infantilize and uniformize the promising minds of greatest agency.

If there is hope to be found in this sorry state of affairs, it is in the rise of an archipelago of alternative institutions alongside the assembly line of expertise. This island chain of mostly temporary gatherings has begun to tap into the need for heroism and genius. The major points of the Archipelago are heterogeneous compared to their Ivy-covered counterparts and include Burning Man, Foo Camp, TED, Breakout Labs, Edge, Scifoo, Y Combinator, the Thiel Fellowship program, INET, FQXI, and Summit Series, to name a few, as well as some which are even more secretive.

In the wake of the Challenger disaster, Richard Feynman was mistakenly asked to become part of the Rogers Commission investigating the accident. In a moment of candor, Chairman Rogers turned to Neil Armstrong in a men’s room and said, “Feynman is becoming a real pain.” Such is ever the verdict pronounced by steady hands over great spirits. But the scariest part of this anecdote is not the story itself but the fact that, in the modern era, we are so dependent on old Feynman stories, having no living heroes with which to replace him: the ultimate tragic triumph of runaway excellence.

This essay was originally published in 2013 at


Einstein’s Revenge: The New Geometric Quantum

The modern theory of the quantum has only recently been understood to be even more exquisitely geometric than Einstein’s general relativity. How this realization unfolded over the last 40 years is a fascinating story that has, to the best of my knowledge, never been fully told, as it is not particularly popular with some of the very people responsible for this stunning achievement.

To set the stage, recall that fundamental physics can be divided into two sectors with separate but maddeningly incompatible advantages. The gravitational force has, since Einstein’s theory of general relativity, been admired for its four-dimensional geometric elegance. The quantum, on the other hand, encompasses the remaining phenomena and is lauded instead for its unparalleled precision and infinite-dimensional analytic depth.

The story of the geometric quantum begins at some point around 1973–1974, when our consensus picture of fundamental particle theory stopped advancing. That consensus picture, known as the ‘Standard Model’, seemed initially like little more than a temporary resting spot on the relentless path of progress in fundamental physics, and theorists of the era wasted little time proposing new theories in the expectation that they would quickly be confirmed by experimentalists looking for novel phenomena. But that expected entry into the promised land of new physics turned into a 40-year period of half-mad tribal wandering in an arid desert, all but devoid of new phenomena.

Yet just as particle theory was failing to advance in the mid-1970s, something amazing was quietly happening over lunch at the State University of New York at Stony Brook. There, Nobel physics laureate C. N. Yang and geometer (and soon-to-be billionaire) Jim Simons had started an informal seminar to understand what, if anything, modern geometry had to do with quantum field theory. The shocking discovery that emerged from these talks was that geometers and quantum theorists had each independently gotten hold of different collections of insights into a common structure. A Rosetta stone of sorts, called the Wu–Yang dictionary, was quickly assembled by the physicists, and Isadore Singer of MIT took these results from Stony Brook to his collaborator Michael Atiyah in Oxford, where their research with Nigel Hitchin began a renaissance in physics-inspired geometry that continues to this day.
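The flavor of the Wu–Yang dictionary can be conveyed by a few of its standard entries (a schematic selection, not the full table; signs and coupling-constant conventions vary by author):

```latex
% Schematic Wu-Yang dictionary entries (physics <-> geometry):
%   gauge potential  A_\mu              <->  connection on a principal bundle
%   field strength   F_{\mu\nu}         <->  curvature of the connection
%   gauge transformation                <->  change of local trivialization
%   phase factor (Wilson loop)          <->  holonomy / parallel transport
\[
  F_{\mu\nu} \;=\; \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]
  \qquad\Longleftrightarrow\qquad
  F \;=\; dA + A \wedge A
\]
```

The left-hand expression is how physicists write the non-abelian field strength; the right-hand one is how geometers write the curvature of a connection, and the recognition that these are the same object is the heart of the dictionary.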

While the Stony Brook history may be less discussed by some of today’s younger mathematicians and physicists, it is not a point of contention among the various members of the community. The more controversial part of this story is that the hoped-for golden era of theoretical physics did not emerge in the aftermath to produce a new consensus theory of elementary particles. Instead, the interaction highlighted the strange idea that, just possibly, quantum theory was actually a natural and elegant, self-assembling body of pure geometry that had fallen into an abysmal state of pedagogy, putting it beyond mathematical recognition. By this reasoning, the mathematical basket case of quantum field theory was able to cling to life and survive numerous near-death experiences in its confrontations with mathematical rigor only because it was underpinned by a natural infinite-dimensional geometry, which is to this day still only partially understood.

In short, most physicists were trying and failing to quantize Einstein’s geometric theory of gravity when they were first meant to go in the opposite and less glamorous direction: geometrizing the quantum instead. Unfortunately for physics, mathematicians had somewhat dropped the ball by not sufficiently developing the geometry of infinite-dimensional systems (such as the Standard Model), which would have been analogous to the four-dimensional Riemannian geometry appropriated from mathematics by Einstein.

This reversal could well be thought of as Einstein’s revenge upon the excesses of quantum triumphalism, served ice cold decades after his death: the more researchers dreamed of becoming the Nobel-winning physicists to quantize gravity, the more they were rewarded only as mathematicians for what some saw as the relatively remedial task of geometrizing the quantum. The more they claimed that the ‘power and glory’ of string theory (a failed piece of 1970s sub-atomic physics which has mysteriously lingered into the 21st century) was the ‘only game in town’, the more it appeared that the string-theory-based unification claims were themselves, in the absence of testable predictions, sinking with a glug to the bottom of the sea.

What we learned from this episode was profound. Increasingly, the structure of quantum field theory appears to be a purely mathematical input-output machine, of which our physical world is but one of many natural inputs that the machine is able to unpack from initial data. In much the way that a simple one-celled human embryo self-assembles into a trillion-celled infant of inconceivable elegance, the humble act of putting a function (called an ‘action’ by physicists) on a space of geometric waves appears to trigger a self-assembling mathematical Rube Goldberg process which recovers the seemingly intricate features of the formidable quantum as it inexorably unfolds. It also appears that the more geometric the input given to the machine, the more the unpacking process conspires to steer clear of the pathologies which famously afflict less grounded quantum theories. It is even conceivable that sufficiently natural geometric input could ultimately reveal the recent emphasis on ‘quantizing gravity’ as an extravagant mathematical misadventure distracting from Einstein’s dream of a unified physical field. Like genius itself, with the right natural physical input, the new geometric quantum now appears to many mathematicians and physicists to be the proverbial fire that lights itself.

Yet, if the physicists of this era failed to advance the Standard Model, it was only in their own terms that they went down to defeat. Just as in an earlier era physicists retooled to become the first generation of molecular biologists, their viewpoints came to dominate much of modern geometry over the last four decades, scoring numerous mathematical successes that will stand the test of time. Likewise, their quest to quantize gravity may well have backfired, but only in the most romantic and elegant way possible: by instead geometrizing the venerable quantum as a positive externality.

But the most important lesson is that, at a minimum, Einstein’s minor dream of a world of pure geometry has largely been realized as the result of a large group effort. All known physical phenomena can now be recognized as fashioned from the pure, if still heterogeneous, marble of geometry through the efforts of a new pantheon of giants. Their achievements, while still incomplete, suggest in advance of unification that the source code of the universe is overwhelmingly likely to be a purely geometric operating system written in a uniform programming language. While that leaves Einstein’s greater quest for a unifying physics unfinished, and the marble something of a disappointing patchwork of motley colors, it indicates that the leaders during the years of Standard Model stasis have put this period to good use for the benefit of those who hope to follow.

This essay was originally published in 2012 at



The sophisticated “scientific concept” with the greatest potential to enhance human understanding may be argued to come not from the halls of academe, but rather from the unlikely research environment of professional wrestling.

Evolutionary biologists Richard Alexander and Robert Trivers have recently emphasized that it is deception rather than information that often plays the decisive role in systems of selective pressures. Yet most of our thinking continues to treat deception as something of a perturbation on the exchange of pure information, leaving us unprepared to contemplate a world in which fakery may reliably crowd out the genuine. In particular, humanity’s future selective pressures appear likely to remain tied to economic theory which currently uses as its central construct a market model based on assumptions of perfect information.

If we are to take selection more seriously within humans, we may fairly ask what rigorous system would be capable of tying together an altered reality of layered falsehoods in which absolutely nothing can be assumed to be as it appears. Such a system, in continuous development for more than a century, is known to exist and now supports an intricate multi-billion dollar business empire of pure hokum. It is known to wrestling’s insiders as “Kayfabe”.

Because professional wrestling is a simulated sport, all competitors who face each other in the ring are actually close collaborators who must form a closed system (called “a promotion”) sealed against outsiders. With external competitors generally excluded, antagonists are chosen from within the promotion and their ritualized battles are largely negotiated, choreographed, and rehearsed at a significantly decreased risk of injury or death. With outcomes predetermined under Kayfabe, betrayal in wrestling comes not from engaging in unsportsmanlike conduct, but by the surprise appearance of actual sporting behavior. Such unwelcome sportsmanship which “breaks Kayfabe” is called “shooting” to distinguish it from the expected scripted deception called “working”.

Were Kayfabe to become part of our toolkit for the twenty-first century, we would undoubtedly have an easier time understanding a world in which investigative journalism seems to have vanished and bitter corporate rivals cooperate on everything from joint ventures to lobbying efforts. Perhaps the confusing battles between “freshwater” Chicago macroeconomists and Ivy League “saltwater” theorists could best be understood as happening within a single “orthodox promotion,” given that both groups suffered no injury from failing (equally) to predict the recent financial crisis. The decades-old battle in theoretical physics over bragging rights between the “string” and “loop” camps would seem an even more significant example, within the hard sciences, of a collaborative intra-promotion rivalry, given the apparent failure of both groups to produce a quantum theory of gravity.

What makes Kayfabe remarkable is that it gives us potentially the most complete example of the general process by which a wide class of important endeavors transition from failed reality to successful fakery. While most modern sports enthusiasts are aware of wrestling’s status as a pseudo-sport, what few alive today remember is that it evolved out of a failed real sport (known as “catch” wrestling) which held its last honest title match early in the 20th century. Typical matches could last hours with no satisfying action, or end suddenly with crippling injuries to a promising athlete in whom much had been invested. This highlighted the close relationship between two paradoxical risks which define the category of activity that wrestling shares with other human spheres:

• A) Occasional but extreme peril for the participants.

• B) General monotony for both audience and participants.

Kayfabrication (the process of transition from reality towards Kayfabe) arises out of attempts to deliver a dependably engaging product for a mass audience while removing the unpredictable upheavals that imperil participants. As such, Kayfabrication is a dependable feature of many of our most important systems which share the above two characteristics, such as war, finance, love, politics, and science.

Importantly, Kayfabe also seems to have discovered the limits of how much disbelief the human mind is capable of successfully suspending before fantasy and reality become fully conflated. Wrestling’s system of lies has become so intricate that wrestlers have occasionally found themselves engaging in real-life adultery immediately following the introduction of a fictitious adulterous plot twist in a Kayfabe back-story. Eventually, even Kayfabe itself became a victim of its own success, as it grew to a level of deceit that could not be maintained when the wrestling world collided with outside regulators exercising oversight over major sporting events.

At the point at which Kayfabe was forced to own up to the fact that professional wrestling contained no sport whatsoever, it did more than avoid being regulated and taxed into oblivion. Wrestling discovered the unthinkable: its audience did not seem to require even a thin veneer of realism. Professional wrestling had come full circle to its honest origins by at last moving the responsibility for deception off the shoulders of the performers and into the willing minds of the audience.

Kayfabe, it appears, is a dish best served client-side.

This essay was originally published in 2011 at