Wednesday, March 22, 2017

Aaronson's delusions about the universe as a simulation

Four days ago, I praised Sabine Hossenfelder's remarks about the hypothesis that our Universe is a simulation. It's rather clear that complexity theorist Scott Aaronson disagrees on some fundamental issues, as he wrote in his
Your yearly dose of is-the-universe-a-simulation,
and Aaronson is just completely wrong about all these points. Some of these two folks' views were mentioned at Gizmodo. Aaronson summarized the core of his opinion as follows:
In short: blame it for being unfalsifiable rather than for being falsified!
He claims that it's not a problem to reconcile the universe-as-a-computer with the Lorentz invariance, too. On the other hand, Hossenfelder (like your humble correspondent) emphasizes that all the predictions similar to "certain computer-like glitches, such as the failure of accuracy or continuity and deja vu cats" seem to be falsified. So at some imperfect but high confidence level, the "simulation hypothesis" has been ruled out. Aaronson doesn't like it and he's wrong.

Aaronson's thesis is a typical slogan repeated by people who don't have any clue about physics, especially state-of-the-art physics. It's no coincidence that this slogan sounds equivalent to Peter Woit's bastardized misinterpretation of Pauli's "not even wrong" (Pauli originally directed this phrase against David Bohm, along with criticisms that were a less detailed version of my criticism of the Bohmian philosophy). Just like the postmodern, feminist, or otherwise mentally crippled philosophers, Woit, Aaronson, and probably many others believe that modern physics resembles an "anything goes" territory where all ideas are equal, all ideas are allowed, and none of them contradicts the data.

The truth isn't just different. The truth is basically the exact opposite of these physically illiterate would-be thinkers' views. We know so much about the Universe – and, even more importantly, it's so hard to reconcile the principles that we have extracted from our piecemeal knowledge of the Universe – that virtually every alternative idea about the character of the fundamental laws that someone could invent may be falsified within seconds.

It's simply true that only effective quantum field theories – which differ from each other only in some details – are good enough to agree with the known species of particles, which interact through forces that at least slightly resemble the observed ones and which move in agreement with the laws of relativity. While the effective field theories don't seem to be the "final type of theory", it's very hard to go beyond that framework, and string/M-theory is the only known framework (and, I am convinced despite offering no concise complete proof, the only mathematically possible framework) to go "deeper" than the effective quantum field theories while not sacrificing the vital features of Nature that quantum field theories have managed to capture.

In the blog posts about Distler and causality, I discussed one level of the amazing constraints that the known principles of physics imply in combination. If you respect both the postulates of quantum mechanics and those of special relativity, you can't avoid admitting that objects capable of absorbing a particle are able to emit it, and vice versa. Antiparticles have to exist. Their interactions must be basically analytic continuations of the interactions of the original particles, as the crossing symmetry demands. All the "locally observable" quantities must be constructed as functions or functionals of quantum fields. Those evolve according to the Heisenberg equations of motion which are, in the simplest cases, the classical field equations with extra hats. These fields' commutators (or anticommutators) vanish at spacelike separation.

The condition that the quantum mechanical postulates are obeyed and the condition that the laws of relativity are respected may seem like two "independent" conditions. If your random idea can fulfill one of them, it can probably fulfill both, someone might think. It's like running a marathon twice. But the reality is different. These principles, like many other pairs of principles in physics, turn out to be "almost contradictory" but not quite. There is a very narrow room for possibilities and that's where valid theories of physics may live, that's where genuine physicists are looking carefully. The window for viable theories becomes even narrower if you demand that the theory describe the phenomena of quantum gravity consistently.

Those people who say that "almost all ideas that physicists study are unfalsifiable" – and, sadly, Aaronson is one of them – just don't understand any of these things. They don't understand how these relationships between the principles and methods to test them work. A related fact is that they have no clue how to actually falsify demonstrably wrong hypotheses. They're not physicists, they aren't capable of doing anything that is actually essential for a physicist, but they feel that they can make lots of far-reaching statements about the very foundations of physics, anyway. The reason is obvious: There are just not too many people who explain why these people are worthless piles of šit, who spit on them on the street, break their mouths, and throw them from all schools and similar places where such physically illiterate people have no right to oxidize. I could easily ask: Who is the second man in the world to actually act? ;-)

While rooting for ("arguing" would be too strong a word because this verb implicitly indicates some rational activity) his totally incorrect and plain silly claims that the Lorentz invariance can't possibly be capable of ruling out hypotheses such as "the Universe is a computer", Aaronson offers us an idiosyncratic interpretation of the entropy bounds in quantum gravity:
Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~\(10^{69}\) qubits per square meter of surface area. And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~\(10^{122}\) qubits.
I surely believe the AdS/CFT correspondence, at least in the most widely studied vacua, and I think that most of the Bekenstein bounds, depending on the exact formulation, are mostly either provably right or morally right. But I still think that virtually all the "implications" that Aaronson "derives" from these things are just absolutely incorrect.
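Incidentally, the \(10^{69}\) figure quoted above is easy to check by yourself. A quick back-of-the-envelope sketch (my own arithmetic, using standard CODATA values): the Bekenstein-Hawking entropy of a horizon is \(S=A/4G\) in natural units, i.e. \(A/(4l_P^2)\) nats of information per area.

```python
# Back-of-the-envelope check of the "~10^69 qubits per square meter" figure:
# horizon entropy is S = A / (4 * l_p^2) nats, with l_p the Planck length.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m / s

l_p2 = hbar * G / c**3               # Planck length squared, ~2.6e-70 m^2
bits_per_m2 = 1 / (4 * l_p2) / math.log(2)   # convert nats to bits

print(f"{bits_per_m2:.2e} bits per square meter")  # ~1.4e69
```

So the order of magnitude of the quoted bound is right; it's the interpretation in terms of fundamental qubits that I dispute below.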

First of all, there exists absolutely no reason to think that the information in the Universe is organized into qubits – actual base-two units of information. That would indicate that the Hilbert spaces describing physical systems around us fundamentally have dimensions that are powers of two. There's no positive reason to think so – and there are lots of reasons to be sure that it's wrong. If you look at the large but finite dimensions that appear in realistic theories, e.g. the degeneracy of a black hole (the Strominger-Vafa black hole, for example) at some values of the charges and mass, you will see very complicated numbers from "number theory", not powers of two.
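To illustrate the number-theoretic character of such degeneracies with a toy computation (mine, not from any particular paper): the number of integer partitions \(p(n)\) counts the level-\(n\) states of a single chiral boson, a baby version of the counting functions (coefficients of modular forms) that enter Strominger-Vafa-type degeneracies, and these numbers are essentially never powers of two:

```python
# Toy stand-in for black hole microstate counting: integer partitions p(n)
# count level-n states of a single chiral boson. These degeneracies are
# "number theory", not powers of two.

def partitions(n_max):
    """p(0..n_max) via the standard dynamic-programming recurrence:
    successively allow parts of size 1, 2, ..., n_max."""
    p = [0] * (n_max + 1)
    p[0] = 1
    for part in range(1, n_max + 1):
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

def is_power_of_two(m):
    return m > 0 and (m & (m - 1)) == 0

p = partitions(30)
print(p[:11])   # 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
# Aside from the trivial p(1)=1, p(2)=2, none of these degeneracies
# is a power of two:
print([n for n in range(3, 31) if is_power_of_two(p[n])])   # []
```

The actual string-theoretic degeneracies grow like \(\exp(c\sqrt{n})\) by the Cardy formula and are even further from powers of two; the point of the sketch is only that "natural" state counting produces arbitrary integers, not \(2^k\).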

In fact, I don't really agree with the statement that any finite-dimensional Hilbert space may be used to exactly describe any object that actually lives in the Universe (and is non-stationary, because it "lives"). You can't ever fully decouple systems with finite-dimensional Hilbert spaces from the rest. A region formally carries the entropy \(S=A/4G\) in quantum gravity but this statement is only meaningful to the extent to which you may define the "region". The boundaries of it are made of geometry that is quantum fluctuating – like everything else – so there's no reason to think that the value of the area \(A\) can be picked totally precisely with the precision of the Planck area, basically \(G\).

So the finite dimensions of the Hilbert spaces are just effective.

Let me give you a very specific example of what a finite entropy means in a situation that has been understood since the year 1900: Planck's black body radiation in a box. It has a finite entropy \(S\) – the finiteness was one reason why Planck found those "quantum" ideas in the first place (he solved the ultraviolet catastrophe, i.e. the apparently predicted infinite value of the total energy, and achieved a finite prediction from an improved theory). But that doesn't mean that the system is precisely described by a Hilbert space of dimension \(\exp(S)\).

Instead, if you describe a gas of photons – or even one photon! – in a box, you will need an infinite-dimensional Hilbert space. Even for one photon, you need modes with arbitrarily high momenta \(\vec k\). They may perhaps be quantized in some way but the number of possibilities is still infinite. Similarly, one Fourier mode of the electromagnetic field carries an infinite-dimensional space, too. It may carry \(N=0,1,2,3,\dots\) photons (or excitations, like any quantum harmonic oscillator) – the occupation number takes values in an infinite set.

The entropy comes out finite not because the number of possibilities – the dimension of the Hilbert space – is finite but because most of the states turn out to be very unlikely (at a finite temperature, the probability exponentially decreases with \(|\vec k|\) or \(N\) in our and similar examples), so only a finite number of low-energy states actually contribute a lot to the entropy \(S\). But infinitely many basis vectors contribute to \(S\) – it just happens that the sum of this infinite collection of terms converges. Unless you suffer from the same misunderstanding of basic calculus as Mr Zeno from ancient Greece, you know that the sum of an infinite number of positive terms is often convergent, i.e. finite (I am primarily talking about Achilles and the tortoise), and that's the situation we face basically everywhere in physics.
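The convergence is easy to see numerically. Below is a minimal sketch (my own toy choice of \(\hbar\omega/kT=1\)) of the entropy of a single field mode: infinitely many occupation-number states \(N=0,1,2,\dots\) contribute, yet the partial sums of \(-\sum_N p_N\ln p_N\) converge quickly to the closed-form answer for the geometric (Bose) distribution.

```python
# A single Fourier mode of the field at temperature T is a harmonic
# oscillator with occupation numbers N = 0, 1, 2, ... -- an infinite-
# dimensional Hilbert space. With Boltzmann weight x = exp(-hbar*omega/kT),
# the probabilities p_N = (1-x) * x**N are exponentially suppressed, so the
# entropy S = -sum p_N ln p_N is finite despite the infinite basis.

import math

x = math.exp(-1.0)   # hbar*omega / (k*T) = 1, an arbitrary illustrative value

def entropy_partial(n_terms):
    """Partial sum of -p_N ln p_N over the first n_terms occupation numbers."""
    s = 0.0
    for n in range(n_terms):
        p = (1 - x) * x**n
        s -= p * math.log(p)
    return s

# Closed form: S = -ln(1-x) - <N> ln x, with mean occupation <N> = x/(1-x)
s_exact = -math.log(1 - x) - (x / (1 - x)) * math.log(x)

for n in (5, 20, 50):
    print(n, entropy_partial(n))
print("exact:", s_exact)
```

Every one of the infinitely many terms is positive, yet the total is finite – exactly the Zeno-style convergence described above.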

There are just no natural finite-dimensional Hilbert spaces that would accurately describe any complex enough object moving in the Universe – and one photon and/or one Fourier mode of a quantum field is already "complex enough"! Finite-dimensional Hilbert spaces aren't usable for any realistic physics. That's just how things are. They're only useful to describe pieces of physics (that must be coupled to an infinite-dimensional rest), or as toy models, or as mathematical descriptions of man-made systems deliberately invented and created in this finite-dimensional fashion in order to be "simple and controllable" in the sense of digital computation. But those are only good for applied physics, e.g. the design of computers, not for the understanding of the fundamental laws of physics.

This isn't just some random guess or a matter of someone's opinion. This is a summary of what we know about the fundamental laws of physics. Whoever disagrees with me is completely ignorant about modern physics.

So "small" objects within the Minkowski space still require us to work with infinite-dimensional Hilbert spaces and finite-dimensional spaces are only "roughly enough" when we're satisfied with some approximations. The finite dimensions are at most effective, they are never fundamental. Fundamental dimensions of Hilbert spaces for anything that may move through an infinite empty space are always infinite.

De Sitter space isn't an infinite space – and doesn't seem to be embedded within an infinite space – so it could be more likely that a finite dimension similar to \(\exp(10^{122})\) could indeed be enough to describe everything that happens in our Universe. (The number is effectively infinite for all practical purposes.) But we don't actually know of such a fundamental description with a finite-dimensional Hilbert space. My adviser has been intrigued by the observation that the "quantum deformation of the de Sitter isometry group" has finite-dimensional unitary representations. I am less intrigued and my opinion is different: the fundamental description of physics in a de Sitter space must start with an infinite-dimensional Hilbert space as well – and the finite dimension must only be "effective", much like in the case of photons in a box or any other example of this kind you can think of.

In other words, if there are any well-defined laws or equations governing quantum gravity in de Sitter space, these laws will only approximately hold within a finite-dimensional Hilbert space realization, I think.

And even if the finite-dimensional Hilbert spaces were enough, a digital computer couldn't describe the probability amplitudes precisely.

These inaccuracies could always show up because in quantum mechanics, there are lots of observables that may become very important in some situation, and these observables depend very sensitively on the constructive and destructive interference among a very large number of probability amplitudes (in Nature, as I argued, an infinite number of amplitudes), which must therefore be stored very accurately. Also, if you tried a JPG-like clever or "compressed" way of storing them, you would find out that the number of "emergent features" that any pre-existing JPG-like scheme takes into account is too low relative to the emergent features that quantum mechanics may make relevant.
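A minimal numerical illustration of that sensitivity (a toy model of mine, not anything Aaronson wrote): amplitudes arranged to cancel exactly stop cancelling once each amplitude is stored with finite precision, so the "dark fringe" of an interference pattern is replaced by rounding noise.

```python
# Many unit-modulus amplitudes that interfere destructively: rotated N-th
# roots of unity, whose exact sum is zero (a geometric series). Storing each
# amplitude with only two decimal digits per component, as a low-precision
# "computer memory" would, destroys the cancellation.

import cmath

N = 10001        # odd, so the terms don't pair up into exact opposites
phase0 = 0.3     # arbitrary global phase offset

terms = [cmath.exp(1j * (2 * cmath.pi * k / N + phase0)) for k in range(N)]

exact = sum(terms)     # zero up to double-precision roundoff
rounded = sum(complex(round(t.real, 2), round(t.imag, 2)) for t in terms)

print(abs(exact))      # essentially zero
print(abs(rounded))    # orders of magnitude larger: the rounding noise wins
```

The residue of the rounded sum is set entirely by the storage precision, not by the physics, which is the sense in which a finite-precision simulation generically makes wrong predictions for interference-sensitive observables.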

Also, a whole blog post about the foundations of quantum mechanics should be inserted here, I've written many of them: the computer representation of a wave function isn't really the same thing as the wave function. The computer memory remembering the complex probability amplitudes pretends that the wave function is an objective classical field. But it's not. A wave function is just a collection of probability amplitudes – quantum complex generalizations of probability distributions where complex phases are added and all the relative ones matter. And like the probability distributions, the probability amplitudes – and the wave functions packaging them – are dependent on the choice of the observer. So "one classical simulation of the whole Universe" just contradicts the basic established facts about quantum mechanics.

But even if you were talking about a quantum computer with qubits – whose conceptual foundation is fully quantum mechanical – it won't work too well. Just realize that even a system as simple as one photon or one Fourier mode of the electromagnetic field has an infinite-dimensional Hilbert space, and there's obviously no natural or elegant way to embed an infinite-dimensional Hilbert space into a finite-dimensional one. Well, there's really no way at all ;-) but I want to say something that applies even morally. No truncation of the infinite-dimensional space is natural. There is no good rule for "at what \(\vec k\) or \(N_\gamma\) you should truncate".

When a computer (or even a quantum computer) simulates a physical system, you may always see that it's not the "real thing" because the computer has to make some sacrifices due to its finiteness and there are many (ugly) ways to make these sacrifices. You may in principle find these sacrifices and imperfections and see that they could have been done in very many different ways – which proves that the computer program is just a caricature or a forgery, not the real fundamental law of physics. The real fundamental laws of physics don't admit zillions of ways to make the compromises and sacrifices. Actual laws of physics as we know them are highly constrained and almost or completely unique; computer programs with approximations are messy and non-unique. We know that the world around us is vastly closer to the former.
...Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~\(2^{10^{122}}\) time steps.
But the "observation" that with more than a googolplex of steps on a computer with one googolplex of transistors costing a googolplex of dollars, one could approximate something with some good accuracy isn't any actual evidence whatsoever that this is how the Universe operates. It's just a vacuous tale. Aaronson doesn't give any evidence that someone has ever had a googolplex of dollars or transistors – and all the other prerequisites – so he is just replacing an implausible tale by another, specific one that is even more implausible. It's (even) vastly less likely because Aaronson has added additional assumptions.

After all, the human brain could sometimes act as a quantum computer – although I think it's not the case. But some unusually convincing ideas why the human brain could be a quantum computer have been proposed. If our brains act as quantum computers, they're obviously not equivalent to a (classical) Turing machine. Physics and neuroscience research is needed to settle similar questions.

The situation isn't just neutral. If one thinks as a scientist, he or she can see that the theories of Aaronson's type make a huge number of very general, "soft but almost hard", predictions that simply disagree with the observations. If one assumes a program like that, one simply has to consider all possible programs that could run on the googolplex of transistors, assign some probabilities to each of them, and determine the probabilities that some observation will end up with one outcome or another in the simulated (unnatural) "Nature". And when you do so, you will simply end up with insanely low probabilities for most observations, even the most trivial ones.

The fact that you get low probabilities by considering an ensemble of all theories that are "almost equally good" according to your paradigm – in this case the paradigm that the Universe is a simulation – is absolutely important in all of rational thinking. You could conjecture that you don't have any parents – you were randomly born as a thermal fluctuation in the center of the Oval Office and escaped the White House as a ghost, before you started to live normally and read this blog. It's possible but it's very unlikely. That's why we exclude this possibility. You may really estimate the probability and you get some tiny number. If the number is really tiny, then the theory must be considered ludicrous or wrong.

The case of "the universe as a simulation" finds itself in a situation that is almost the same as the hypothesis that you were randomly created out of thin air in the Oval Office. The probability that all the digital pieces conspire so that they resemble a beautiful spacetime according to string theory (or a QFT, if I have to write this blog post for semi-educated people as well) is simply tiny because there are so many other programs with different properties that Ms Simulator could have written. Our Universe doesn't quack, smell, and looks like the Duck game because it's probably not a Duck game.

If the Lorentz-invariant Standard Model were just "emergent" and coming from a deeper theory that also has the potential to produce Lorentz violations, you would have to ask why e.g. the 46 coefficients of Coleman and Glashow in front of the CPT-even but Lorentz-violating terms deforming the Standard Model seem to be so tiny. The probability that all of them are this tiny relative to the "natural estimates" is super-tiny. And that should mean that you correspondingly disfavor or abandon all theories that assume that Nature fundamentally does something like that. It's exactly the same logic as why you ignored the possibility that you're a ghost from the Oval Office. The probabilities computed according to a sensible formula – almost any formula that follows from some rules you wouldn't be ashamed of calling candidate laws of Nature – are ludicrously tiny. Gell-Mann's totalitarian principle – an application of Bayesian inference – says that whatever is not prohibited will happen. If the Lorentz violation isn't fundamentally prohibited, it should be seen around us. But it's not. If this fact doesn't affect your thinking about the foundations of physics, then you are simply ignoring the empirical evidence. You are not doing science.

Indeed, with an intelligent designer, you may conjecture that there is a higher authority that deliberately tried to make our simulated world look like proper string theory in a smooth spacetime. But this loophole has obvious drawbacks. It's exactly on par with religions. The Lord from the Bible has surely arranged the dinosaur bones in the way we observe them in order to have some fun, after the week-long drudgery when He created the 70 or so species. Moreover, if you seriously believe this explanation – that Ms Simulator did the job perfectly – then your actual physics research should still be perfectly equivalent to the research of string theory. Once you admit that the simulated character of the "foundations of our world" is totally carefully hidden, there's no reason to study it by science because you just can't ever uncover it. You just said it. So from the scientific viewpoint, our Universe simply isn't simulated.

Again: You either admit that the apparently continuous foundations of physics hold precisely, and the science based on these assumptions will completely ignore the discrete and related effects because they're unobservable even in principle. Or you say that these things are in principle observable and your "physics" will be all about the predictions of these computer glitches. A deja vu cat should be seen here or there, an agent should suddenly clone himself, and so on. All your hypotheses will be constantly ruled out and your life will be – and the history of all your soulmates has been – one giant failure because there's clearly no sign that any of these predictions works. But you just can't eat a pie and have it, too. You can't pretend that these computer-like effects are both scientifically relevant and not lethal for your philosophy. They're either scientifically irrelevant or proving that you are wrong.
Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity.
AdS/CFT is a mathematical theorem – more or less proven at the physicist's level of rigor – about theories closely related to physical theories we need in modern physics. But it is not directly applicable to the world around us because the world around us is approximately a de Sitter space, not anti de Sitter space. So this comparison with the special theory of relativity – which is very important for everyday observations by particle physicists and others – is misleading.
But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot. The question becomes: do you reject the Church-Turing Thesis?
This question places Aaronson squarely among the crackpots who constantly talk about the importance of Gödel's theorem in physics. These topics from set theory, axiomatic foundations of mathematics, and computer science just don't have and can't have any relevance for physics whatsoever. I've written blog posts explaining this point e.g. when it comes to undecidability but let's focus on the (inequivalent but coming from similar thinking) Church-Turing thesis here.

The Church-Turing thesis states that the functions computable by a human being using an algorithm (while ignoring finiteness of resources) form the same set as the set computable by a Turing machine – which is basically known to be equivalent to every ordinary computer. So thinking humans and computers may achieve the same things.

Now, Hossenfelder is being asked: Is it true? Is it false? Much more important in this debate is that this question is 100% irrelevant, i.e. off-topic. Aaronson's question about the Church-Turing thesis is a question about computer science that is, moreover, strongly sensitive to the precise definition of the words in that sentence. But computer science may be phrased either as a branch of pure mathematics – and physics obviously respects and has to respect general facts provable by mathematics at all times – or as a branch of applied science dealing with the production of computers and programs.

In both cases, it just doesn't have anything whatsoever to do with the fundamental laws of Nature. These mathematical questions are either "much deeper" than the fundamental laws of physics – or much more shallow. Again, if you view the study of the Church-Turing thesis as a branch of pure mathematics, it just restricts the field within which physicists may operate – just like 1+1=2 restricts what physicists may say. But if you don't have any explicit proof that a physical theory such as the Standard Model or string theory violates a mathematical fact and is therefore inconsistent, then you shouldn't pretend that the mathematical fact is relevant.

On the other hand, if you view computer science as a branch of applied physics, the question is obviously a derived one. You may deduce the answer by thinking mathematically and assuming lots of things about humans and computers that follow from the fundamental laws of physics such as string theory. If string theory along with mathematical theorems and computer-science-style considerations implies that the answer to Aaronson's question is Yes, then it's Yes. If it implies that it's No, it's No. But the logical relationship cannot go in the opposite direction because humans and computers are made out of elementary fields and particles or strings (or whatever elementary objects fundamentally exist in physics), not the other way around.

Once I have explained why the people interested in the Church-Turing thesis aren't doing fundamental physics, let me try to address it. I think that with a reasonable definition of words such as "algorithm", the statement is tautologically true. A human who is following an algorithm according to the usual definitions is basically emulating a computer, so of course he may solve the same set of problems as a Turing machine. That's the end of the story. A human is usually more fallible and less careful than a computer and that makes some difference. Sometimes, human solutions are wrong. Humans may deliberately cheat – while computers mostly cheat because they were programmed to cheat by a human etc.

The reason why this conclusion "it's a trivial tautology" might be wrong in the real world is that humans may perhaps think in other ways that aren't well simulated by man-made computers (i.e. Turing machines). Maybe some of the human intuition uses physical processes (perhaps those similar to those on analog computers) that are not running within Turing machines. If you allow "algorithms" to include some of this thinking that humans are capable of doing but they don't even understand how they did it – and they may be incapable of imprinting this skill onto a computer – then the statement could be false, too. Humans could be capable of calculating many additional things that Turing machines can't compute.

Great. If you define the terms in that sentence – especially "the algorithm performed by a human" – carefully, you will know whether the answer may depend on the laws of physics (and biology) or whether it is purely mathematical. And once you exactly understand the question, you may try to give evidence for one answer or another or find a proof of one answer or another. But it's totally obvious that the answer to the question whether the Church-Turing thesis is right is a derived one. You can't start with an answer and then claim that it implies something about physics. It simply cannot. The question is either totally decoupled from physics if it only talks about mathematical structures that exist independently of the physical world; or it depends on the properties of the laws of physics but these laws of physics are among the "causes" while the answer to the Church-Turing question is a "consequence". It just can't be the other way around.

But it's clear what's going on here. Aaronson just isn't thinking rationally, scientifically, or impartially at all. He wants to impose one answer to the Church-Turing thesis as a dogma, with no evidence, and then he demands that the rest of science "adapt" to this answer – even though he can't logically connect it with the other questions. He just doesn't seem to have understood the concept of being open-minded about questions that aren't settled. I've noticed the same in the context of Aaronson and the \(P=NP\) dilemma. He is clearly religious about certain things and defends \(P\neq NP\) as irrationally as religious believers fight against infidels. \(P=NP\) would be the ultimate heresy leading people to hell, while one can write a tale about anything that is compatible with \(P\neq NP\) – these are basically his two main "arguments".

Sorry, science doesn't work and cannot work in this way, Scott. When questions are obviously composite or derivative and therefore "non-fundamental", they can't be used to push the discussion about the fundamental laws in one way or another.
Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems?
Aaronson is self-evidently fighting a straw man here. None of the things that Hossenfelder defended has anything to do with the halting problem so Aaronson has only included this sentence as an equivalent of "you must be wrong because you must also believe that 1+1=3". There is actually no indication anywhere that Hossenfelder should believe 1+1=3 – even though she does believe in many wrong things.

The halting problem is a purely mathematical exercise. The question is: Is there an algorithm HPS (halting problem solver) that takes as input the finite code of any computer program in a given language and calculates, within a finite time, whether this program will halt or run indefinitely?

In 1936, Alan Turing carefully defined the algorithms etc. and rigorously proved that the answer is No. The proof is analogous to Gödel's proofs – or Cantor's proof of the uncountability of the real numbers, for that matter. The punch line of the proof is that if an HPS existed and could spit out the result in a finite time, one could build a program that, fed an input similar to itself, would halt exactly if the HPS predicted it wouldn't, and vice versa, which is a contradiction. So an HPS cannot exist. There can't be a program that solves the question "does a code run indefinitely" in full generality. So it's a mathematical theorem and the claim "an HPS cannot exist" is exactly on par with 1+1=2. So why does Aaronson talk about it at all? It's just absolutely stupid. Hossenfelder hasn't made any statement about the halting problem so she couldn't possibly have contradicted Turing's proof that an HPS doesn't exist.
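For readers who prefer code to prose, Turing's diagonal argument can be sketched in a few lines (a standard textbook illustration of mine, not from Turing's paper; the `halts` function is of course hypothetical and provably impossible):

```python
# Sketch of Turing's diagonal argument. Suppose halts(program, data) were a
# perfect halting-problem solver (HPS). Then the program below halts on its
# own source exactly when the HPS says it doesn't -- a contradiction, so no
# correct implementation of halts() can exist.

def halts(program, data):
    """Hypothetical HPS: would return True iff program(data) eventually halts.
    Turing's theorem says no correct general implementation exists."""
    raise NotImplementedError("provably impossible in full generality")

def diagonal(program):
    # Do the opposite of whatever the would-be HPS predicts about
    # the program run on its own code.
    if halts(program, program):
        while True:      # HPS said "halts" -> loop forever
            pass
    else:
        return           # HPS said "loops forever" -> halt immediately

# Feeding diagonal to itself: diagonal(diagonal) halts if and only if
# halts(diagonal, diagonal) returns False -- contradicting the assumed
# correctness of halts(). (We don't actually call it here, for obvious reasons.)
```

Note that nothing in this argument mentions photons, Hilbert spaces, or Lorentz invariance; it is pure mathematics, which is the point of this section.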

Aaronson's incorporation of this topic into the discussion is self-evident demagogy.
If so, how? But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?
What the hell is rotting in the broad cavities of your f*cked-up skull, Aaronson? The Universe avoids being "computational" simply by obeying laws of Nature that are not computational. That's it. The laws of string theory or general relativity or the Standard Model obviously aren't inconsistent with mathematical theorems such as "an HPS doesn't exist" or "1+1=2". If they were inconsistent with mathematical statements, we would say that these laws are "internally inconsistent".

Quantum field theories, to pick a specific example, are demonstrably consistent. The hardest part isn't some ludicrous "liar paradox" of recreational mathematics but the proof of renormalizability – the one that e.g. 't Hooft and Veltman received their Nobel prize for. The proof of the consistency of a theory has nothing to do with the childish ideas that Aaronson would love to spread in physics. And the physical theories we know are known to be consistent, which means that they can't possibly contradict mathematical theorems such as "an HPS cannot exist".

In other words, Aaronson's claim that Turing's theorem mathematically implies that all consistent laws of physics must be "computational" – the Universe has to be a computer – is self-evidently invalid. We know tons of counter-examples to this invalid proposition. In fact, all theories that physics has ever known or studied are counter-examples to Aaronson's über-idiotic proposition.

The wrongness of Aaronson's claim should be obvious even to 10-year-old schoolkids who are trying to learn physics. By the way, note the delicate phrase "in the broad sense of the term". It's very clear that Aaronson has used this mysterious phrase as a synonym of "if you abandon everything you know and all logic and pretend that you think that I am not a complete idiot". I can't imagine any other meaning of a "broad sense" in which the Standard Model would be either inconsistent with mathematical theorems or computational and resembling a computer simulation. The Standard Model is clearly neither even in the broadest sense you could think of. But even if you invented some "very broad sense" in which the Standard Model would either contradict mathematical theorems or resemble a computer, what would this "very broad sense" be good for? This "breadth" would mean nothing else than that all these claims and terms became absolutely ill-defined and worthless, along with everything you say about them.
I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.
There may be interesting ideas of this type but such meetings carry risks. This particular meeting and some others seem to be a deliberately engineered encounter of people from two disciplines who are told: "Your air tickets have been paid and you have spent a lot of time here, so you must prove that the disciplines are really friendly, close to each other, and the area in between is fruitful." The only problem is that this "dogma" doesn't have a reason to be right. It may be equally wrong – and most similar attempts to hybridize disciplines surely lead nowhere.

One can't remove the different status of the interesting mathematical structures that dominate physics – various continuous structures with continuous symmetries, differential equations etc. For physicists, they're what the fundamental laws are all about. For computer scientists, they're just some idealizations that may approximately coincide with some calculations they are interested in. You shouldn't mix these two fields because they really have totally incompatible assumptions about the importance of some structures.

For computer scientists, strictly continuous and smooth mathematical structures are uninteresting or unrealistic because they can't get them with their real-world computers. For physicists, strictly continuous and smooth mathematical structures are the bulk of science and they have a near monopoly on describing Nature at the fundamental level because almost all the evidence makes it clear that Nature actually runs on these structures at all times. So there is a moral contradiction. The contradiction isn't "sharp" because physicists and computer scientists are talking about different things – about Nature and computers, respectively. But that's why the mixing of their views isn't likely to be helpful.

Physicists may study tensor networks with finite numbers of observables etc. I think that all of those that have been produced so far are at most toy models created to make a point. But, to state the point really carefully, any truly physically relevant version would have to be compatible with the existence of the continuous structures.

What may happen is that some physicists get brainwashed by the computer scientists and start to forget their physicists' common sense. I do think that some questions and hypotheses linking black holes and complexity were somewhat interesting. But Susskind is at risk of losing the physical common sense himself. Preskill – whose accidental combination of a physics and computer science background makes him a natural glue between the communities – tweeted:

Is there some law (basically in computer science) that protects black hole horizons?

There's some sign here that Susskind could want to "assume an answer" in the same fallacious way that Aaronson often exhibits. You know, the laws of physics must be compatible with the observations and internally consistent. But do they imply that a black hole is "protected" against something? Here, one would carefully have to say what the "threat" is that the black hole horizons should be protected against. Clearly, by the protection, Susskind means some "impossibility to look inside a black hole by performing a calculation on the data measured outside". And he wants to seemingly reasonably assume that as the black hole is getting older, it's harder to "decode it" microscopically.

But you know, it's equally possible that black holes are not protected in this way and one can make a complicated correlated measurement+calculation outside the black hole whose results are bound to be equivalent to a measurement inside, right? Or after some time, something about a black hole may become "easier to calculate again" than a minute earlier. This clearly doesn't automatically mean an inconsistency of a theory. A new law for the complexity analogous to the second law for the entropy growth is a hypothesis, not a fact. Now, Susskind does ask a question so he formally admits it's a hypothesis, not a fact. But he's clearly pushing the people to pick one of the answers even though the theory hiding behind the opposite answer could exist and be equally if not more interesting.

Classically, black holes make it causally impossible to escape from a region of the spacetime but the Hawking radiation is already enough to see that this "ban" cannot be quite as absolute in the quantum theory as it was in the classical general theory of relativity. So with some big apparatus and a computer surrounding a black hole, you could perhaps effectively see inside, too. The yelling that "it's a heresy" just isn't science. You don't have a proof that this answer is impossible so you shouldn't pretend to have one.

Incidentally, I think that half a century ago, Roger Penrose was pushing the "Cosmic Censorship Hypothesis" equally dogmatically. He preached that the naked (covered by neither a horizon nor a hijab) singularities can't ever evolve in consistent laws of gravity because that would mean that classical GR becomes unpredictive when they're born and they could affect the rest of the Universe according to some unknown laws that classical GR can't capture. Well, the most direct formulations of the hypothesis – treated as a mathematical theorem within GR – were proven wrong quickly. But even very weak versions seem to be false – with some most interesting examples in higher-dimensional theories of gravity. Penrose's hypothesis seems to be even morally wrong. After all, there was never any reason to think that GR should always be enough to predict everything about black holes in the real world. Due to the singularities etc., it's really an inconsistent theory if you're stringent enough and the detailed laws of quantum gravity may matter and probably do matter.

I think it's fine that Penrose had formulated the hypothesis but I don't think it was right that for decades, people were pushed towards one of the two possible answers without any positive evidence.
It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin.
Social events like this one can be fun but I don't really believe that Sergey Brin, to pick the most eye-catching name here, is capable of contributing to the research of the black hole information puzzles.

Aaronson tells us that they mostly avoided the discussion of the "simulation hypothesis" there and:
All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.
Maybe you should reduce your confidence or self-confidence by some 122 orders of magnitude if this is the only thing you can say about physics. It's nice that Aaronson knows that quantum mechanics uses complex numbers and I have praised him for basically understanding these matters correctly.

But the complex character of the wave function and the usage of the 2-norm (a trivial thing) are really just a tiny portion of modern physics, and the reasons why they're mathematically needed within quantum or quantum-like physics are basically trivial (and the proofs have nothing to do with the psychoanalysis of any divine entity who could program our world – which is what Aaronson explicitly suggests they should incorporate).
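A minimal numerical illustration of the 2-norm's special status – my own sketch, not Aaronson's – is that unitary time evolution preserves the 2-norm of a quantum state while generically changing its 1-norm:

```python
import numpy as np

# Unitary evolution preserves the 2-norm of a state vector but not its
# 1-norm, which is one trivial reason why quantum mechanics pairs complex
# amplitudes with the 2-norm. Dimensions and seed below are arbitrary.

rng = np.random.default_rng(0)

# A random complex state on 4 basis vectors, normalized in the 2-norm.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# A random unitary: the Q factor of the QR decomposition of a complex
# Gaussian matrix is unitary (Q.conj().T @ Q = identity).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

phi = U @ psi
print(np.linalg.norm(psi), np.linalg.norm(phi))        # both approx. 1.0
print(np.linalg.norm(psi, 1), np.linalg.norm(phi, 1))  # generically differ
```

The analogous statement with real amplitudes and the 1-norm singles out classical stochastic matrices instead – which is the dichotomy Aaronson alludes to.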

If the sentence above summarizes all the things you know about quantum gravity or modern physics in general, you should really shut up, Aaronson, because you're almost completely iphysicate (i.e. physically illiterate and physically innumerate). The complex character of the wave function is fine, but where's your Poincaré symmetry? Locality and causality? Reversibility of the equations? The ergodic theorem? The laws of thermodynamics and statistical physics? Classical field theory? Quantum field theory? Gauge symmetries? Renormalization? Anomaly cancellation? Physics – and perhaps, shockingly enough, even Sabine Hossenfelder – knows many more things about the laws of physics than you do, which is why your proclamations about "what's easy or just fine" in cutting-edge physics inevitably end up being a stream of stupidities by a layman, and it's outrageous that you pretend to be more than just another iphysicate layman.

Appendix: Aaronson's Backreaction comment

Under Sabine's blog post, Aaronson wrote a comment that looks really, really dumb. It starts with the following paragraph:
The notion that we have solved problems that can't be solved by algorithms seems suspect. What counts as a "solution?" As long as it can be described and verified in finite time, it can be found (after a very long time) by a simple trial and error algorithm. And if it can't be described and verified in finite time, then how can we claim to have solved it?
Do you understand what silly point this paragraph boils down to? It boils down to the fact that the two disciplines mean different things by a "solution". Aaronson could also invite a chemist who would protest against the claims that physicists or programmers have found any solutions. How could they have found solutions if they haven't bought any ethanol or water in which the solution could be made? ;-)

In physics and, more generally, in natural science, we are "solving problems" by accumulating evidence and constructing and filtering theories and their applications and proofs that give answers to individual questions or whole classes of questions. And other physicists may reproduce what the previous ones have done or rediscover it and they clearly do so in finite time.

Are physicists using "algorithms" to solve the problems in physics? What a stupid question. It depends on how exactly you define an algorithm. The mental activity involves some steps that may be described explicitly and some steps that are mysterious and dependent on the people's genius – like the guessing of viable hypotheses. But you can be sure that what physicists are doing is not literally the same thing as what a programmer or his computer is doing when performing the steps of some typical real-world algorithm. So if your definition of an algorithm or a solution is narrow and specific enough, physicists (and other natural scientists) aren't playing this game at all. Why should they? What an utterly idiotic "demand" to ask that physicists must behave like computers that run an algorithm.

Are physicists looking for answers to physics questions by a "trial and error algorithm"? Is Aaronson joking? Does he think that Newton was combining letters randomly before they crystallized into "GRAVITATIONAL FORCE DECREASES AS THE SQUARED DISTANCE", like in the infinite monkey theorem? Einstein would have had an easier job finding \(E=mc^2\), except that with Aaronson's "trial and error method", he would have had no idea what he had discovered. Is Aaronson seriously suggesting that physics or science research could get somewhere with this "trial and error method"? Haven't you forgotten your medications, Scott?

Physics and natural science obviously have nothing to do with "this kind of solving process" at all. A "solution" comprehensible to narrow-minded peabrains of Aaronson's caliber simply means one element of a predetermined set of possibilities – encoded as some data file, a graph, a path of a traveling salesman in a graph, whatever. But most of the time, physics and science aren't doing anything of the sort. We haven't been given any predetermined set of all possible answers to physical or scientific questions. We haven't been given and couldn't have been given any complete list of possible theories. The task of discovering QCD was nothing like the search for one of the paths out of a maze. A century earlier, people couldn't have even imagined most of the concepts we are using and they couldn't possibly have believed that those would become relevant in science. And even if we were given a list of theories, it would clearly be infinite, and the theories that actually explain anything about Nature need pages of text to be presented in any useful way. These texts obviously include definitions of new terms that are useful and that normal people couldn't have known before they started to read.

So to solve the problems that quantum field theory solves, you really need to write some papers or textbooks on quantum field theory, after you have done or evaluated the experiments and thought carefully and successfully. Hundreds of pages that are full of letters. Do you really want to search through all sequences of letters (or words) to solve physics problems, Aaronson? Holy cow. I just find it utterly unbelievable how a human being in 2017, let alone an adult, let alone an adult in the West, let alone an American, let alone a Jewish American, let alone a Jewish American that is at MIT or UT Austin faculty, may be so absolutely disconnected from the world and ignorant about absolutely basic things e.g. what rational thinking (such as one in sciences) amounts to.

This turns Aaronson's religious belief that \(P\neq NP\) is "mandatory" into one of the most moderate examples of his irrationality. Of course, if he considers the brute-force search through all possibilities to be the "method of choice" that everyone should use whenever it may be used in principle, he must believe that all problems would be solvable whenever the brute force methods are possible in principle. But in reality, even if they're possible in principle, they're just not usable in practice and almost all the time, people use vastly more effective, different methods. A competent computer scientist must know it from his own field. There isn't any algorithm, especially trial-and-error algorithm, to write publishable papers similar to Aaronson's, is there? Why would he think that such an algorithm could work in physics which is a much more abstract field demanding bigger fantasy?
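Just to quantify how hopeless the letter-by-letter enumeration is, here is a back-of-the-envelope sketch. All the numbers are my own illustrative assumptions: a 27-symbol alphabet (letters plus space) and an absurdly generous verifier checking a quintillion candidate texts per second:

```python
# How long would it take to enumerate and verify every text of a given
# length? The constants below are assumptions chosen for illustration.

ALPHABET = 27                  # 26 letters plus a space
CHECKS_PER_SEC = 10**18        # a wildly optimistic exascale verifier
SECONDS_PER_YEAR = 3.15e7

def years_to_enumerate(length):
    """Years needed to enumerate all strings of the given length."""
    return ALPHABET**length / CHECKS_PER_SEC / SECONDS_PER_YEAR

for n in (20, 40, 60):
    print(n, f"{years_to_enumerate(n):.3e} years")
```

Already at 20 characters – not even one line of a textbook – the enumeration takes over a thousand years; at 60 characters it dwarfs the age of the Universe by some fifty orders of magnitude. A physics paper has hundreds of thousands of characters.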

The second paragraph starts by a sentence that is no better:
Proving a problem unsolvable isn't the same thing as solving it!
Sorry if this weird sentence is weird just because of some misprints, but among rational people, the former is a subset of the latter. For a physicist, proving that something is undoable, e.g. unsolvable, is an answer that may become the adopted solution to a problem. For example, we may solve the problem of how to accelerate a spaceship to a speed that exceeds the speed of light. Well, the problem is unsolvable because the speed of light is the ultimate cosmic speed limit. And this fact is a solution to this question about superluminal speeds and to lots of analogous questions you may pose. It's a solution because once you understand why the principle is correct, the question is settled and you no longer try to solve it.

In the same way, the Heisenberg uncertainty principle is a solution to the problem of how to measure \(x\) and \(p\) of a particle simultaneously. It just cannot be done. How can you predict the place on the photographic plate where a particular photon lands? The problem is unsolvable. The individual outcomes of experiments that aren't guaranteed with certainty are unpredictable. The laws of Nature are intrinsically probabilistic. And this fact is a solution to all questions in physics that ask about determinism in Nature. And I could continue for hours. The fact that Aaronson wouldn't accept these answers as "solutions" only shows that he is an idiot, not that there is something wrong about physics.
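The quantitative form of the principle, the Robertson inequality \(\sigma_A\,\sigma_B\geq \frac{1}{2}|\langle[A,B]\rangle|\), is easy to check numerically – here, as my own illustration, with two Pauli matrices as the incompatible observables and a generic qubit state:

```python
import numpy as np

# Numerical check of the Robertson uncertainty relation
#   sigma_A * sigma_B >= |<[A, B]>| / 2
# for the incompatible observables sigma_x, sigma_y and a generic qubit.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

# An arbitrary normalized qubit state (angles chosen at will).
psi = np.array([np.cos(0.3), np.exp(0.5j) * np.sin(0.3)])

def expval(op, state):
    """Expectation value <state|op|state> of a Hermitian operator."""
    return (state.conj() @ op @ state).real

def sigma(op, state):
    """Standard deviation of the observable in the given state."""
    return np.sqrt(expval(op @ op, state) - expval(op, state)**2)

comm = sx @ sy - sy @ sx  # the commutator [sx, sy] = 2i*sz
lhs = sigma(sx, psi) * sigma(sy, psi)
rhs = abs(psi.conj() @ comm @ psi) / 2
print(lhs >= rhs)  # True: the uncertainty bound holds
```

No choice of the state can make the left-hand side drop below the right-hand side; that is exactly the sense in which "measure \(x\) and \(p\) simultaneously" is proven unsolvable rather than solved.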

The deluded continuation of the comment mixes the halting problem solutions and non-solutions with respect to Bostrom's and Penrose's fantasies in increasingly incoherent ways. My adrenaline level runs too high. I have read the full comment but I don't think it's safe for me to reply to all of it because the concentration of stupidity in Aaronson's comment exceeds all the legally allowed thresholds.
There are some moves that you don't ever see in a pro game because they end badly. But knowing that they end badly requires playing them out!
Right, you and Bostrom did so and you lost after the third move, checkmate. Not just that. Bostrom's king's scrotum ended up in Aßonron's aß, on top of additional twists and turns. It was just totally hopeless and every at least infinitesimally reasonable person wants to move on.
