This week’s New Scientist has an article by Jim Baggott and Daniel Cossins entitled Beyond Experiment: Why the scientific method may be old hat, which deals with the recent controversy over attempts to excuse the failure of string theory by invoking the multiverse. The article (unfortunately behind a paywall) does a good job of describing the nature of the controversy: what do you do when it becomes clear your theory can’t be tested? Do you follow the conventional scientific norms, give up on it and work on something else, or do you try and find some kind of excuse, even if it means abandoning those norms?
Much of the article deals with the issues raised at the recent Munich conference (discussed here). Two of those quoted (Dawid and Gross) are not multiverse partisans, but instead argue that the motivations that got people interested in string unification more than 30 years ago are good enough to justify pursuing the theory indefinitely, no matter how bad the prospects for any connection to experiment look. On the other hand:
Their enthusiasm is far from universal, and some physicists are downright alarmed. Woit warns that the need for empirical vindication could be pushed so far into the background as to be invisible. Carlo Rovelli, a theorist at the University of Aix-Marseille in France, believes that this scenario has already come to pass. Rovelli … argues that the last thing we need is a system that legitimises failed theories. “A theory is interesting when it teaches us something new about the real world,” he says. “Not when it becomes a house of cards that delivers nothing but university positions.”
On the question of the string theory multiverse as science, those gathered at the Munich conference were pretty uniformly hostile. The article quotes only one proponent, someone who wasn’t there:
Sean Carroll, a theorist at the California Institute of Technology in Pasadena and a leading advocate of the multiverse, insists that if anyone is being unscientific, it is those physicists who seek to enforce outmoded philosophical principles and impossibly high standards. “People support these theories because they offer the best chance of providing a useful account of the data we actually do collect here in our universe.”
I’m not sure how the string theory multiverse provides an account of data we have collected that is “useful”, except in the sense of “useful to those who don’t want to give up on string theory.”
Carroll has explained his views in more detail here, arguing that falsifiability is an idea that needs to be retired, to be replaced by “empiricism”. “Empiricism” seems to mean “ability to account for the data”, with “the multiverse did it” an acceptable way to account for data, even if not falsifiable. He’ll be giving a talk on this at the American Astronomical Society meeting in San Diego this summer, with abstract:
A number of theories in contemporary physics and cosmology place an emphasis on features that are hard, and arguably impossible, to test. These include the cosmological multiverse as well as some approaches to quantum gravity. Worries have been raised that these models attempt to sidestep the purportedly crucial principle of falsifiability. Proponents of these theories sometimes suggest that we are seeing a new approach to science, while opponents fear that we are abandoning science altogether. I will argue that in fact these theories are straightforwardly scientific and can be evaluated in absolutely conventional ways, based on empiricism, abduction (inference to the best explanation), and Bayesian reasoning. The integrity of science remains intact.
Carroll’s argument seems to be that the conventional understanding of how science works that we teach students and use to explain the power of science has always been wrong. Falsifiability by experiment isn’t necessary; instead, the “absolutely conventional” way to do science is “empiricism, abduction (inference to the best explanation), and Bayesian reasoning”. I’d never heard of abduction as a basis of science before. If you believe Wikipedia, this goes back to Charles Sanders Peirce, whose view in later years was:
Abduction is guessing. It is “very little hampered” by rules of logic. Even a well-prepared mind’s individual guesses are more frequently wrong than right. But the success of our guesses far exceeds that of random luck and seems born of attunement to nature by instinct (some speak of intuition in such contexts).
As for “Bayesian reasoning”, I would have thought that Polchinski’s Bayesian calculation of a “94% chance” of a multiverse would have conclusively shown the absurdity of that.
The fundamental problem with trying to apply Bayesian reasoning to the multiverse is that it requires you to presume a multiverse to begin with. If there’s only actually one universe, it’s meaningless to talk probabilistically about the properties of that universe.
(sp) Peirce
Peirce is simply describing the process by which we seize on an initial hypothesis or model. That will in practice be constrained by all sorts of previous experiences with the phenomenon in question but in principle our choice can be very wild indeed, revealed in a dream or stepping off a bus or whatever it may be. The only real test of the hypothesis or model comes by way of the deductive consequences that follow from it and the inductive confirmations or falsifications that follow those. The word abduction is a clumsy translation of Aristotle’s apagoge from Prior Analytics 2.25.
What exactly would one be able to learn about the real world using a theory that cannot make predictions that are falsifiable?
Jon Awbrey,
Thanks, fixed.
Felipe Pait,
By Carroll’s notion of science, I think he can claim to be learning about the real world, but I don’t think he can answer the question “How will you know if you are wrong?” Empiricism, abduction, Bayesian reasoning, etc. will get you to a model that seems to you to describe the real world, but how will you know if you’re mistaken? It is this ability of experiment to shoot down a model that you are fond of that seems to me to be the most distinctive part of science, and I don’t see that anywhere in Carroll’s conception.
Because Darwin wasn’t there when evolution occurred, he used abduction (inference to the best explanation) to formulate his theories.
There has always been a lot of misunderstanding about the requirement of falsifiability. At root it is simply the idea that an empirical law is not a logical tautology. I don’t see any reason to dispense with that just yet. In practice the principle affords us leverage only when we have two or more competing theories for the same domain.
When you have to change the definition of “science” (decades into your research, especially), it means you are no longer a good, honest scientist. Period. (It’s much the same as changing the data to fit the theory.) Throw those criminals out of science.
I don’t think it’s fair to judge Bayesianism by Polchinski’s text, which was indeed absurd.
I think you’re right, though, to be wary of scientists invoking Bayes as a get out of jail free card for their ideas. Bayesianism provides a strict framework for updating degrees of belief in light of data via the so-called Bayesian evidence. Polchinski did not use that framework – he just pulled numbers from thin air.
It seems that many wish to claim their arguments are Bayesian or that their theories are supported by Bayes, but actual calculations seem beneath them. If they don’t make the calculations showing their theories to be more plausible than alternatives, they shouldn’t be allowed any refuge under the cover of Bayes.
A tangential note that Carlo Rovelli has just become the editor of Foundations of Physics, http://www.springer.com/physics/journal/10701, replacing Gerard ‘t Hooft. It will be interesting to see how the journal changes.
Another thing that needs to be understood is that Bayes reasoning, or anything that involves probabilities, has nothing to do with the initial abduction, which takes us from a state of unquantifiable uncertainty to the first hypothesis of a model category or reference class. It is only after these choices are made that speaking of probabilities becomes possible.
Peter,
I think you meant to quote “Polchinski’s Bayesian calculation” as a “94% chance” of a multiverse; just makes it sound all the more absurd.
Nick M.,
Thanks, fixed.
I probably shouldn’t judge a talk by its abstract, but the more I read Carroll’s the more Wikipedisch it sounds. The phrase “inference to the best explanation” was coined by Gilbert Harman in his attempt to explain abductive inference but it conveys the wrong impression if anyone takes it as a substitute for the whole course of inquiry rather than just its starting point. Peirce himself was always very clear about this.
150 years ago the dominant theory was that particles are vortices in the aether. What a pity that the scientific method forced us to replace this empirically superior understanding with quantum mechanics and relativity.
Carl:
“The fundamental problem with trying to apply Bayesian reasoning to the multiverse is that it requires you to presume a multiverse to begin with”
No. It only requires a non-zero ‘probability’ for a multiverse to begin the algorithm, and it can be as small as you like (e.g., 10^{-100}, but make sure you are using double-double precision in your computer program!). Here ‘probability’ is a number between 0 and 1 representing your subjective ‘belief’ in the multiverse (calling it a probability just because it takes values between 0 and 1 is a mistake, because it muddles subjective belief with frequentist probability). Bayesian analysis tells us we can have confidence in the existence of a multiverse if the flow of all relevant information into the Bayes algorithm has driven this ‘probability’ sufficiently close to 1 that everybody agrees there is a multiverse (0.94 is clearly nowhere near close enough for readers of this blog!). If someone still rejects the multiverse, they should state just how close to 1 they need for ‘belief’, and if the belief reaches that value at some point in the future, then accept the multiverse. On the other hand, if the belief is driven sufficiently close to 0, then the multiverse hypothesis can be rejected in the same spirit. Or they can reject the whole idea of Bayesian analysis (which, by the way, is at the root of a major part of modern technology nowadays).
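(For concreteness, here is a minimal Python sketch of the kind of iterative belief-updating being described; the tiny prior and the per-observation likelihoods are invented purely for illustration and correspond to no actual multiverse calculation. The comments also note why a prior of exactly zero can never move.)

```python
# Sketch only: Bayesian updating of a subjective 'belief' in hypothesis M.
# All numbers below are made up for illustration.

def update(prior, p_data_given_m, p_data_given_not_m):
    """One application of Bayes' rule: return P(M | data) given P(M) = prior."""
    numerator = p_data_given_m * prior
    return numerator / (numerator + p_data_given_not_m * (1.0 - prior))

belief = 1e-100  # tiny but non-zero prior; a prior of 0.0 would stay 0.0 forever
# Hypothetical pieces of evidence, each as (P(data | M), P(data | not M)).
observations = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.05)]
for p_m, p_not_m in observations:
    belief = update(belief, p_m, p_not_m)

print(belief)  # still astronomically small after three favourable observations
```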
Sm,
I would begin the algorithm with exactly zero initial probability for the multiverse, since it reflects my belief that there is no multiverse to begin with. So how do you continue the calculation from there? Is there any hard data regarding the multiverse that can move the zero prior even to 0.5, let alone 0.94? I don’t think so.
🙂
Marko
Falsificationism came to replace verificationism as it became obvious that general statements, e.g. “all electrons in the universe are identical”, are not really verifiable. Thus the burden of proof is shifted: it is the opponents/deniers who have to provide evidence. And evidence can be disputed, e.g. do muons disprove the general statement or are they something else?
The existence of a multiverse is obviously not a general statement but an existential one: its supporters are supposed to provide evidence and not just beliefs.
We can see Science devolving: originally it was ‘How the world is’; that became ‘How do we know it’ and next ‘Why do we believe it’. The Bayesian argument ultimately reduces knowledge to the best among our beliefs – which is no more than to say ‘Just now we are unable to imagine something better’.
a 1, that’s right: Bayes tells you to believe the most plausible theory you can find and provides a recipe for evaluating the plausibility of a theory. What more do you want? Do you want Bayes’ theorem to invent new hypotheses for you or prove no-go theorems that preclude viable hypotheses?
I really think you’re all barking up the wrong tree by criticising Bayes itself. I’d advise scrutinising the Bayesian multiverse arguments – do they actually calculate anything? Can you express their arguments as equations in terms of probabilities? – so far the arguments strike me as being rather specious and not calculations within the framework of BCT.
In other words, accept the use of Bayes, and ask: does Bayes actually give support to e.g. the multiverse? My feeling is that there are difficulties, to say the least. I’d like to see what e.g. Carroll has actually calculated when he says these things.
vmarko:
‘I would begin the algorithm with exactly zero initial probability for the multiverse,’
Not only an incorrect application of the algorithm, but also a pretty religious and dogmatic position you adopt, if I may say so!
Wouldn’t a more scientific approach be to take a small prior (not really necessary though) and watch the information flow in the fullness of time drive that prior to zero, thus confirming your initial belief, if in fact that happens? Also, I wonder what would have happened to a large fraction of the ATLAS/CMS Higgs analyses had they taken a zero prior for the existence of the Higgs!?
Perhaps a more thoughtful criticism would be that the multiverse proponents have not (yet?) focused on a definite method for calculating the likelihoods. Unless I am mistaken, this is the point Peter is always making.
I think that is correct. Bayes’ theorem is a deductive identity that adds no information to the situation, nor is that its job. It does not add rows or columns to the contingency matrix or make the observations that populate its cells. Those are jobs for the independent capacities of abductive and inductive reasoning.
If you have enough evidence for your hypothesis, Bayesian reasoning and ordinary statistics will give the same results.
So applying logic, this means that the only time you need to use Bayesian reasoning is when you don’t have enough evidence for your hypothesis. It seems to me that this statement also has quite a bit of empirical evidence supporting it.
The problem is not just with the tiny but non-zero subjective belief in the multiverse. Any feature of the world that occurs in only some branches of the multiverse does not drive the subjective probability of the multiverse any higher, because of the very large number of branches of the multiverse. Features of the world (say, Lorentz invariance) that are true for almost all branches of the multiverse (e.g., in string theory) do increase the Bayesian subjective belief in a multiverse; but they also make the multiverse scientifically unnecessary.
Sm,
I choose the zero prior because the multiverse idea isn’t testable experimentally, by definition. It’s like with religion: an atheist chooses a zero prior for the existence of God not because he wants to apply Bayesian analysis to God, but because Bayesian analysis isn’t applicable (no new data can ever be provided for or against the existence of God, by definition).
In contrast, things like the Higgs hypothesis, the GW hypothesis and low-energy SUSY hypothesis are all highly testable, and the new data (respectively observed, observed, not observed) can change the prior using Bayesian analysis.
The point is that new *data* can change the prior, while new *beliefs* cannot. If we were to use beliefs as data points, the existence of God would be strongly supported by Bayesian analysis. Ditto for the multiverse. So one needs to be careful to apply Bayes only to testable hypotheses.
🙂
Marko
Karl Popper knew something about falsificationism. And he knew that by its standards Darwin wasn’t a scientist and natural selection etc. wasn’t science. Somebody managed at last to cobble up some minor experiments, or maybe hypothetical experiments, so that he finally agreed to retract this rather embarrassing conclusion.
As an historical science, evolutionary biology isn’t very big on the properly falsificationist experiments. It tends to be big news when someone actually conducts a controlled experiment in evolution. In historical sciences like evolutionary biology, geology and cosmology, experiments tend to make assumptions about how laboratory experiments demonstrate physical principles at work. Of course, this evades the problem of induction: How do we know that an experiment on the behavior of perovskite in a lab really provides a valid generalization? Here too falsificationism condemns the historical sciences as unscientific. (Popper wanted to condemn all history as unscientific because he disliked Marxism, being an extreme right winger, one of the founders of the notorious Mont Pelerin Society.)
On the other hand, it is routine to find perfectly good falsificationist experiments in the works of such scientists as J.B. Rhine and Michel Gauquelin. Their tradition lives on in whole fields, like parapsychology and evolutionary psychology.
So, I’m sorry but until it’s possible to clarify some of the peculiarities in the notion of “falsification” as exemplified in its great champion, the assumption that it’s enough to cry “Unfalsifiable!” isn’t quite acceptable to everyone. It really does seem to me the shoe is on the other foot. After all, by falsificationist perspectives, how can the widespread use of models be justified? When is adjusting a parameter pursuing a failed theory?
As I understood it, abduction is probabilistic reasoning. Since science (so far as I can tell) is about learning how things really are, rather than proving something logically necessary a priori by deduction, that doesn’t seem to be a bad thing.
When I understand Bayesian reasoning (as opposed to Bayes’ theorem,) I’ll decide what I think about that.
sm, I think you must not have read Polchinski’s argument (which is in arXiv:1512.02477). He calls it “quasi-Bayesian”, but the actual argument is “I’ll start with a 1/2 prior probability, and multiply it by 1/2 for each of these three reasons to believe in a multiverse”.
If you actually wanted to use “Bayesian” techniques to figure out the odds of a multiverse, I suppose you’d start with
P(M|E) : P(¬M|E) = P(E|M) P(M) : P(E|¬M) P(¬M)
where M is “there’s a multiverse” and E is the evidence and : is a ratio representing relative odds. He takes P(M) = P(¬M) = 1/2 as his prior. I guess his three reasons to believe in a multiverse with a 1/2 probability assigned to each means he wants P(E|¬M) = 1/8. If you take P(E|M) = 1, the odds for a multiverse are 8:1 ≈ 89%. Other values of P(E|M) give lower odds.
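(Spelling that arithmetic out, a minimal Python sketch; every input below is one of the assumed values from the paragraph above, not anything actually computed from data:)

```python
# Odds-form Bayes: posterior odds = likelihood ratio * prior odds.
p_m, p_not_m = 0.5, 0.5          # the assumed 50/50 prior
p_e_given_m = 1.0                # assume the evidence is certain given a multiverse
p_e_given_not_m = 1.0 / 8.0      # three "reasons", each read as a factor of 1/2

posterior_odds = (p_e_given_m * p_m) / (p_e_given_not_m * p_not_m)  # = 8.0
posterior_prob = posterior_odds / (1.0 + posterior_odds)            # about 0.89
print(posterior_odds, posterior_prob)
```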
Setting aside that Polchinski did the math wrong and showed no sign of understanding what Bayesianism even is, the main problem is that P(E|¬M) is made up out of nowhere. It’s similar to what happens with the Drake equation, where the argument boils down to “it’s clearly true that a/z = (a/b)(b/c)…(x/y)(y/z), and we know some of those factors accurately, and others approximately, and the rest not at all so I’ll make up some values, and plugging in all of that we get that a/z is high (or low depending on my preconceptions), and you can trust it because it came out of this self-evidently correct equation, and I totally didn’t tweak my guesses until I got the answer I wanted”.
There’s obviously a well-defined sense in which the Bayes equation is correct. When people disagree with what you get out of it, they’re disagreeing with what you put into it.
vmarko, a prior of 0 in Bayesian terms means you’re impervious to any evidence whatsoever, even the end of the world happening exactly as prophesied in Revelation. sm is right to make fun of you for that.
re: Sean Carroll, is there a reason to prefer multiverse to intelligent design? can Bayesian reasoning provide a probability for ID?
steven johnson,
“the assumption that it’s enough to cry “Unfalsifiable!” isn’t quite acceptable to everyone”
This is a straw man argument. No one, Popper included, claims that falsifiability is all there is to this issue. I’ve written extensively about this, including a chapter in my book, and endless discussions here.
About bringing evolution into this, do you really want to go there? Carroll’s argument that the string theory multiverse is “straightforwardly scientific and can be evaluated in absolutely conventional ways” is in danger of being interpreted as a claim that untestable multiverse theories are of the same nature as the testable theory of evolution (some other string theorists have explicitly made this argument). The ID people love this…
Economists have grappled some with the applicability of formal probabilistic decision theory (especially Bayesian expected utility maximization) to different types of questions. A distinction often cited by anti-mathematization advocates is one Frank Knight made between “risk” (where probabilities can be reliably assigned to relevant events) and “uncertainty” (where they cannot). The usual Bayesian riposte is that a probability distribution of probabilities still yields a probability distribution.
More fundamentally, along the lines of what I think Jon Awbrey is getting at, Ken Binmore pointed out that even the leading developer of subjective expected utility theory, Leonard Savage, in 1951 made a distinction between “small world” questions where it made sense to assign probabilities to events and “large world” questions where it did not. The multiverse question seems to me to be the largest large-world question possible.
http://else.econ.ucl.ac.uk/papers/uploaded/266.pdf
I think some people don’t realize just how destructive the multiverse enterprise is. In my case, it literally turns me away from physics. I feel depressed that the greatest joy in my life has become a complete joke.
I have no horse in this race (cat in this box?) as far as multiverses and polycosmoi go — I will limit myself to clearing up popular confusions about Peirce’s concept of abductive reasoning. Analytic philosophy swayed many people into thinking that science could be reduced to purely deductive reasoning, eliminating induction and ignoring abduction, but Peirce was a practicing scientist who worked outside that warp. In his model of the inquiry process abduction is at root logically prior to any discussion of probabilities, however true it may be that all three modes of inference work in tandem to advance any moderately complex investigation. There’s more information on the history and function of abductive inference in the following article:
• Functional Logic : Inquiry and Analogy
See especially:
• Section 1.2. Types of Reasoning in C.S. Peirce
• Section 1.4. Aristotle’s “Apagogy” : Abductive Reasoning as Problem Reduction
Ben,
I am fully aware of what a zero prior means; I picked it deliberately. But I guess the point I was trying to make didn’t really get across. I assign a zero prior to the existence of a multiverse, of unicorns, of the Loch Ness monster, and of UFO implants in people’s necks. The reason why I do that is because none of these are any part of testable and falsifiable science, so whether one believes in them is a matter of personal choice, and only priors of 0 and 1 make sense. That is, one either believes in this (uncritically) or one does not (also uncritically). Either way, it is not a subject of Bayesian analysis.
Some people are trying to spin the multiverse idea as something scientific, or at least motivated by well-established science. This is misleading – what string theorists study is a conjecture on top of a generalization on top of a conjecture on top of an extrapolation on top of a generalization of existing established well-tested theories of physics. And when this whole edifice of theoretical construction falls short of their initial expectations (i.e. the landscape problem), they invent yet another layer of extrapolation which provides a catch-all panacea-like answer to any remaining unanswered question: “multiverse did it”. And when I refuse to assign a nonzero prior to that, people think that I don’t know what a zero prior means…
🙂
Marko
Justin,
It’s not sensible for the multiverse research to turn you away from physics. The vast majority of physics research out there has nothing to do with the multiverse.
All,
I’m deleting further attempts to debate vmarko’s point, which are going nowhere.
You are doing a disservice by convincing gullible people that confirmatory experiments are the only way to make progress in science.
Butters,
Another straw man argument that I’ve never made.
Peirce is spinning in his grave regarding Carroll’s conflation of Peirce’s notion of abduction and Bayesian inference. Peirce spent much of his life fighting the “inverse probabilists”. His collected writings still represent the most compelling, systematic, and sustained arguments against Bayesianism. His critiques (made over decades) also happen to be among the earliest criticisms of inverse probability (along with his contemporary Venn, but Peirce’s views were even more systematic).
How is abduction any different from belief bias or confirmation bias?
I think the issue here is what constitutes “proof”. The idea behind abduction, or “inference to the best explanation”, is problematic only if we think of it in these terms rather than likelihoods. Given two or more inconclusive theories offered as explanations of some body of evidence, testable or not, we cannot claim to have proven any of them via abduction or Bayes reasoning. But we can use such reasoning to claim that one or more of them are more likely to be true given what we know of the world so far. And the more evidence we gather that favors one over the others, the more reason we have to be confident in it. If the body of evidence becomes so large that none but a handful of cranks would favor another alternative, then perhaps we could start calling it fact, but not before. In any event we shouldn’t be calling it science until it’s at least testable in principle.
The key is to treat abduction as a measure of confidence rather than proof, and Bayes reasoning formalizes this approach well. Problems only arise when we try to apply it numerically to situations where the background and probability spaces involved (particularly the probability of the evidence obtaining if the hypothesis isn’t true) are ambiguous and difficult or impossible to quantify. But even if we can’t meaningfully apply numbers, we might still be able to frame a general argument in Bayesian terms. For instance, I don’t see how one could calculate the probability that NASA faked the moon landings on a sound stage at Norton Air Force Base in San Bernardino, but I doubt anyone would dispute that the likelihood of the evidence given the background alone is small enough to merit rejecting that theory.
In the case of the multiverse, the issue isn’t Bayesian methods per se. It’s the fact that in the absence of a rigorously understood mechanism for actually producing one it’s all but impossible to pin down the sample space involved, or even to define a meaningful probabilistic measure when we can only observe our universe. The same is true with the string landscape. With Bayesian analysis everything depends on how one defines the reference classes and how they’re discriminated from the background. The biggest problem with Carroll’s “empiricist” approach is that within a Bayesian framework, he wants to treat theoretical “elegance” or “beauty” as evidence that infers to the best explanation. Stunningly gorgeous (in his opinion) would pass for proven fact. On this logic, “my wife is hotter than yours!” would pass for “science”. Really…? Even if he were married to Miss July, and I to a roller derby queen who lost a head-butting contest with an East African rhino, how exactly does one formally demonstrate such a claim? History is strewn with the wreckage of heart-achingly beautiful theories that ran afoul of the real world, and as Edmund Burke once said, those who are ignorant of the past are doomed to repeat it.
Let me just say again that abduction is not “inference to the best explanation” (ITBE). That gloss derives from a later attempt to rationalize Peirce’s idea and it has led to a whole literature of misconception. Abduction is more like “inference to any explanation” (ITAE) — or maybe following Kant’s phrase, “conceiving a concept that reduces the manifold to a unity”. The most difficult part of its labor is delivering a term, very often new or unnoticed, that can serve as a middle term in grasping the structure of an object domain.
Justin says (February 28, 2016 at 9:47 pm):
“I think some people don’t realize just how destructive the multiverse enterprise is. In my case, it literally turns me away from physics. I feel depressed that the greatest joy in my life has become a complete joke.”
How odd.
Aside from the tiny part of physics phase space occupied by the completely speculative parts of astrophysics and HEP, the rest of physics has never been healthier.
No lack of interesting questions to work on in condensed matter, optics, soft matter, biophysics, experimental astrophysics, etc., all of which still adhere to the only way to do science – the scientific method.
I’d leave these “angels on a pin” arguments to others and find a more productive and rewarding way to spend my life.
Jon Awbrey,
Perhaps “inference to the best explanation” wasn’t what Peirce had in mind, but for better or worse that seems to be how the term “abduction” is most commonly used today. The Wikipedia page on Abductive Reasoning even defines it as such. In any event, I’m not particularly attached to that label so if it isn’t the best one feel free to toss it. 🙂 My only intent was to draw the distinction between confidence and proof in how we adjudicate between theories and what passes for science.
Justin: “I think some people don’t realize just how destructive the multiverse enterprise is. In my case, it literally turns me away from physics. I feel depressed that the greatest joy in my life has become a complete joke.”
For me, a 73-year-old professional scientist who still remembers my fascination reading about physics and physicists in Scientific American 60 years ago, it is more of an “Emperor has no clothes” moment. I now realize that theoretical physicists are no smarter, more honest or more perceptive than many of the scientists in my own much more pedestrian field.
Sorry if this has been discussed already. Is Carroll still advocating Weinberg’s “prediction” of the cosmological constant as an argument for the multiverse? I had forgotten about this argument, but reading it again now, it’s pretty clear to me that I have no idea what it has to do with the multiverse. I get the environmental argument, but that’s it. Specifically:
“If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.”
The argument that a small cosmological constant necessarily means a multiverse beats me…
Bernard,
The reason can be found in this talk given by Nima Arkani-Hamed, entitled “Why is There a Macroscopic Universe?”
https://www.youtube.com/watch?v=F2Fxt_yCrcc
Peter,
In that same talk, Arkani-Hamed said that people who are quick to dismiss the multiverse as a possible solution to the CC problem have never actually worked on the problem. What do you think about this?