Normally I do my best to ignore claims to have figured out the vacuum energy problem. There’s an endless number of them, mostly looking pretty dubious, and the world is full of people much more expert on the subject than me, so it seems that my time would be better spent elsewhere. I did however just notice a new preprint making such claims, Scrutinizing the Cosmological Constant Problem and a possible resolution, by Denis Bernard and André LeClair, and am curious about it. Unlike most of such things, the authors seem to know what they are talking about, and the whole thing looks not implausible to my non-expert eye. Can an expert tell me what is wrong with this (or, alternatively, tell me and my blog readers that it’s a new good idea, or an old one that is not well known)?
Comments had better be about the Bernard/LeClair paper and well-informed, or they will be ruthlessly deleted.
Joe Polchinski has an interesting discussion of various theories of the cosmological constant at the beginning of his talk. In particular, page 4 seems relevant in the current context.
I haven’t read this paper. I seem to recall however Claudio Dappiaggi making the point that if you do your qft in curved space carefully, the cc appears as a renormalization constant and there’s really no reason for it to be at the Planck scale. Forgot details though. He’s got several articles on the arxiv, but I don’t know which one might be useful. This might give you a flavor.
As I understand their argument (and I may be wrong), the vacuum energy is supposed to be proportional to k_c^2 H(t)^2 at all times. If the momentum cutoff k_c is taken close to the Planck scale, this means that \Lambda is proportional to H(t)^2, so one obtains the correct order-of-magnitude relationship \Lambda \propto H_0^2 today.
If that summary is correct then – without reference to the field theory arguments – one can rule it out empirically. As this paper points out in a footnote (it was probably well known before this), having \Lambda \propto H(t)^2 at all times amounts to a renormalization of M_P in the Friedmann equation; this is inconsistent with nucleosynthesis.
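For orientation, here are the numbers behind the claimed order-of-magnitude agreement today (a rough back-of-envelope estimate, using the standard values k_c \sim M_P \sim 10^{19} GeV and H_0 \sim 10^{-42} GeV, not anything taken from the paper itself):
\rho_{vac} \sim k_c^2 H_0^2 \sim (10^{19} GeV)^2 \times (10^{-42} GeV)^2 \sim 10^{-46} GeV^4,
which is indeed in the right ballpark of the observed dark energy density, roughly 10^{-47} GeV^4.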
Sesh,
I think you’re right about \rho_{vac} ~ H(t)^2 being assumed for all time t, although strictly speaking there’s also a \ddot{a}(t)-dependence; at least, that’s also how I understand it from equations (24) and (35). Looks like their proposition is ruled out then…
NB: The footnote Sesh refers to is on page 3 of the preprint to which he provides a link.
I don’t think there’s really anything new in this paper. They claim to propose the “new” idea that Minkowski spacetime should be a stable solution to the semi-classical Einstein equations. In the discussion section:
“…we proposed the principle that empty Minkowski space should be gravitationally stable in order to fix the zero point energy which is otherwise arbitrary.”
But this idea was proposed long ago by Robert Wald; it might even be older than that.
See page 7 of
http://www.springerlink.com/content/v8k661364hv52444/
It’s also in the lecture series he gave, “Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics”.
The “cosmological constant problem” is really not one but two problems. The first one is, “why is the cosmological constant 120 orders of magnitude smaller than we expect from QM?” The second is, “since the cosmological constant is constant and the density of the universe varies by a factor of 10^120 over the course of history, why is the cosmological constant the same order of magnitude as the density of matter today; what’s so special about today?” (The second question is also known as “the coincidence problem”.)
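To spell out where that famous number comes from (standard back-of-envelope figures, not anything specific to the paper under discussion): a naive QFT estimate with a Planck-scale cutoff gives
\rho_{vac} \sim M_P^4 \sim (10^{19} GeV)^4 \sim 10^{76} GeV^4,
while the observed value is \rho_{vac} \sim 10^{-47} GeV^4, a mismatch of roughly 120 orders of magnitude.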
It is tempting (especially for particle theorists) to propose, like here, that the constant is not a constant but a variable that scales as some function of time (or H(t), a(t), etc.). A quick arxiv search will yield lots of hits. A choice of Lambda ~ H(t)^2 is particularly convenient, because then Lambda is, in essence, defined to be forever proportional to the density of matter, simultaneously solving questions 1 and 2. Numerical coincidences in the article (e.g. Eq. (12)) are manifestations of this definition.
Unfortunately, as Sesh points out, these solutions are constrained by Big Bang Nucleosynthesis as well as by observations of supernovae and CMB. While some degree of variation is permitted, the naive “no-coincidence-problem” H(t)^2 has been ruled out observationally. (E.g. http://arxiv.org/pdf/astro-ph/0702015v3.pdf) Among other reasons, the fraction of dark energy, currently ~0.7, is constrained by BBN to be <0.05 at T~1 MeV.
Their article says that the H(t)^2 behavior only applies at "later" times, but, as far as I understand, their idea of later times goes back almost all the way to the Planck time.
My personal opinion is that, in order to resolve the coincidence problem, we have no choice but to resort to the anthropic principle. According to an old argument by Weinberg, there simply wouldn't be any observers in the universe if the cosmological constant were even three orders of magnitude larger than it is now.
P.S. I don’t find the statement in the footnote of Sarkar’s article convincing. The reference cited states that BBN constrains Newton’s constant to within 10% of its current value at the time of BBN (~100 s). If Lambda~H^2, that puts Lambda at ~1e-16 GeV^4 at that time. I don’t see how this value would result in a greater than 10% shift in G_N or M_P at the MeV scale, and it is not explained in the article.
On first impression, this paper does not seem very original; by assuming a time-dependent vacuum energy it is observationally ruled out, and it appears to miss the important aspects of the cosmological constant problem. For instance, the Higgs potential adds a ~(100 GeV)^4 contribution, which would have existed in the early universe; why should that just disappear? I don’t think they address any of the real issues, and they seem to sweep all insights from effective field theory aside, which is to ignore most of the relevant discussion.
A cosmological constant that decreases with time, or increases with the Hubble constant, is NOT ruled out observationally. I have discussed this issue with all astrophysicists I could get hold of, and they all agreed on the point. Observationally, nothing is known about the time-dependence. (Are there any new developments? Can anybody comment?)
There might be theoretical arguments from nucleosynthesis that suggest problems, but these arguments are NOT observational; they just show that one idea contradicts another. Now, we know that QFT is wrong at high energy: it predicts a huge vacuum energy, whereas measurements show that it is small. It makes no sense to use a theory that makes wrong statements as an argument against solid observational data, even at the time of nucleosynthesis.
I agree with their statement in the abstract “In classical and quantum mechanics without gravity, there is no definition of the zero point of energy.” However, I think one should take this more seriously.
First one has to admit that the usual argument that “QFT predicts a large cosmological constant” is probably not very convincing, as we can only really motivate a bold Planck-scale cutoff from effective field theory ideas once we have understood well what is going on “beyond the Planck scale”. From a different perspective one could just say that the fact that the cosmological constant is computed to be ridiculously large in QFT does not imply that QFT is wrong, but that the computation does not make sense / is based on false assumptions.
Instead it seems convincing to take the above-mentioned point of view that there is no way to set a zero point on the energy scale if one considers the coupling of matter to gravity. In the lab, we compare the energy in different states, e.g. in a two-particle state and the vacuum; thus we measure energy differences. The Einstein equations, though, measure the absolute energy (density) of whatever fills the universe. Within QFT we cannot define the corresponding zero point unambiguously, so we have to admit that any definition / computation is correct up to that ambiguity, which can be argued to be parametrised by four parameters (cf. Robert Wald’s paper mentioned by Alex). One of those is Newton’s constant, and another the cosmological constant.
Thus for me the most convincing attitude towards the cosmological constant is that it is, to the best of our knowledge, just a constant of nature we have to measure, much like Newton’s constant. Of course we might hope to predict it within a quantum gravity theory at some point, but not within QFT; just as QFT does not predict Newton’s constant either.
To the second Nameless: the first Nameless has answered your question already. (Unless you are both the same Nameless, in which case the source of confusion is unclear to me.)
Put simply, if \Lambda~H^2 at all times, then dark energy constitutes a large and constant fraction (~0.7) of the energy budget at all times, including at nucleosynthesis, thus drastically changing the expansion history. Since \Lambda~H^2, two terms in the Friedmann equation can be rearranged to give something that is essentially the same as the Friedmann equation with no cosmological constant term but with a rescaled Newton’s constant or M_P. So the observational constraint on the allowed shift in Newton’s constant at the time of BBN is directly translatable into a constraint on the fraction of the energy density that was in dark energy at that time.
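Schematically (writing the vacuum term as \rho_{vac} = 3\alpha M_P^2 H^2 for some constant \alpha, which is my notation rather than the paper’s, and using the reduced Planck mass so that the Friedmann equation reads 3 M_P^2 H^2 = \rho_{tot}):
3 M_P^2 H^2 = \rho_m + \rho_r + 3\alpha M_P^2 H^2, i.e. 3(1-\alpha) M_P^2 H^2 = \rho_m + \rho_r,
which is the ordinary Friedmann equation with M_P^2 replaced by (1-\alpha) M_P^2, equivalently G replaced by G/(1-\alpha). With \alpha ~ 0.7 today, that is more than a factor-of-3 shift in the effective Newton’s constant at BBN, far outside the ~10% quoted above.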
@ Sesh: we’re all the same Nameless (not to be confused with Anonymous). Your argument is valid, the source of confusion is the term “renormalization”, which normally means something entirely different.
@kurt, you are simply wrong. While of course the dark energy can be an extremely slowly varying function of time (there are always error bars), the error bars are not big enough to allow Lambda~H(t)^2. This would ruin, amongst other things, large-scale structure formation, it would mess up the CMB data, and so on.
@Bob/Kurt, if I’m not mistaken, from the early-universe perspective, Lambda~H^2 is just outside the error bars. Constant Lambda of 10^-47 GeV^4 is effectively zero in the early universe (it is vanishingly small compared to matter and energy density), and even a time-dependent Lambda that falls off slightly slower than H^2 would be so much smaller than matter density at the time of CMB decoupling as to be unobservable.
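To put a number on “vanishingly small” (rough standard-cosmology figures, not taken from any of the papers under discussion): the matter density today is ~10^{-47} GeV^4 and scales as (1+z)^3, so at decoupling (z ~ 1100) it is ~10^9 times larger, around 10^{-38} GeV^4, while a constant Lambda stays at ~10^{-47} GeV^4, some eight or nine orders of magnitude below. A vacuum term scaling exactly as H^2, by contrast, would remain a fixed O(1) fraction of the total at all times.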
Astronomical observations are a different and complicated story. There are numerous different models (e.g. is the cosmological constant simply a number that changes with time, or is it a new energy field? If so, what is its equation of state, and does it interact with, or decay into, dark matter?), and the observational points are fairly noisy. I’ve seen claims that most of the parameter space is excluded, and I’ve seen claims that a model where Lambda~H and the cosmological constant field decays into dark matter is supposed to fit observational data slightly better than the standard Lambda-CDM model.
Nameless:
The “cosmological constant problem” is really not one but two problems. The first one is, “why is the cosmological constant 120 orders of magnitude smaller than we expect from QM?” The second is, “since the cosmological constant is constant and the density of the universe varies by a factor of 10^120 over the course of history, why is the cosmological constant the same order of magnitude as the density of matter today; what’s so special about today?” (The second question is also known as “the coincidence problem”.)
There is a critical review of both of these problems in 1002.3966. In short, the authors argue that the coincidence problem is not actually a problem, because “today” covers a period of ten billion years (or so), so there is no unnatural coincidence happening at all. As for the QFT prediction for Lambda which is off by 120 orders of magnitude, they argue that there is no mystery in that either, since we do not know how to perform renormalization in curved-space QFT. So the 120 orders of magnitude is a problem of applying flat-space QFT in a regime where it is obviously not applicable, and then wondering why the result is wrong.
I think it is an interesting paper to read.
Rovelli and Bianchi seem to be arguing that, since the physical effect of the cosmological constant cannot be observed at small scales on which spacetime is approximately flat, QFT calculations of the c.c. (dominated by much shorter length scales) should be viewed with extreme skepticism.
@Bob, your answer mixes two things. It might be that experimental measurements do not allow a slowly varying Lambda. But structure formation of galaxies is NOT an experimental measurement, but a theoretical model.
My point is: is a slowly varying Lambda against measurements? You might be right that it contradicts CMB data, though I know people claiming that it doesn’t (at least for Lambda proportional to H).
Nobody cares whether a non-constant Lambda contradicts some model – that model might be wrong. The issue is whether it contradicts some actual measurements of Lambda at different distances/times. I think this is the main issue to be settled. Writing “and so on” is not an argument for the constancy of Lambda.
Hi guys,
you can see a nice discussion of the Rovelli/Bianchi paper on CosmoCoffee.
http://cosmocoffee.info/viewtopic.php?t=1531
@kurt, your response is very strange when you say “nobody cares whether some non-constant Lambda contradicts some model”, when that is the whole point of this blog entry! LeClair and Bernard laid out a MODEL, namely a universe with Lambda~H(t)^2, plus ordinary GR, ordinary matter and radiation. So what does their model predict? Well, it dramatically changes the CMB data, large-scale structure, weak lensing, the Lyman-alpha forest, etc. It is very strange that you would keep defending their claim despite all the evidence pointing against it (and this is setting aside all the theoretical problems of Lambda~H(t)^2).
@Nameless, I’m not sure what the reason for your comment was. You say that if dark energy is less than H(t)^2 then it is observationally allowed, since it would be orders of magnitude smaller than the rest of the energy of the universe at the time of the CMB. OK…but that is NOT the LeClair/Bernard model that is the topic of the blog entry. They claim, as Sesh quite rightly points out, that dark energy is ~H(t)^2 and so remains ~0.7 of the energy budget of the universe at all times. This is not just outside the error bars as you strongly claim, but is clearly ruled out by a range of observations.
Sesh:
On p.9 they make this statement: “We also argue that when H=a(dot)/a is large, the first Friedman equation sets the scale H~k_c, which is the right order of magnitude if k_c is the Planck scale.”
I’m above my pay grade here, but I’m wondering if that resolves the dilemma you point out above.
As Bee and others have already pointed out, any attempt to solve the cosmological constant problem begs the question “Is there an actual problem to be solved?”. I think that failure to answer the latter question positively dooms any remaining arguments to inconsequence. To the point: the “size” of the cosmological constant can only be sensibly assessed if one has at least two physically meaningful quantities to compare. However, the comparison is usually only made between the measured value (physically meaningful) and a bare, cutoff-dependent, unobservable parameter (not physically meaningful). In short, I think the whole enterprise fails due to the inability to meaningfully state what the “problem” is. This is discussed in more detail in the previously linked CosmoCoffee thread.
Also, the idea of fixing the cosmological constant (or any other renormalized parameter) in a way coherent across different spacetime backgrounds is definitely not new and is used in an essential way in the modern understanding of renormalization in curved spacetimes. In particular, the application of this idea to the cosmological constant was discussed in this Gravity Research Foundation essay by two major contributors to that field:
Quantum Field Theory Is Not Merely Quantum Mechanics Applied to Low Energy Effective Degrees of Freedom
Stefan Hollands, Robert M. Wald
http://arxiv.org/abs/gr-qc/0405082
The upshot is that, according to the authors, the result (which does not depend on any cutoffs) is actually much much smaller than the observed value. In other words, even in that setting, the cosmological constant needs to be introduced as an external parameter, with a value that could not have been deduced other than from experimental input.
Thanks to all for the interesting comments about the paper. Please, no more comments that aren’t explicitly about that paper.
These comments have been informative, thank you. But I feel I should clarify at least one point. The conversation got sidetracked by the idea that we proposed the vacuum energy was proportional to H^2 and that there is a problem with this varying in time. Perhaps this was gleaned from the abstract, but it is not what we proposed at all, and it is nowhere written in the paper. Rather, the result we do propose is that the vacuum energy is proportional to this quantity “A”, which is a linear combination of H^2 and \ddot{a}/a. A major point of our paper is that consistency requires that this is indeed independent of time, which turns out to be the case in a matter-dominated era. We are not at all well informed on BBN etc., but the remarks above about the vacuum energy being proportional to H^2 and that being ruled out are not very relevant to what is in our paper. We certainly cannot claim a solution to the CCP; there could very well be some problems with our proposal that are not obvious now, at least to us, but I don’t think this is one of them. Thank you for the interest in any case.
LeClair, the reason people are not persuaded by these arguments is twofold:
1). In your paper you have a contribution to the vacuum energy which goes as k_c^4, with k_c~M_P, which you simply dispense with. This really sweeps the whole fine-tuning problem under the rug, because the CCP is the question of why that term cancels.
2). The next term you focus on is k_c^2 R. And you claim that this gives the right order of magnitude for R~H^2, with H evaluated today. This really sweeps the whole coincidence problem under the rug, because this just assumes a coincidence between the dark energy and the matter density today.
The reason everyone was thinking Lambda~H(t)^2 is that it seems you can play the same game at ANY moment in time. Why is today special? Why not redo your arguments at the time of the CMB? Then you will get the wrong answer, because H changes in time.
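Rough numbers, just to make point 2) concrete (standard cosmology figures, not anything taken from the paper): today H_0 ~ 10^{-42} GeV; at recombination H ~ H_0 \sqrt{\Omega_m (1+z)^3} ~ 2 x 10^4 H_0; at BBN H ~ T^2/M_P ~ 10^{-25} GeV. So the same estimate k_c^2 H^2, redone at those epochs, comes out larger than today’s dark energy density by roughly 9 and 34 orders of magnitude respectively.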
OK, then it looks like everyone, including myself, totally misunderstood the article. (The statement that Lambda~H^2 is a logical, but apparently wrong, interpretation of the text on page 8.)
According to your definition and the Friedmann equations, A = 8 \pi G (\Lambda - \rho_{radiation}). For any function of A that vanishes in Minkowski spacetime, as long as there’s no radiation and you discard higher-order terms in A in the Taylor expansion of that function, its value will be constant in time. It would be more interesting if one could calculate \rho_{vac} ~ A exactly, with vanishing higher-order terms.
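For what it’s worth, here is one generic combination with that flavour, purely as an illustration (the particular combination is my choice; I don’t know whether it coincides with the paper’s actual A): taking A \equiv H^2 + 2\ddot{a}/a, the two Friedmann equations for pressureless matter, radiation and a vacuum component with p_{vac} = -\rho_{vac} give
A = 8\pi G (\rho_{vac} - \rho_r/3),
so pressureless matter drops out entirely and, once radiation is negligible, A is set by the vacuum term alone.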