There’s an article in the Chronicle of Higher Education about the Quantum Diaries Webloggers. It points out that since this project is organized by the high energy physics labs, it “presents a sanitized version of life in high-energy physics”, with one of the participants quoted as saying “None of us wants to be responsible for saying anything negative.” Well at least there is one weblog dealing with high-energy physics where things are not sanitized….
From Gordon Watts, one of the Quantum Diaries webloggers, I learned about the Tevatron Connection Program, which brings together theorists and people from the CDF and D0 experiments at the Tevatron. It seems to me that new results on the top mass were first made public this past weekend at this conference (see here and here). The latest combined CDF/D0 result is (“pending final CDF/D0 review”):
top quark mass = 174.3 ± 3.4 GeV
compared to the previous result, using Run I data, of
top quark mass = 178.0 ± 4.3 GeV
In the standard model, the new top quark mass implies a value for the Higgs mass of 94 +54/-35 GeV. Note that much of this range is excluded by the LEP result that the Higgs mass must be above 114 GeV. In the minimal supersymmetric standard model with this top quark mass, getting the Higgs mass above 114 GeV requires making the superpartner of the top quite heavy, introducing a certain amount of fine-tuning into the theory. For more about this, see a posting by Jacques Distler.
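Schematically, the reason the preferred Higgs mass tracks the top mass so closely is that the precision electroweak observables depend on m_t quadratically but on m_H only logarithmically. For instance, the leading one-loop contributions to the \rho parameter go roughly like (constants approximate, shown only to indicate the structure)

\Delta\rho \;\approx\; \frac{3 G_F m_t^2}{8\sqrt{2}\,\pi^2} \;-\; \frac{3 G_F m_W^2}{8\sqrt{2}\,\pi^2}\,\tan^2\theta_W\,\ln\frac{m_H^2}{m_W^2},

so a downward shift of a few GeV in the measured top mass has to be compensated in the fit by a considerably lower preferred Higgs mass.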
Why is the matrix so large in the first place, here as opposed to elsewhere
Because you can’t do local updates. For the gluon action, you’re locally updating the gauge field, so at any point you only need to know the nearest neighbours. Whereas with the quarks, you’ve “done” the path integral, so the result depends on the gluon field at every point.
Put more simply, you’re not updating the “quark field at point x”; you’re computing the entire effect of the quark field at once.
Also, it seems to me if you get good precision in the higher mass calculation, the scaling would naively appear to be linear in the eigenvalues.
The compute time scales as the inverse. And the physical chiral limit is determined by chiral perturbation theory, which shows that the extrapolation is not linear.
Which, again, computing power seems to have vastly surpassed since I first heard about lattice QCD (Moore’s law et al.). So obviously it cannot be linear; what exactly is the big O()?
For a fixed volume, compute time scales roughly as the inverse 6th power of the lattice spacing, times the inverse of the lightest pion mass. Four powers of the lattice spacing are just the number of points on the grid. One is critical slowing down of the gluon update algorithm, and the remaining factor of 1/(mass * spacing) is the slowing of the quark matrix inversion.
In “real life” it’s even worse than this. But this sums it up pretty well.
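To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python (purely illustrative: it just encodes the a^{-6} * 1/m_pi rule of thumb above, with arbitrary reference values, and ignores all the “real life” complications):

def relative_cost(a, m_pi, a_ref=0.1, m_ref=0.5):
    # Crude rule of thumb: at fixed physical volume, cost ~ (1/a)^6 * (1/m_pi).
    # a, a_ref are lattice spacings (fm); m_pi, m_ref are pion masses (GeV).
    return (a_ref / a) ** 6 * (m_ref / m_pi)

# Halving the lattice spacing and halving the pion mass relative to the reference:
print(relative_cost(a=0.05, m_pi=0.25))  # about 128 times more expensive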
“So you end up having to compute the determinant of a very large matrix.”
Yea I never quite understood this. Why is the matrix so large in the first place, here as opposed to elsewhere (where other lattice QCD calculations seem to be so reliable)? Is it merely because the lattice spacing needed is so small that one ends up with such huge matrices? Also, it seems to me if you get good precision in the higher mass calculation, the scaling would naively appear to be linear in the eigenvalues. Which, again, computing power seems to have vastly surpassed since I first heard about lattice QCD (Moore’s law et al.). So obviously it cannot be linear; what exactly is the big O()?
What exactly is the nature of the problem in getting the up/down mass light enough?
Okay, in lattice QCD you write the fermion action as
S = \bar{\psi} M \psi
where M is some huge (but finite and totally well defined) matrix that depends on the gauge fields. Now you can’t really do Grassmann variables on the computer, so instead you perform the path integral over the fermions exactly, which gives you
det(M)
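(In formulas, this step is just the Gaussian Grassmann integral \int d\bar{\psi}\, d\psi\; e^{-\bar{\psi} M \psi} = \det M; a bosonic Gaussian integral would instead give an inverse power of the determinant.)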
So you end up having to compute the determinant of a very large matrix. The major cost of doing this is computing the inverse of M. The cost of the algorithm you use to compute the inverse (conjugate gradient) goes like the inverse of the smallest eigenvalue. And the smallest eigenvalue of the matrix M is the quark mass. So as you take the quark mass smaller and smaller, the cost of your simulation goes through the roof.
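A toy illustration of that last point (a generic conjugate gradient solver applied to a random symmetric test matrix, not a real lattice Dirac operator; the shift m plays the role of the quark mass, and the point is simply that the iteration count blows up as m is taken to zero):

import numpy as np

def cg(A, b, tol=1e-8, max_iter=200000):
    # Standard conjugate gradient for A x = b, with A symmetric positive definite.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            return x, it
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, max_iter

rng = np.random.default_rng(1)
n = 400
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = Q @ np.diag(np.linspace(0.0, 2.0, n)) @ Q.T   # a fixed "massless" symmetric part
b = rng.standard_normal(n)

for m in [0.5, 0.05, 0.005]:
    M = D + m * np.eye(n)              # smallest eigenvalue of M is the "quark mass" m
    _, iters = cg(M.T @ M, M.T @ b)    # normal equations M^T M x = M^T b, what one typically inverts
    print(f"m = {m}: {iters} CG iterations")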
If one is calculating the meson and/or baryon spectrum with heavier quarks (i.e. charm, bottom, etc.), is it reasonable to treat them non-relativistically?
For the bottom quarks the answer is yes. A non-relativistic effective theory is what we use. For the charm it’s somewhat less clear. The charm is “heavy” but it’s not quite “heavy enough” to let you trust non-relativistic expansions in the same way you trust them for the b quark.
It still worries me that the calculation is about mass splittings instead of absolute masses.
That is entirely due to the fact that the chiral perturbation theory for the absolute masses still needs doing. The simulations don’t care one way or the other. The omega mass was computed, see the paper by Davies and Bernard.
Indeed, it is already a bit regrettable that we need recourse to chiral extrapolation there. In some sense, we are not testing QCD anymore but QCD plus the effective theory we are expected to fit to.
I disagree. Chiral perturbation theory, just like heavy quark effective theory, is a consequence of QCD. Using fits based on chiral perturbation theory is a test of full QCD.
Or is the neutral pion one of these problematic low mass objects?
That I don’t know actually. One rarely makes a distinction between the charged and neutral pions.
The \eta’ is hard though, I know that.
You tune the 4 bare quark masses (up and down are degenerate, and you ignore the top) and the bare lattice spacing to reproduce 5 experimental numbers
Hmm, I have taken a look at the paper; the small errors in 5 experimental numbers (plus the four “exact” ones from tuning) are impressive, really more impressive than quenched approximations. It still worries me that the calculation is about mass splittings instead of absolute masses.
The light quark masses are the key problem in the modern simulations.
Indeed, it is already a bit regrettable that we need recourse to chiral extrapolation there. In some sense, we are not testing QCD anymore but QCD plus the effective theory we are expected to fit to.
Actually calculating meson masses is easy.
It surprises me that the width of QCD-stable objects also seems to be easy to calculate. In particular for the pseudoscalar, this means that the chiral anomaly is handled correctly, does it? Or is the neutral pion one of these problematic low mass objects?
Matt,
What exactly is the nature of the problem in getting the up/down mass light enough?
If one is calculating the meson and/or baryon spectrum with heavier quarks (i.e. charm, bottom, etc.), is it reasonable to treat them non-relativistically? One would guess that mesons and/or baryons with light quarks would have to be treated relativistically.
The results of quenching QCD, I don’t know if they are a cause for optimism or the contrary. It seems a bit as if anything goes.
Modern large scale simulations are unquenched. The paper I mentioned is an unquenched calculation.
Hmm but, besides a fundamental mass, how many free parameters do you need
You tune the 4 bare quark masses (up and down are degenerate, and you ignore the top) and the bare lattice spacing to reproduce 5 experimental numbers. That’s it, there are no “free parameters” beyond that. This is not a model.
Are the tuned values published elsewhere?
Yes, you’d have to dig a bit through the references. The papers by the MILC people (Bernard et al.) are the relevant ones for light mesons.
could you inform us of which other groups are obtaining successful predictions for the hadronic spectrum?
Well, every group does the meson spectrum to some degree, if only to fix parameters. Large scale calculations are being done by the CP-PACS group, see hep-lat/0409124 for example. It’s not clear if they will be able to get to light enough quark masses though.
The light quark masses are the key problem in the modern simulations. That’s where the difficulty is, getting the up/down mass light enough. Actually calculating meson masses is easy.
The results of quenching QCD, I don’t know if they are a cause for optimism or the contrary. It seems a bit as if anything goes. After all, the purely kinematical cross section of e+ e- -> muons is also within ten percent of the QED prediction, isn’t it?
3 * mass_cascade - mass_nucleon
agrees with experiment at the few percent level. The paper hep-lat/0304004 has some other things. We’ve computed a few other light hadron quantities since then, the \Omega mass, for example.
Hmm but, besides a fundamental mass, how many free parameters do you need, and how sensitive are the results to variations of these parameters? In 0304004, all the quark masses are tuned to reproduce the most important mass predictions. (Are the tuned values published elsewhere?)
And now that you are here, I wonder… could you inform us of which other groups are obtaining successful predictions for the hadronic spectrum?
Regarding Lattice QCD,
The CP-PACS group reproduced the light hadron spectrum to within around 10% using quenched (i.e. not entirely physical) lattice QCD in the mid-nineties. There’s a plot of this, which is what you probably saw in Wilczek’s talk. For modern full QCD simulations we’re not quite at the full hadron spectrum, but some things are done. In order to reduce systematic errors one often computes mass differences, for example, the combination
3 * mass_cascade - mass_nucleon
agrees with experiment at the few percent level. The paper hep-lat/0304004 has some other things. We’ve computed a few other light hadron quantities since then, the \Omega mass, for example.
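(For scale: with the experimental cascade mass of about 1.32 GeV and nucleon mass of about 0.94 GeV, that combination is roughly 3.0 GeV, so agreement at the few percent level means matching it to within something like 0.1 GeV.)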
It’s better in the heavy quark sector. Apart from a couple of lingering problems, the charm and bottom meson spectrum has been totally computed, and agrees with experiments. There are also calculations of various decay constants and form factors, which are a bit harder than masses.
My opinion: I think it is fair to say that the meson spectrum has not been calculated.
You’re wrong, as even a cursory glance at the lattice literature would show.
Note also that we are speaking of unstable particles, so the “pole mass” spectrum includes both mass and decay width
Many particles are stable; for example, you can get very clean pion masses. You are correct that unstable particles (such as the \rho) are harder, but it can be done.
Monte Carlo errors are high, finite size effects, etc.
Actually, those errors are fairly well under control for spectrum calculations. What really bites you are discretization errors and errors in the chiral extrapolations.
Of course, when you try to do harder things (glueball masses, K \to (2,3)\pi), statistical and finite volume errors are much worse. But for meson masses, it’s pretty well under control.
Hi JC,
I believe the calculation Robert mentions for K a Calabi-Yau gives a simple result: the low mass particles are massless and supersymmetry is unbroken. For obvious reasons this result is not heavily promoted.
A problem he doesn’t mention is how to give dynamics to the moduli parameters for the compactification manifold. Naively the effective action doesn’t depend on them, so you end up with massless scalars; less naively, you can believe in fixing them a la KKLT, and then you have the landscape to deal with.
Robert,
I’m familiar with what the string people do to obtain an effective action of the sort you have described. What I’m skeptical about concerning this particular effective action procedure is that I have never seen anybody produce a convincing calculation of the quark and lepton masses in this manner that agrees with the experimental data.
Also, what criteria are used to select the K in the 10D spacetime product R^4 x K, besides just trying out many different K’s (i.e. a torus, K3, a Calabi-Yau, etc., which may or may not break some of the SUSY)? With 10^100 or more possible Calabi-Yaus (depending on who you ask), what criteria are used to choose one besides trying out every single one in an exhaustive manner? What would happen if 10^20 or 10^30 of the possible Calabi-Yaus all produce similar quark and lepton mass spectra to within the experimental error bars? (Though it would be very impressive if exactly ONE Calabi-Yau actually could produce the correct quark and lepton mass spectrum, with all other Calabi-Yaus excluded.)
For the record, Tony Smith points out to me that his predictions for the neutrino mass and mixing angles are at:
http://www.valdostamuseum.org/hamsmith/snucalc.html#asno
and he also has predictions for the top quark and other quark masses at his web site.
I don’t have any objections to people posting here a simple link to their predictions of this kind, but keep in mind that I don’t want this weblog to be used as a forum for discussing the details of these arguments.
p-Adic thermodynamics for the super-Virasoro generator L_0, which takes essentially the role of the mass squared operator, provides an alternative view of elementary particle and hadron mass spectra, explaining the fundamental mass scales number-theoretically. The CKM matrix can also be deduced to a high degree from number-theoretic constraints. Since the possible Higgs contributes only a small shift to the fermion masses, the production cross section for the Higgs can be a factor of order 1/100 lower than in the standard model.
The five chapters in the second part of “TGD and p-Adic Numbers” contain the detailed calculations.
Matti Pitkanen
My opinion: I think it is fair to say that the meson spectrum has not been calculated. Note also that we are speaking of unstable particles, so the “pole mass” spectrum includes both mass and decay width. But even the mass alone, from QCD plus the lattice, has not really been obtained. Monte Carlo errors are high, finite size effects, etc.
Tony Smith can also compute the mass of a proton. And this at tree level. Which is remarkable for a composite object in a strongly coupled theory. Or numerology.
Seriously, it is much more convincing to argue that the mass scale of the proton is given by the QCD scale (where the running coupling is of order unity). This gives at least the right ballpark. Anything more precise requires really hard work.
Re particle masses from string theory: This comes from the effective action. Assume you have a massless scalar field in 10D. It obeys some wave equation like Box phi = 0. If your 10D space-time is the product R^4 x K for some compact six-manifold K, you can look for solutions of the Schroedinger-type equation Laplacian psi = k psi on K. For such an eigenvalue k you may take the original phi to be phi = psi(K) x f(R^4), where I have indicated the dependence on the coordinates in parentheses. Then the above wave equation implies that in R^4, f obeys the Klein-Gordon equation for mass squared m^2 = k.
This works not only for scalar fields, and thus once you know the K and the spectrum of the Laplacian on it, you know the masses of particles in 4D. The problem of identifying the correct K for the real world remains.
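Spelled out in formulas (a sketch, with the 10D wave operator split as \Box_{10} = \Box_4 + \Delta_K and conventions chosen so that the eigenvalue k is non-negative):

\Box_{10}\,\phi = (\Box_4 + \Delta_K)\,\phi = 0, \qquad \phi(x,y) = f(x)\,\psi(y), \qquad \Delta_K\,\psi = k\,\psi \quad\Longrightarrow\quad (\Box_4 + k)\,f = 0,

which is the 4D Klein-Gordon equation for f with m^2 = k; each eigenvalue of the Laplacian on K thus shows up as the mass squared of a 4D field.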
Tony Smith can calculate particle masses as ratios of volumes of (homogeneous symmetric) spaces. (But, I can’t get him to write down an integral I can do.)
He says the top quark is something like 138 GeV.
-drl
Peter,
I vaguely remember in the ’80s some people making claims of attempting (in principle) to calculate various “particle” masses using string theory. So far I haven’t seen any convincing results of anybody being able to do this successfully. I could never get anybody to explain precisely how to get these “particle” masses. Hopefully they weren’t referring directly to the Regge trajectory stuff from the late ’60s/early ’70s, which doesn’t seem very convincing.
Hi JC,
I don’t have any references at hand, but I’ve certainly seen papers and talks (e.g. Wilczek’s at the Sidneyfest) where people have shown the results of impressive lattice gauge theory calculations of the masses of these states that come out correct to within the errors expected in the Monte Carlo calculations. The size of the errors I vaguely recall as being in the range of at most a few percent.
So the fact that you can compute these hadronic masses from first principles makes numerology of them pretty pointless. Of course no one knows how to calculate quark masses from first principles, so there’s lots of room for speculation there. But please don’t do that here….
Peter,
(Slightly off-topic.)
Have the lattice gauge researchers been able to calculate the spectrum of any meson and/or baryon families yet, with a precision that makes it possible to compare with the experimental data? If this is possible to do, I would guess it would shut up most of the people who do “numerology” with particle masses.
The difference between the old 178 number and the new 174 number is not statistically significant. It’s very hard to measure this mass accurately since you have to accurately measure the energy contained in a jet. Please spare us further numerology involving quark or lepton masses, since these have all been at least approximately known for a while. Now if someone has a plausible prediction of the Higgs mass or absolute neutrino masses, they should get that on the record since during the next few years these things may finally get measured.
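(Numerically: 178.0 - 174.3 = 3.7 GeV, to be compared with quoted uncertainties of 4.3 and 3.4 GeV, so the shift is well under one standard deviation even before taking into account the overlap in the data sets used for the two numbers.)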
Peter:
Any comment on the fact that a certain John Martin predicted the 174 GeV figure correctly one year ago? He made a very good point that the electron mass is a very important fundamental mass unit. That is in line with my assertion in QUITAR that the electron mass equals alpha times the fundamental mass unit, which is about 70 MeV.
Quantoken