Dear Rajamani Narayanan
Senior Assistant to the Editor
Physical Review D
1 Research Road, Box 9000, Ridge, NY 11961-9000.
1 Nov 2001
FROM: Warren D. Smith (Research Staff; Library Committee Chair)
NEC Research Institute (Room 2B05) 4 Independence Way Princeton NJ 08540 USA
Phone: (609) 951-2717 Web: http://www.neci.nj.nec.com/homepages/wds
Email: wds@research.NJ.NEC.COM Fax: (609) 951-2488 ("for W.D.Smith 2B05")
----------------------------------------------------------------
This concerns my manuscript ``Zero point energies, the cosmical constant,
and supersymmetry'' (df8058) by Warren D. Smith (me).
It was rejected by a referee for Phys Rev D.
I will now argue that it probably should have been
accepted. It would be nice if the referee dropped his (her)
anonymity so we could correspond directly by email.
I think the referee is in many ways excellent, and
certainly considerably more expert than I am. However,
I still think the referee's arguments for rejecting my
paper seem to be invalid.
I am enclosing the entire ref report verbatim at bottom.
The ref's arguments basically can be summarized as
(1) Smith has merely rediscovered old (1970s and 1980s) ideas.
(2) Those ideas cannot work to solve the Cosmical Constant Crisis
(CCC) - for reasons which also were known long ago.
:: Unoriginal + Can't work ==> reject.
But:
(1) I was already aware that most of the ideas the referee says were
old were indeed not new (examination of the text of, and the citations
in, my manuscript clearly shows that). I was not as fully aware as the
referee, but I was aware. The PROBLEM is that the referee apparently
STOPPED READING my paper before reaching the GENUINELY RADICALLY NEW
ideas, namely the "spectral roughness" phenomenon and the "feedback
control" phenomenon.
(2) These new ideas are logically inescapable and overcome the
known problems preventing the previous ideas from working
to solve the CCC - it now can work, despite this previously being
thought impossible.
:: Original + Quite likely it can work ==> should NOT reject.
I have changed some sentences and retitled my manuscript in an effort
to make this misreading less likely to recur. The revised
manuscript is enclosed and also available at
http://www.neci.nj.nec.com/homepages/wds/zeropoint.ps
which is accessible from my works page
http://www.neci.nj.nec.com/homepages/wds/works.html
---------------------------------------------------------------
Let me now go into more detail. As you can see in the
"conclusion" section of my manuscript, my approach to solving
the CCC has 7 steps, all of them logically inescapable.
The ref appears to have stopped reading
(or anyway not appreciated and not commented on at all) everything
after step 3, i.e. over 1/2 of my paper. Here are the 7 steps:
1&2. Fermions have negative zeropoint energies which could (and with
SUSY will) cancel the boson zeropoint energies enabling the CC to
be zero AT FIRST ORDER.
3. At SECOND ORDER (this even in broken SUSY) the "supertrace theorem"
or "sum rule" still forces CC=0.
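For the editor's convenience, the tree-level sum rule being discussed (my schematic restatement; Eq. (16) of the manuscript is the authoritative form) is the standard supertrace mass formula of spontaneously broken supersymmetry:

```latex
% Tree-level supertrace sum rule (schematic form; anomaly-free case with
% vanishing U(1) D-term traces assumed): the (2J+1)-weighted,
% sign-alternating sum of squared masses over spins J vanishes.
\mathrm{Str}\,M^2 \;\equiv\; \sum_J (-1)^{2J}\,(2J+1)\,\mathrm{Tr}\,M_J^2 \;=\; 0
```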
As the referee correctly points out, these ideas were previously
known. As he also correctly says, at THIRD (i.e. "loop") order
in the asymptotic expansion of the CC in terms of the high
cutoff frequency, the supertrace theorem sum rule
probably is NOT true. Yes; I said all that in my manuscript, e.g. in
section 9 and in my quote from Weinberg.
The referee further argues that if it were true, then
SUSY partner particles would exist with lower masses than normal particles
and should already have been seen. That sounds like an excellent
argument to me, although I am too ignorant to understand some of what the
referee is saying [e.g. "breaking... happens in a sector very well
separated from the standard model (hidden sector)." What is
a "sector"? What does it mean for it to be "well separated?"].
He also says in certain recent models ("gauge mediation of supersymmetry
breaking") one can explicitly check that the resulting correction
to the supertrace formula is of order TeV^2. Again I am too stupid
to understand the paper he cites, but again this sounds like an
excellent point. [Basically I am extremely ignorant about SUSY theory,
but hopefully this does not matter much, since the main thrust of
my paper is about eigenvalues and spectral roughness, which are purely
mathematical things, while all one needs to know about SUSY for my
purposes are bare rudiments. The details of SUSY are not relevant
and my argument's validity should be independent of them.]
The referee then continues
"Thus for the standard model fields and their
superpartners, the loop corrections will be the LEADING contributions to
their masses. [Yes. I fully agree. I thought I'd said that. In fact I
had considerable discussion in section 5 of just at what order of
approximation one might expect equality to hold. The referee could
probably usefully contribute to this discussion.]
And these are the masses that are supposed to be of order
TeV (and as explained above, the masses of the hidden sector particles are
much heavier, and the TeV scale comes out as a loop suppressed version of
the real SUSY breaking scale). This way we see, that for the standard
model particles the sum rule is not at all obeyed, and the resulting
cosmological constant is of order TeV^4, which is still many orders of
magnitude higher than the expected (10^-3 eV)^4."
I agree. In short, violation of the sum rule at third (i.e. loop) order
happens and would lead to a vastly larger-than-true CC and should refute this
whole approach. BUT, if you move on to my step 4...
4. The third order terms in the asymptotic expansion of the CC
REALLY DO NOT EXIST, because they are SWAMPED by something I call
"spectral roughness." This was never previously understood at all.
(Indeed, even the SECOND order terms would be swamped by the roughness
in some universe-models, such as the cubical one I use most,
if they had not conveniently already been zero anyway;
see my section 12.1.)
To be poetic: imagine trying to find the height of the ocean (zero above
sea level, playing the role of the CC) by making a sequence of better and
better approximations (successive terms of an asymptotic series).
First you realize the answer is 0 plus or minus 1000 km.
Your next approximation is 0 +- 1 km.
Your next is NOT 0, but it is only 10 cm off. But 10 cm is
still vastly larger than the 1 Angstrom that is needed.
So you give up. Hopeless. BUT... the ocean is covered with
turbulent waves of height 1 meter (playing the role of
"spectral roughness")!! So you have succeeded in getting it
right at many, many places on the ocean! And then it turns out there is
a weird global "feedback mechanism" which causes the ocean to
self-adjust so that only the places where you got it right, matter...
(End of poetry attempt).
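To put toy numbers on that analogy (this is purely my illustrative sketch; the function cc_toy and every constant in it are invented stand-ins, not taken from the manuscript): when a curve's rough wiggles are larger than its smooth third-order term, the curve crosses zero over and over at nearby points, so exact zeros always exist close at hand.

```python
import math

# Toy sketch only: cc_toy and its constants are invented for illustration.
# It models a small smooth "loop" term swamped by a much larger rough
# oscillation (the sine standing in for spectral roughness).
def cc_toy(L, loop_term=1e-3, rough_amp=1e-1):
    return loop_term + rough_amp * math.sin(1e6 * L)

# Sample the curve at many nearby "universe sizes" L and count sign changes:
# even though the smooth term is nonzero, the rough curve repeatedly crosses
# zero, so values of L with CC = 0 exist extremely close to any starting L.
values = [cc_toy(1.0 + k * 1e-8) for k in range(1000)]
crossings = sum(1 for a, b in zip(values, values[1:]) if a * b < 0)
print(crossings > 0)  # True: the curve crosses zero within this tiny window
```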
Spectral roughness is quite well understood (as I explained) in
some model universes (such as cubes with periodic boundary
conditions) but not very well understood in others. Using this
understanding one can try to argue it ought to swamp the putative
third (loop) order terms in any putative asymptotic expansion of the CC.
I.e., those terms have no hope of being valid according to the usual
definitions of validity for an asymptotic series. So the worry that
those terms are NOT ZERO, DOES NOT MATTER. The fact that they are TeV
instead of meV DOES NOT MATTER. That all sounded like a crushing objection,
BUT it is not, because...
5,6,7: There is a feedback process (also explained) which now causes
the CC to "self-adjust" to the right places on the spectral roughness
so that CC=0 to VERY good approximation. Basically (in our
model of the universe as a parallelepiped), if the CC is too large,
then the size of the universe will shrink - extremely quickly - by
an - extremely tiny - amount, until we reach a point on the
curve (which is rough, not smooth)
of CC versus universe-size, where the CC is zero. The roughness
is so much larger than the third order terms that the fact those terms
are very nonzero, does not matter.
Furthermore, the large violations of CC=0 that the referee worries
so much about, actually cause my feedback self-adjustment process to
work BETTER the larger they are! So in some sense the problem
solves itself.
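The shrink-until-zero feedback loop can be sketched numerically (again purely my toy illustration: cc_toy and all constants below are invented, with a rough sine playing the role of spectral roughness): starting from a size where the toy CC is badly nonzero, shrinking the size in tiny increments quickly lands on a nearby zero crossing of the rough curve.

```python
import math

# Toy stand-in for the rough CC-versus-universe-size curve (all constants
# invented for illustration; the sine plays the role of spectral roughness).
def cc_toy(L, loop_term=1e-3, rough_amp=1e-1):
    return loop_term + rough_amp * math.sin(1e6 * L)

def self_adjust(L_start, shrink=1e-7, tol=1e-9):
    # The "universe" shrinks in tiny increments until the CC changes sign...
    a, fa = L_start, cc_toy(L_start)
    b, fb = a, fa
    while fa * fb > 0:
        a, fa = b, fb
        b = a - shrink
        fb = cc_toy(b)
    # ...then settles onto the zero crossing (bisection of the bracket [b, a]).
    while abs(cc_toy((a + b) / 2)) > tol:
        m = (a + b) / 2
        if cc_toy(m) * fa > 0:
            a, fa = m, cc_toy(m)
        else:
            b = m
    return (a + b) / 2

L0 = 1.0
L_final = self_adjust(L0)
print(abs(cc_toy(L_final)) <= 1e-9)  # True: CC has self-adjusted to ~0
print(0 < L0 - L_final < 1e-5)       # True: via an extremely tiny shrinkage
```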
OK? I hope you now realize that my two new ideas are very important.
Perhaps they even deserve the adjectives "revolutionary"
and "breakthrough." [The whole "rough" versus smooth mathematical
phenomenon is one that most or all physics theorists have, due to a
wrong mindset, basically totally ignored, and quite likely it comes
up elsewhere in physics too.] Perhaps on the other hand they are just
another flawed approach (as I explain, there are numerous things
still not understood about spectral roughness [and about SUSY] - and
the model universes in which it is understood, are too simplistic).
But if so, the referee has completely failed to find - and has not
even tried to find - that flaw.
So in my opinion the whole thing deserves publication, so that these new
ideas can be aired and other people CAN try to find the flaws in it (or
not), as the case may be. Quite a lot more work will be needed, I think,
before the final verdict can be rendered, and I do not think the referee
(or any single human) is capable of rendering that verdict now, before
publication.
I hope you (now) agree!
PS. I would be happy to modify my paper to incorporate some
of the statements of, and cites made by, the referee (whom I would
be glad to credit, if I knew his/her name). (Indeed I have now added
footnote 15, attributed to the referee.)
I'd also be happy to correspond via email with him/her.
(Hey! Just send me an email!)
Cheers.
-------------------------------------------------------------------------------
His (or her) Referee report, now repeated verbatim:
This paper attempts to solve the cosmological constant problem using
supersymmetry. The author does give a nice overview of supersymmetry and
the cosmological constant problem in the first part of the paper, however
none of this is new, and is therefore not appropriate for publication
in Physical Review D.
The solution that the author is presenting is a tantalizing possibility
that particle physicists have been tought
[sic; "have thought" was apparently meant]
about a long time ago. The point
that the author makes is that even in the presence of spontaneously broken
supersymmetries the supertrace of the masses of the particles still
vanishes, and this is exactly one of the quantities that appears in the
one loop expression for the cosmological constant. This is all known for a
long time, and unfortunately does not solve the cosmological constant
problem. In fact, these sum rules were exactly the ones that hampered for
a long time the development of a phenomenologically acceptable
supersymmetric version of the standard model. The problem is, that at tree
level, these sum rules hold representation by representation. So from this
one can show, that if these sum rules really hold, then one should have
already observed at least one superpartner, since the sum rule can hold
only if some of the superpartners become lighter than the ordinary
particles. A beautiful derivation of this result can be found in the
famous paper by Dimopoulos and Georgi, Nucl.Phys.B193:150,1981. This is
the reason why none of the early attempts by Fayet and others led to an
acceptable supersymmetric standard model. So in order to be able to build
such a model, one has to badly violate the sum rule in Eq. (16) of this
paper. How is this possible, if we agreed that the tree level result is
valid, and violated only by loop corrections? The way ALL realistic models
of supersymmetry breaking achieve this is by assuming, that the breaking
scale of supersymmetry is very high, but it happens in a sector very well
separated from the standard model (hidden sector). In this sector, the
supertrace formula (16) is very well satisfied. The standard model only
learns about supersymmetry breaking by loop effects (either gravitational
loops or gauge loops). Thus for the standard model fields and their
superpartners, the loop corrections will be the LEADING contributions to
their masses. And these are the masses that are supposed to be of order
TeV (and as explained above, the masses of the hidden sector particles are
much heavier, and the TeV scale comes out as a loop suppressed version of
the real SUSY breaking scale). This way we see, that for the standard
model particles the sum rule is not at all obeyed, and the resulting
cosmological constant is of order TeV^4, which is still many orders of
magnitude higher than the expected (10^-3 eV)^4. Therefore, a realistic
supersymmetric model after susy breaking can not itself resolve the
cosmological constant problem. For example, in one of the most popular
recent models called gauge mediation of supersymmetry breaking one can
explicitly check that the resulting correction to the supertrace formula
is of order TeV^2. A nice reference for these models is
G.F. Giudice, R. Rattazzi Phys.Rept.322:419-499,1999.
However, I should add that there are several attempts along the lines
that the author is suggesting to solve the cosmological constant and the
hierarchy problems, that is by cancelling the supertraces in a theory
WITHOUT requiring supersymmetry. The nicest description of these attempts
can be found in the paper Keith R. Dienes hep-ph/0104274.
In summary, I find that even though the paper is a nice attempt at solving
the cosmological constant problem, since the concepts in the paper are not
new, and since as explained above the mechanism the author is trying to
argue for to solve the cosmological constant problem can not possibly work
in a realistic supersymmetric model, I recommend not to publish this paper
in Physical Review D.
-------------------------------------------------------------------------