Thursday 4 December 2014

From one Christmas to the next, the effective number of neutrinos draws closer to that of the Magi

Christmas 2012
Just before Christmas [2012], the WMAP collaboration posted the 9-year update of their Cosmic Microwave Background [CMB] results...
The effective number of relativistic degrees of freedom at the time of CMB decoupling, the so-called Neff parameter, is now Neff = 3.26 ± 0.35 [since revised to 3.84 ± 0.40], compared to Neff = 4.34 ± 0.87 quoted in the 7-year analysis. For the fans and groupies of this observable it was like finding a lump of coal under the Christmas tree...

So, what is this mysterious Neff parameter? According to the standard cosmological model, at temperatures above 10 000 Kelvin the energy density of the universe was dominated by a plasma made of neutrinos (40%) and photons (60%). The photons today make up the CMB, about which we know everything. The neutrinos should also be around, but for the moment we cannot study them directly. However, we can indirectly infer their presence in the early universe via other observables. First of all, the neutrinos affect the energy density stored in radiation... which controls the expansion of the Universe during the epoch of radiation domination. The standard model predicts Neff equal to the number of known neutrino species, that is Neff=3 (in reality 3.05, due to finite temperature and decoupling effects). Thus, by measuring how quickly the early Universe was expanding, we can determine Neff. If we find Neff≈3 we confirm the standard model and close the store. On the other hand, if we measured Neff significantly larger than 3, that would mean a discovery of additional light degrees of freedom in the early plasma that are unaccounted for in the standard model. Note that these new hypothetical particles don't have to be similar to neutrinos; in particular they could be bosons, and/or have a different temperature (in which case they would correspond to a non-integer increase of Neff). All that is required of them is that they are weakly interacting and light enough to be relativistic at the time of CMB decoupling. Theorists have dreamed up many viable candidates that could show up in Neff: additional light neutrino species, axions, dark photons, etc... 
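A reminder from the blogger, for reference and not part of the quoted post: Neff is conventionally defined through the total radiation energy density after electron-positron annihilation,

$$\rho_{\rm rad} \;=\; \rho_\gamma \left[\, 1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm eff} \right],$$

so that three standard neutrino species add roughly 68% to the photon energy density.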
The interest of particle physicists in Neff comes from the fact that, until recently, the CMB data also pointed at Neff≈4 with a comparable error. The impact of Neff on the CMB is much more contrived, and there are many separate effects one needs to take into account. For example, a larger Neff delays the moment of matter-radiation equality, which affects the relative strength and positions of the peaks. Furthermore, Neff affects how the perturbations grow during the radiation era, which may show up in the CMB spectrum at l ≥ 100. Finally, the larger Neff, the larger the effect of Silk damping at l ≥ 1000. Each single observable has a large degeneracy with other input parameters (matter density, Hubble constant, etc.) but, once the CMB spectrum is measured over a large range of angular scales, these degeneracies are broken and stringent constraints on Neff can be derived. That is what happened recently, thanks to the high-l CMB measurements from the ACT and SPT telescopes, and some input from other astrophysical observations. The net result [Neff = 3.84 ± 0.40] ... using [the CMB data] together [with] an input from Baryon Acoustic Oscillations and Hubble constant measurements... can be well accounted for by the three boring neutrinos of the standard model.
Jester, Friday, 18 January 2013
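To make the point about matter-radiation equality a little more concrete, here is a back-of-the-envelope sketch (the blogger's own, not taken from the quoted post; the density parameters are the usual fiducial values, assumed here only for illustration):

```python
# Rough estimate of how Neff shifts matter-radiation equality.
# OMEGA_GAMMA and OMEGA_M are assumed fiducial densities (Omega*h^2), for illustration only.
OMEGA_GAMMA = 2.47e-5   # photon density today
OMEGA_M = 0.14          # matter density today

def z_equality(n_eff):
    """Redshift of matter-radiation equality for a given effective number of neutrino species."""
    omega_rad = OMEGA_GAMMA * (1.0 + 0.2271 * n_eff)   # 0.2271 = (7/8)*(4/11)**(4/3)
    return OMEGA_M / omega_rad - 1.0

for n_eff in (3.046, 3.84, 4.34):
    print(f"Neff = {n_eff:5.3f}  ->  z_eq ~ {z_equality(n_eff):.0f}")
```

A larger Neff pushes equality to a lower redshift, i.e. to a later time, which is one of the handles the CMB has on this parameter.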
Christmas 2014
The new results from the Planck collaboration also concern another type of very elusive particle: the neutrino. These "ghost" elementary particles, produced in abundance in the Sun for example, pass through our planet with virtually no interaction, which makes their detection extremely difficult. Directly detecting the very first neutrinos, produced less than a second after the Big Bang and extremely low in energy, is therefore not feasible. Yet, for the first time, Planck has unambiguously detected the effect of these primordial neutrinos on the map of the cosmic microwave background.

The primordial neutrinos picked up by Planck were released about one second after the Big Bang, when the universe was still opaque to light but already transparent to these particles, which can escape freely from a medium that is opaque to photons, such as the core of the Sun. 380,000 years later, when the light of the cosmic microwave background was released, it carried the imprint of the neutrinos, because the photons had interacted gravitationally with these particles. Observing the oldest photons thus made it possible to check the properties of the neutrinos.
PRELIMINARY - Constraints on, and relation between, the number of neutrino species, the present-day expansion rate of the universe H0, and the parameter σ8, which characterizes the clustering of matter on large scales. The coloured points correspond to the constraints from temperature + gravitational lensing only; the black contours are obtained by adding the polarization at all large angular scales and the baryon acoustic oscillations. The vertical lines correspond to the value of Neff predicted by various models: the solid line corresponds to the standard model, the dashed lines to models with a fourth neutrino species (depending on the type of neutrino, active or sterile, and the epoch of its decoupling). © ESA - Planck collaboration
Planck's observations agree with the standard model of particle physics. They all but exclude the existence of a fourth family of neutrinos, previously considered a possibility on the basis of the final data from the WMAP satellite, Planck's American predecessor. Finally, Planck sets an upper limit on the sum of the neutrino masses, now established at 0.23 eV (electronvolt).
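A note from the blogger, not part of the press release: the reason a CMB experiment can bound the sum of neutrino masses at all is that massive neutrinos contribute to today's matter budget roughly as

$$\Omega_\nu h^2 \;\simeq\; \frac{\sum m_\nu}{93.1\ \mathrm{eV}},$$

so the quoted limit of 0.23 eV corresponds to a neutrino contribution of only about half a percent of the critical density (for h ≈ 0.7), yet still enough to leave a detectable imprint on the growth of structure.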
The data from the full mission and the associated papers, which will be submitted to the journal Astronomy & Astrophysics (A&A), will be available from 22 December 2014 on the ESA website. These results come in particular from measurements made with the high-frequency instrument HFI, designed and assembled under the leadership of the Institut d'astrophysique spatiale (CNRS/Université Paris-Sud) and operated under the leadership of the Institut d'astrophysique de Paris (CNRS/UPMC) by various laboratories involving the CEA, the CNRS and the universities, with funding from CNES and CNRS.
CNRS press release, Monday 1 December 2014

The First Christmas
...nowhere in the Bible is the number of these Magi "Kings" specified, let alone their names! It is therefore open to interpretation, depending on the author: they are only two on the wall ornaments of the catacombs of Saint Peter, three in the catacombs of Priscilla, and four in the catacombs of Domitilla. The Syrian tradition even holds that they were twelve! ...

Yet, over the centuries, custom has tended to settle on three of them... Why? Simply because the Gospel of Matthew mentions three gifts given to Jesus: gold (a symbol of royalty: the Magi saw in Jesus Christ the future king of the Jews...), frankincense (a symbol of divinity) and myrrh (widely used in embalming rites, it symbolizes the humanity of Jesus, even if this interpretation is not unanimously accepted)...

The names Melchior, Gaspard and Balthazar appear for the first time in the sixth century AD, in an apocryphal Gospel... But there is worse! The "Magi Kings" were in fact not kings at all! They were only magi, that is to say, specialists in astronomy and divination.
The Magi were not three. For that matter, they were not even kings...
Djinnzz, 16/07/2013

Wednesday 3 December 2014

Why is the blogger Jester (a particle physicist) so (reasonably) "mean" (to his astrophysicist colleagues)?

Hypothesis 1: because he knows that any experimental evidence is probably wrong (without a correct estimate of its uncertainty) until proven otherwise (by its reproducibility)
There indeed seems to be an excess in the 2-4 GeV region. However, given the size of the error bars and of the systematic uncertainties, not to mention how badly we understand the astrophysical processes in the galactic center, one can safely say that there is nothing to be excited about for the moment. 

resonaances.blogspot.fr/2009/11/fermi-says-nothinglike-sure-sure.html 

It is well known that sigmas come in varieties: there are more significant 3 sigmas, less significant 3 sigmas, and astrophysical 3 sigmas. 

http://resonaances.blogspot.fr/2011/04/another-3-sigma-from-cdf.html 

Notice that different observations of the helium abundance are not quite consistent with each other, but that's normal in astrophysics; the rule of thumb is that 3 sigma uncertainty in astrophysics is equivalent to 2 sigma in conventional physics. 

http://resonaances.blogspot.fr/2013/01/how-many-neutrinos-in-sky.html 

Although the natural reaction here is a resounding "are you kidding me", the claim is that the excess near 3.56 keV ...  over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If the excess is interpreted as a signal of new physics, one compelling (though not unique) explanation is in terms of sterile neutrino dark matter. In that case, the measured energy and intensity of the line correspond to the neutrino mass 7.1 keV and a mixing angle of order 5*10^-5, see the red star in the plot. This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density.
Clearly, a lot could possibly go wrong with this kind of analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the 2-10 keV range where the search was performed is pooped with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at 3.56 keV. The results depend on whether these other emission lines are modeled properly. Moreover, the known Ar XVII dielectronic recombination line happens to be nearby at 3.62 keV. The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we can safely get excited.
 
http://resonaances.blogspot.fr/2014/02/signal-of-neutrino-dark-matter.html
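A side note from the blogger, not in the quoted post: the connection between the line energy and the sterile neutrino mass is simply two-body decay kinematics. A sterile neutrino decaying radiatively, νs → ν + γ, with an essentially massless daughter neutrino emits a photon carrying half of its mass:

$$E_\gamma \simeq \frac{m_s}{2} \quad\Rightarrow\quad m_s \simeq 2 \times 3.56\ \mathrm{keV} \simeq 7.1\ \mathrm{keV}.$$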



Hypothesis 2: because he is a little weary of not finding in his laboratory the new physics (that others already see in their observatories)
There is no evidence of new physics from accelerator experiments (except, perhaps, for the 3-3.5 σ discrepancy of the muon (g-2) [7, 8]). Most of the experimental evidence for new physics comes from the sky, like for Dark Energy, Dark Matter, baryogenesis and also neutrino oscillations (that were first observed in solar and atmospheric neutrinos). One expected new physics at the electroweak scale based on a "natural" solution of the hierarchy problem [4]. The absence so far of new physics signals casts doubts on the relevance of our concept of naturalness. 
(Submitted on 8 Jul 2014 (v1), last revised 17 Jul 2014 (this version, v2))



Tuesday 25 November 2014

Saving physicist John Bell (from the clutches of a polemical blogger)

A personal reaction to a post by Lubos Motl
J. Bell: a mediocre physicist? Are you talking about the same guy who, together with R. Jackiw (and independently of S. Adler), discovered the chiral anomaly, such an important phenomenon in quantum field theory? I can agree with all your technical arguments in support of QM against classical zealots, but in my opinion the pedagogical value of your post would be undermined if you did not recognize the pedagogical usefulness of Bell's theorem, and if you did not distinguish between the necessarily old-fashioned conceptions or terminology used by Bell in the sixties and the loosely defined concepts of a large number of QM's contenders nowadays.

John Bell and his greatest contribution to physics
... John Bell codiscovered the mechanism of anomalous symmetry breaking in quantum field theory. Indeed, our paper on this subject is his (and my) most-cited work. The symmetry breaking in question is a quantum phenomenon that violates the correspondence principle; it arises from the necessary infinities of quantum field theory. Over the years it has become evident that theoretical/mathematical physicists are not the only ones to acknowledge this effect. Nature makes fundamental use of the anomaly in at least two ways: the neutral pion’s decay into two photons is controlled by the anomaly [1, 2] and elementary fermions (quarks and leptons) arrange themselves in patterns such that the anomaly cancels in those channels to which gauge bosons – photon, W, Z – couple [3]. (There are also phenomenological applications of the anomaly to collective, as opposed to fundamental, physics – for example, to edge states in the quantum Hall effect.)
R. Jackiw, November 2000
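For the curious reader, and with the caveat that signs and normalizations depend on conventions, the Adler-Bell-Jackiw result Jackiw refers to is the anomalous divergence of the axial current of a charged fermion coupled to electromagnetism,

$$\partial_\mu j_5^\mu \;=\; \frac{e^2}{16\pi^2}\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma},$$

which is precisely the term that controls the π0 → γγ decay mentioned above.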

The last word goes to Richard Feynman and Alain Aspect
Each time one returns to the problem we have just presented, one cannot help asking the question: is there a real problem? It must be admitted that the answer to this question can vary, even among the greatest physicists. In 1963, R. Feynman gave a first answer to this question in his famous physics lectures: "This point was never accepted by Einstein... it became known as the Einstein-Podolsky-Rosen paradox. But when the situation is described as we have done it here, there doesn't seem to be any paradox at all...". Two decades later, Feynman expressed a radically different opinion, still about the EPR situation: "we have always had a very great difficulty understanding the world view that Quantum Mechanics implies... It has not yet become obvious to me that there is no real problem... I have always deluded myself by confining the difficulties of Quantum Mechanics into a smaller and smaller corner, and I find myself more and more bothered by this particular point. It seems ridiculous that it can be reduced to a numerical question, the fact that one thing is bigger than another thing. But there you are: it is bigger..."
Alain Aspect
We must be grateful to John Bell for having shown us that philosophical questions about the nature of reality could be translated into a problem for physicists, where naive experimentalists can contribute. 
Alain Aspect (Submitted on 2 Feb 2004)



Friday 31 October 2014

Scare me (or would you rather reassure me)...

//The blogger is celebrating Halloween today in his own way, by calling on a few physicists who do not hesitate to discuss the nightmare scenario of high-energy physics, namely no phenomena beyond those predicted by the Standard Model being observable at the LHC. The aim, of course, is to reassure himself by showing that these same physicists are thinking about what could move their discipline forward.


... Mr Shifman
String theory appeared as an extension of the dual resonance model of hadrons in the early 1970s, and by the mid-1980s it had raised expectations for the advent of “the theory of everything” to Olympic heights. Now we see that these heights are unsustainable. Perhaps this was the greatest mistake of the string-theory practitioners. They cornered themselves by promising to give answers to each and every question that arises in the realm of fundamental physics, including the hierarchy problem, the incredible smallness of the cosmological constant, and the diversity of the mixing angles. I think by now the “theory-of-everything-doers” are in disarray, and a less formal branch of string theory is in crisis [a more formal branch evolved to become a part of mathematics or (in certain occasions) mathematical physics]. 
At the same time, leaving aside the extreme and unsupported hype of the previous decades, we should say that string theory, as a qualitative extension of field theory, exhibits a very rich mathematical structure and provides us with a new, and in a sense superior, understanding of mathematical physics and quantum field theory. It would be a shame not to explore this structure. And, sure enough, it was explored by serious string theorists. 
The lessons we learned are quite illuminating. First and foremost we learned that physics does not end in four dimensions: in certain instances it is advantageous to look at four dimensional physics from a higher-dimensional perspective... A significant number of advances in field theory, including miracles in N = 4 super-Yang-Mills... came from the string-theory side...
... since the 1980s Polyakov was insisting that QCD had to be reducible to a string theory in 4+1 dimensions. He followed this road... arriving at the conclusion that confinement in QCD could be described as a problem in quantum gravity. This paradigm culminated in Maldacena’s observation (in the late 1990’s) that dynamics of N=4 super-Yang-Mills in four dimensions (viewed as a boundary of a multidimensional bulk) at large N can be read off from the solution of a string theory in the bulk... 
Unfortunately (a usual story when fashion permeates physics), people in search of quick and easy paths to Olympus tend to overdo themselves. For instance, much effort is being invested in holographic description in condensed matter dynamics (at strong coupling). People pick up a supergravity solution in higher dimensions and try to find out whether or not it corresponds to any sensible physical problem which may or may not arise in a condensed matter system. To my mind, this strategy, known as the “solution in search of a problem” is again a dead end. Attempts to replace deep insights into relevant dynamics with guesses very rarely lead to success.
(Submitted on 31 Oct 2012 (v1), last revised 22 Nov 2012 (this version, v3))

... Mr White
In his overview talk[1] at Strings 2013, David Gross discussed the “nightmare scenario” in which the Standard Model Higgs boson is discovered at the LHC but no other new short-distance physics, in particular no signal for SUSY, is seen. He called it the “extreme pessimistic scenario” but also said it was looking more and more likely and (if it is established) then, he acknowledged
“We got it wrong.” “How did we misread the signals?” “What to do?”.
He said that if it comes about definitively the field, and string theorists in particular, will suffer badly. He said that it will be essential for theorists who entered the field most recently to figure out where previous generations went wrong and also to determine what experimenters should now look for.
In the following, I will argue that a root cause has been the exaggeration of the significance of the discovery of asymptotic freedom that has led to the historically profound mistake of trying to go forward by simply formulating new short-distance theories, supersymmetric or otherwise, while simultaneously ignoring both deep infrared problems and fundamental long-distance physics.
In his recent “Welcome” speech[2] at the Perimeter Institute, Neil Turok expressed similar concerns to those expressed by Gross. He said that
“All the {beyond the Standard Model} theories have failed ... Theoretical physics is at a crossroads right now ... {there is} a very deep crisis.”
He argued that nature has turned out to be simpler than all the models - grand unified, super-symmetric, super-string, loop quantum gravity, etc, and that string theorists, especially, are now utterly confused - with no predictions at all. The models have failed, in his opinion, because they have no new, simplifying, underlying principle. They have complicated the physics by adding extra parameters, without introducing any simplifying concepts.

(Submitted on 5 Jun 2014)

Friday 24 October 2014

On the art of measuring the Hubble constant while looking for our place in the middle of nowhere

The long march towards a "precision cosmology"

The plots below show the time evolution of our knowledge of the Hubble Constant H0, the scaling between radial velocity and distance in kilometers per second per Megaparsec, since it was first determined by Lemaitre, Robertson and Hubble in the late 1920's. The first major revision to Hubble's value was made in the 1950's due to the discovery of Population II stars by W. Baade. That was followed by other corrections for confusion, etc. that pretty much dropped the accepted value down to around 100 km/s/Mpc by the early 1960's.




The last plot shows modern (post Hubble Space Telescope) determinations, including results from gravitational lensing and applications of the Sunyaev-Zeldovich effect. Note the very recent convergence to values near 65 +/- 10 km/sec/Mpc (about 13 miles per second per million light-years)... Currently, the old factor of two discrepancy in the determination of the cosmic distance scale has been reduced to a dispersion of the order of 10 km/s/Mpc out of 65-70, or 15-20%. Quite an improvement!
One major additional change in the debate since the end of the 20th century has been the discovery of the accelerating universe (cf. Perlmutter et al. 1998 and Riess et al. 1998) and the development of "Concordance" Cosmology. In the early 1990's, one of the strongest arguments for a low (~50 km/s/Mpc) value of the Hubble Constant was the need to derive an expansion age of the universe that was older than, now, the oldest stars, those found in globular star clusters. The best GC ages in 1990 were in the range 16-18 Gyr. The expansion age of the Universe depends primarily on the Hubble constant but also on the value of various other cosmological parameters, most notably then the mean mass density over the closure density, ΩM. For an "empty" universe, the age is just 1/H0 or 9.7 Gyr for H0=100 km/s/Mpc and 19.4 Gyr for 50 km/s/Mpc. For a universe with ΩM=1.000, the theorist's favorite because that is what is predicted by inflation, the age is 2/3 of that for the empty universe. So if the Hubble Constant was 70 km/s/Mpc, the age of an empty universe was 13.5 Gyr, less than the GC ages, and if ΩM was 1.000 as favored by the theorists, the expansion age would only be 9 Gyr, much much less than the GC ages. Conversely if H0 was 50 km/s/Mpc, and ΩM was the observers' favorite value of 0.25, the age came out just about right. Note that this still ruled out ΩM=1.000 though, inspiring at least one theorist to proclaim that H0 must be 35!

The discovery of acceleration enabled the removal of much of this major remaining discrepancy in timescales, that between the expansion age of the Universe and the ages of the oldest stars, those in globular clusters. The introduction of a Cosmological constant, Λ, one of the most probable causes for acceleration, changes the computation of the Universe's expansion age. A positive ΩΛ increases the age. The Concordance model has H0=72 km/s/Mpc and a total Ω=1.0000... made up of ΩΛ=0.73 and ΩM=0.27. Those values yield an age for the Universe of ~13.7 Gyr. This alone would not have solved the timescale problem, but a revision of the subdwarf distance scale, based on significantly improved parallaxes to nearby subdwarfs from the ESA Hipparcos mission, increased the distances to galactic globular clusters and thus decreased their estimated ages. The most recent fits of observed Hertzsprung-Russell diagrams to theoretical stellar models (isochrones) by the Yale group (Demarque, Pinsonneault and others) indicate that the mean age of galactic globulars is more like 12.5 Gyr, comfortably smaller than the expansion age.
John P. Huchra, Copyright 2008
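The ages quoted by Huchra are easy to reproduce. Here is a short numerical sketch (the blogger's, not from Huchra's page), integrating the Friedmann equation for the parameter values mentioned in the text:

```python
# Expansion age t0 = integral_0^inf dz / [(1+z) H(z)], with
# H(z) = H0 * sqrt(Om*(1+z)^3 + Ok*(1+z)^2 + OL) and Ok = 1 - Om - OL.
import numpy as np
from scipy.integrate import quad

KM_PER_MPC = 3.0857e19   # kilometres in a megaparsec
S_PER_GYR = 3.156e16     # seconds in a gigayear

def age_gyr(h0, omega_m, omega_lambda):
    """Expansion age in Gyr for a given H0 (km/s/Mpc), matter density and cosmological constant."""
    omega_k = 1.0 - omega_m - omega_lambda
    hubble_time = KM_PER_MPC / h0 / S_PER_GYR    # 1/H0 in Gyr
    e_of_z = lambda z: np.sqrt(omega_m*(1 + z)**3 + omega_k*(1 + z)**2 + omega_lambda)
    factor, _ = quad(lambda z: 1.0 / ((1 + z) * e_of_z(z)), 0.0, np.inf)
    return hubble_time * factor

print(age_gyr(100, 0.0, 0.0))     # "empty" universe, H0=100: ~9.8 Gyr, i.e. 1/H0
print(age_gyr(70, 1.0, 0.0))      # Omega_M=1, H0=70: ~9.3 Gyr, i.e. 2/3 of 1/H0
print(age_gyr(72, 0.27, 0.73))    # concordance values: ~13.5 Gyr
```

For instance, age_gyr(50, 0.25, 0.0) gives roughly 16 Gyr, the "just about right" number mentioned above when compared with the old globular-cluster ages.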
The last steps...
The recent Planck observations of the cosmic microwave background (CMB) lead to a Hubble constant of H0=67.3±1.2 km/s/Mpc for the base six-parameter ΛCDM model (Planck Collaboration 2013, hereafter P13). This value is in tension, at about the 2.5σ level, with the direct measurement of H0=73.8 ± 2.4 km/s/Mpc reported by Riess et al. (2011, hereafter R11). If these numbers are taken at face value, they suggest evidence for new physics at about the 2.5σ level (for example, exotic physics in the neutrino or dark energy sectors...). The exciting possibility of discovering new physics provides strong motivation to subject both the CMB and H0 measurements to intense scrutiny. This paper presents a reanalysis of the R11 Cepheid data. The H0 measurement from these data has the smallest error and has been used widely in combination with CMB measurements for cosmological parameter analysis (e.g. Hinshaw et al. 2012; Hou et al. 2012; Sievers et al. 2013). The study reported here was motivated by certain aspects of the R11 analysis: the R11 outlier rejection algorithm (which rejects a large fraction, ∼ 20%, of the Cepheids), the low reduced χ2 values of their fits, and the variations of some of the parameter values with different distance anchors, particularly the metallicity dependence of the period-luminosity relation... 
[The] figure [below] compares these two estimates of H0 with the P13 results from the [Planck+WP+highL (ACT+South Pole Telescope)+BAO (2dF Galaxy Redshift and SDSS redshift surveys)] likelihood for the base ΛCDM cosmology and some extended ΛCDM models. I show the combination of CMB and Baryon Acoustic Oscillations [BAO] data since H0 is poorly constrained for some of these extended models using CMB temperature data alone. (For reference, for this data combination H0=67.80±0.77 km/s/Mpc in the base ΛCDM model.) The combination of CMB and BAO data is certainly not prejudiced against new physics, yet the H0 values for the extended ΛCDM models shown in this figure all lie within 1σ of the best fit value for the base ΛCDM model. For example, in the models exploring new physics in the neutrino sector, the central value of H0 never exceeds 69.3 km/s/Mpc. If the true value of H0 lies closer to, say, H0=74 km/s/Mpc, the dark energy sector, which is poorly constrained by the combination of CMB and BAO data, seems a more promising place to search for new physics. In summary, the discrepancies between the Planck results and the direct H0 measurements... are not large enough to provide compelling evidence for new physics beyond the base ΛCDM cosmology.

The direct estimates (red) of H0 (together with 1σ error bars) for the NGC 4258 distance anchor  and for all three distance anchors. The remaining (blue) points show the constraints from P13 for the base ΛCDM cosmology and some extended models combining CMB data with data from baryon acoustic oscillation surveys. The extensions are as follows: mν, the mass of a single neutrino species; mν + Ωk, allowing a massive neutrino species and spatial curvature; Neff , allowing additional relativistic neutrino-like particles; Neff +msterile, adding a massive sterile neutrino and additional relativistic particles; Neff+mν, allowing a massive neutrino and additional relativistic particles; w, dark energy with a constant equation of state w = p/ρ; w + wa , dark energy with a time varying equation of state. I give the 1σ upper limit on mν and the 1σ range for Neff . 
(Submitted on 14 Nov 2013 (v1), last revised 8 Feb 2014 (this version, v2))
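For what it is worth, the "2.5σ" quoted above is simply the difference between the two central values in units of the quadrature sum of the quoted errors, assuming independent Gaussian uncertainties:

$$\frac{73.8-67.3}{\sqrt{2.4^2+1.2^2}} \;\simeq\; \frac{6.5}{2.7} \;\simeq\; 2.4\,\sigma .$$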

"cosmologie de précision" : un terme à prendre avec des pincettes 




Looking for our place in the middle of nowhere...
Such could be the purpose of cosmology from an anthropological perspective. But this blog is not the place for that kind of debate. The blogger prefers to leave the final word to a great lady of astronomy teaching in France, Lucienne Gougenheim, in the hope that the foregoing is a good illustration of the continued relevance of her general conclusion, taken from a pedagogical lecture on the Hubble constant and the age of the Universe dating from 1996

  • Distance is not the only parameter on which the value of H0 depends...
  • The nature of the standard candle is complex; even when we have a good theoretical understanding of the property that serves as a distance criterion, the importance of the various parameters on which it depends must be discussed.
  • One can only go from the knowledge of H0 to that of the age of the universe within the framework of a cosmological model.
  • ...a complex problem can only be understood (and consequently solved) by taking into account all of the parameters on which it depends...



Tuesday 9 September 2014

Shut up and calculate* ... or converse before speculating?

(A message from) the last of the pioneers of particle colliders

... I may be the last still around of the first generation of pioneers that brought colliding beam machines to reality.  I have been personally involved in building and using such machines since 1957 when I became part of the very small group that started to build the first of the colliders.   While the decisions on what to do next belong to the younger generation, the perspective of one of the old guys might be useful.  I see too little effort going into long range accelerator R&D, and too little interaction of the three communities needed to choose the next step, the theorists, the experimenters, and the accelerator people.  Without some transformational developments to reduce the cost of the machines of the future, there is a danger that we will price ourselves out of the market.
Burton Richter (Stanford University and SLAC National Accelerator Laboratory)
Wed, 3 Sep 2014

High-energy colliders may not reach heaven (and what about high-luminosity ones?)
In early 2015 the LHC will begin operations again at about 13 TeV compared to the 8-TeV operations before its recent shutdown for upgrading. 
The LHC itself is an evolving machine.  Its energy at its restart next year will be 13 TeV, slowly creeping up to its design energy of 14 TeV.  It will shut down in 2018 for some upgrades to detectors, and shut down again in 2022 to increase the luminosity.  It is this high-luminosity version (HL-LHC) that has to be compared to the potential of new facilities.  There has been some talk of doubling the energy of the LHC (HE-LHC) by replacing the 8-tesla magnets of the present machine with 16-tesla magnets, which would be relatively easy compared to the even more talked about bolder step to 100 TeV for the next project.  It is not clear to me why a 30-TeV LHC excites so little interest, but that is the case.  
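A reminder of why the magnets matter: for a ring of fixed bending radius, the beam energy scales linearly with the dipole field, via the usual magnetic rigidity relation

$$p\,[\mathrm{GeV}/c] \;\simeq\; 0.3\, B\,[\mathrm{T}]\, \rho\,[\mathrm{m}],$$

so 16-tesla dipoles in the existing tunnel would roughly double the LHC beam energy, while 100 TeV collisions call for both stronger magnets and a much longer tunnel.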
A large fraction of the 100 TeV talk (wishes?) comes from the theoretical community which is disappointed at only finding the Higgs boson at LHC and is looking for something that will be real evidence for what is actually beyond the standard model. Regrettably, there has been little talk so far among the three communities, experimenters, theorists, and accelerator scientists, on what constraints on the next generation are imposed by the requirement that the experiments actually produce analyzable data... 
The most important choice for a new, higher energy collider is its luminosity, which determines its discovery potential.  If a new facility is to have the same potential for discovery of any kind of new particles as had the old one, the new luminosity required is very roughly proportional to the square of the energy because cross sections typically drop as E^-2.  A seven-fold increase in energy from that of HL-LHC to a 100-TeV collider therefore requires a fifty-fold increase in luminosity.  If the luminosity is not increased, save money by building a lower-energy machine where the discovery potential matches the luminosity.
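Richter's factor of fifty is just the square of the energy ratio,

$$\left(\frac{100\ \mathrm{TeV}}{14\ \mathrm{TeV}}\right)^{2} \simeq 7.1^{2} \simeq 51,$$

hence the requirement of roughly fifty times the HL-LHC luminosity for the same discovery reach.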

String theorists' ideas on physics might be popularized only in science fiction magazines ;-)
If you have seen the movie Particle Fever about the discovery of the Higgs boson, you have heard the theorists saying that the only choices today are between Super-symmetry and the Landscape.  Don’t believe them.  Super-symmetry says that every fermion has a boson partner and vice versa.  That potentially introduces a huge number of new arbitrary constants which does not seem like much progress to me.  However, in its simpler variants the number of new constants is small and a problem at high energy is solved.  But, experiments at the LHC already seem to have ruled out the simplest variants.    
The Landscape surrenders to perpetual ignorance.  It says that our universe is only one of a near infinity of disconnected universes, each with its own random collection of force strengths and constants, and we can never observe or communicate with the others.  We can never go further in understanding because there is no natural law that relates the different universes.  The old dream of deriving everything from one constant and one equation is dead.  There are two problems with the landscape idea.  The first is a logical one.  You cannot prove a negative, so you cannot say that there is no more to learn.  The second is practical.  If it is all random there is no point in funding theorists, experimenters, or accelerator builders.  We don’t have to wait until we are priced out of the market, there is no reason to go on.
There is a problem here that is new, caused by the ever-increasing mathematical complexity of today’s theory.  When I received my PhD in the 1950s it was possible for an experimenter to know enough theory to do her/his own calculations and to understand much of what the theorists were doing, thereby being able to choose what was most important to work on.  Today it is nearly impossible for an experimenter to do what many of yesterday’s experimenters could do, build apparatus while doing their own calculations on the significance of what they were working on.  Nonetheless, it is necessary for experimenters and accelerator physicists to have some understanding of where theory is, and where it is going.  Not to do so makes most of us nothing but technicians for the theorists.  Perhaps only the theory phenomenologists should be allowed to publish in general readership journals or to comment in movies. 
Id.

*A propos ... 

Tuesday 26 August 2014

Can some order be put into the process of selecting physical theories?

From observational consistency to mathematical consistency...
My first point is that the conditions of theory choice should be ordered. Frequently we see the listing of criteria for theory choice given in a flat manner, where one is not given precedence over the other a priori. We see consilience, simplicity, falsifiability, naturalness, consistency, economy, all together in an unordered list of factors when judging a theory. However, consistency must take precedence over any other factors. Observational consistency is obviously central to everyone, most especially our experimental colleagues, when judging the relevance of theory for describing nature. Despite some subtleties that can be present with regards to observational consistency (There can be circumstances where a theory is observationally consistent in a vast number of observables, but in a few it does not get right, yet no other decent theory is around to replace it. In other words, observational consistency is still the top criterion, but the best theory may not be 100% consistent.) it is a criterion that all would say is at the top of the list.
Mathematical consistency, on the other hand, is not as fully appreciated... Mathematical consistency has a preeminent role right up there with observational consistency, and can be just as subtle, time-consuming and difficult to establish. We have seen that in the case of effective theories it trumps other theory choice considerations such as simpleness, predictivity, testability, etc. 
My second point builds on the first. Since consistency is preeminent, it must have highest priority of establishment compared to other conditions. Deep, thoughtful reflection and work to establish the underlying self-consistency of a theory takes precedence over finding ways to make it more natural or to have less parameters (i.e., simple). Highest priority must equally go into understanding all of its observational implications. A theory should not be able to get away with being fuzzy on either of these two counts, before the higher order issues of simplicity and naturalness and economy take center stage. That this effort might take considerable time and effort should not be correlated with a theory’s value, just as it is not a theory’s fault if it takes humans decades to build a collider to sufficiently high energy and luminosity to test it. 
Additionally, dedicated effort on mathematical consistency of the theory, or class of theories, can have enormous payoffs in helping us understand and interpret the implications of various theory proposals and data in broad terms. An excellent example of that in recent years is by Adams et al. [15], who showed that some theories in the infrared with a cutoff cannot be self-consistently embedded in an ultraviolet complete theory without violating standard assumptions regarding superluminality or causality. The temptation can be high to start manipulating uninteresting theories into simpler and more beautiful versions before due diligence is applied to determine if they are sick at their cores. This should not be rewarded... 
Finally, I would like to make a comment about the implications of this discussion for the LHC and other colliders that may come in the future...  
In the years since the charm quark was discovered in the mid 1970’s there has been tremendous progress experimentally and important new discoveries, including the recent discovery of a Higgs boson-like state [20], but no dramatic new discovery that can put us on a straight and narrow path beyond the SM. That may change soon at the LHC. Nevertheless, it is expensive in time and money to build higher energy colliders, our main reliable transporter into the high energy frontier. This limits the prospects for fast experimental progress. 
In the meantime though, hundreds of theories have been born and have died. Some have died due to incompatibility with new data (e.g., simplistic technicolor theories, or simpleminded no-scale supersymmetry theories), but others have died under their own self-consistency problems (e.g., some extra-dimensional models, some string phenomenology models, etc.). In both cases, it was care in establishing consistency with past data and mathematical rigor that have doomed them. In that sense, progress is made. Models come to the fore and fall under the spotlight or survive. When attempting to really explain everything, the consistency issues are stretched to the maximum. For example, it is not fully appreciated in the supersymmetry community that it may even be difficult to find a “natural” supersymmetric model that has a high enough reheat temperature to enable baryogenesis without causing problems elsewhere [21a, 21b]. There are many examples of ideas falling apart when they are pushed very hard to stand up to the full body of evidence of what we already know. 
Relatively speaking, theoretical research is inexpensive. It is natural that a shift develop in fundamental science. The code of values in theoretical research will likely alter in time, as experimental input slows. Ideas will be pursued more rigorously and analysed critically. Great ideas will always be welcome. However, soft model building tweaks for simplicity and naturalness will become less valuable than rigorous tests of mathematical consistency. Distant future experimental implications identified for theories not fully vetted will become less valuable than rigorous computations of observational consistency across the board of all currently known data. One can hope that unsparing devotion to full consistency, both observational and mathematical, will be the hallmarks of the future era.

James D. Wells (Submitted on 3 Nov 2012)