Saturday, 31 January 2015

First, a (too) spectacular claim, [afterwards] a spectacularly {statistically} insignificant result!

First detection of inflationary gravitational waves probably did not occur in 2014

At the recombination epoch, the inflationary gravitational waves (IGW) contribute to the anisotropy of the CMB in both total intensity and linear polarization. The amplitude of tensors is conventionally parameterized by r, the tensor-to-scalar ratio at a fiducial scale. Theoretical predictions of the value of r cover a very wide range. Conversely, a measurement of r can discriminate between models of inflation. Tensor modes produce a small increment in the temperature anisotropy power spectrum over the standard [cosmological model] ΛCDM scalar perturbations at multipoles ℓ ≲ 60; measuring this increment requires the large sky coverage traditionally achieved by space-based experiments, and an understanding of the other cosmological parameters. The effect of tensor perturbations on B-mode polarization is less ambiguous than on temperature or E-mode polarization over the range ℓ ≲ 150...
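[A reminder of the convention, my aside rather than the paper's: in LaTeX notation, r \equiv \mathcal{P}_t(k_*)/\mathcal{P}_s(k_*), the ratio of the primordial tensor and scalar power spectra at a pivot scale k_*, often taken to be 0.05 Mpc^{-1} in this kind of analysis.]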
Interstellar dust grains produce thermal emission, the brightness of which increases rapidly from the 100–150 GHz frequencies favored for CMB observations, becoming dominant at ≥ 350 GHz even at high galactic latitude. The dust grains align with the Galactic magnetic field to produce emission with a degree of linear polarization [16]. The observed degree of polarization depends on the structure of the Galactic magnetic field along the line of sight, as well as the properties of the dust grains (see for example Refs. [17, 18]). This polarized dust emission results in both E-mode and B-mode polarization, and acts as a potential contaminant to a measurement of r. Galactic dust polarization was detected by Archeops [19] at 353 GHz and by WMAP [2, 20] at 90 GHz.
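(A back-of-the-envelope aside of mine, not part of the quoted paper: the steepness of this frequency dependence can be illustrated with a one-component modified-blackbody model of the dust emission. The sketch below assumes a spectral index β_d ≈ 1.59 and a dust temperature T_d ≈ 19.6 K, values close to those reported by Planck, and ignores bandpass integration.)

# Sketch: frequency scaling of polarized dust emission between 353 and 150 GHz,
# assuming a one-component modified blackbody (greybody) in CMB temperature units.
import numpy as np

h = 6.626e-34    # Planck constant [J s]
kB = 1.381e-23   # Boltzmann constant [J/K]
T_cmb = 2.7255   # CMB temperature [K]

def planck_B(nu, T):
    """Planck function B_nu(T), up to constants that cancel in ratios."""
    x = h * nu / (kB * T)
    return nu**3 / np.expm1(x)

def dB_dT(nu, T=T_cmb):
    """dB_nu/dT at the CMB temperature; converts intensity to thermodynamic units."""
    x = h * nu / (kB * T)
    return nu**4 * np.exp(x) / np.expm1(x)**2

def dust_amplitude_ratio(nu1, nu2, beta_d=1.59, T_d=19.6):
    """Dust brightness ratio nu1/nu2 in CMB thermodynamic (temperature) units."""
    greybody = (nu1 / nu2)**beta_d * planck_B(nu1, T_d) / planck_B(nu2, T_d)
    return greybody * dB_dT(nu2) / dB_dT(nu1)

r_amp = dust_amplitude_ratio(150e9, 353e9)
print(f"dust amplitude ratio 150/353 GHz ~ {r_amp:.3f}")          # ~0.04-0.05
print(f"dust BB power ratio (amplitude squared) ~ {r_amp**2:.4f}")

With these assumptions the polarized dust amplitude drops by a factor of roughly twenty between 353 and 150 GHz in CMB units, i.e. by a factor of several hundred in power, which is why the Planck 353 GHz channel is such a sensitive dust monitor for the BICEP2 field.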
BICEP2 was a specialized, low angular resolution experiment, which operated from the South Pole from 2010 to 2012, concentrating 150 GHz sensitivity comparable to Planck on a roughly 1 % patch of sky at high Galactic latitude [21]. The BICEP2 Collaboration published a highly significant detection of B-mode polarization in excess of the r=0 lensed-ΛCDM expectation over the range 30 < l<150 in Ref. [22...]. Modest evidence against a thermal Galactic dust component dominating the observed signal was presented based on the cross-spectrum against 100 GHz maps from the previous BICEP1 experiment. The detected B-mode level was higher than that projected by several existing dust models [23, 24] although these did not claim any high degree of reliability.  
The Planck survey released information on the structure of the dust polarization sky at intermediate latitudes [25], and the frequency dependence of the polarized dust emission at frequencies relevant to CMB studies [26]. Other papers argued that the BICEP2 region is significantly contaminated by dust [27, 28]. Finally Planck released information on dust polarization at high latitude [29, hereafter PIP-XXX], and in particular examined a field centered on the BICEP2 region (but somewhat larger than it) finding a level of polarized dust emission at 353 GHz sufficient to explain the 150 GHz excess observed by BICEP2, although with relatively low signal-to-noise. [...] 
In this paper, we take cross-spectra between the joint BICEP2/Keck maps and all the polarized bands of Planck. [...]


Upper: BB spectrum of the BICEP2/Keck maps before and after subtraction of the dust contribution, estimated from the cross-spectrum with Planck 353 GHz. The error bars are the standard deviations of simulations, which, in the latter case, have been scaled and combined in the same way. The inner error bars are from lensed-ΛCDM+noise simulations as in the previous plots, while the outer error bars are from the lensed-ΛCDM+noise+dust simulations. Lower: constraint on r derived from the cleaned spectrum compared to the fiducial analysis shown in Figure 6.


[...] The r constraint curve peaks at r = 0.05 but disfavors zero only by a factor of 2.5. This is expected by chance 8% of the time, as confirmed in simulations of a dust-only model. We emphasize that this significance is too low to be interpreted as a detection of primordial B-modes. [...] 
In order to further constrain or detect IGW, additional data are required. The Planck Collaboration may be able to make progress alone using the large angular scale “reionization bump,” if systematics can be appropriately controlled [50]. To take small patch “recombination bump” studies of the type pursued here to the next level, data with signal-to-noise comparable to that achieved by BICEP2/Keck at 150 GHz are required at more than one frequency... During the 2014 season, two of the Keck Array receivers observed in the 95 GHz band and these data are under active analysis. BICEP3 will add substantial additional sensitivity at 95 GHz in the 2015, and especially 2016, seasons. Meanwhile many other ground-based and sub-orbital experiments are making measurements at a variety of frequencies and sky coverage fractions.
A Joint Analysis of BICEP2/Keck Array and Planck Data, BICEP2/Keck and Planck Collaborations
30 January 2015
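To get an intuitive feel for the numbers quoted above (a rough Gaussian translation of my own, not a statistic used by the collaborations), the factor of 2.5 in likelihood and the 8% chance probability both correspond to something like 1.4 standard deviations:

# Back-of-the-envelope conversion of the quoted likelihood ratio and chance
# probability into equivalent Gaussian "sigmas" (Gaussian approximation only).
import math

L_ratio = 2.5                                  # L(r = 0.05) / L(r = 0), as quoted
print(math.sqrt(2 * math.log(L_ratio)))        # ~1.35 sigma

p_chance = 0.08                                # probability of a dust-only sky doing this
z = 0.0
while 0.5 * math.erfc(z / math.sqrt(2)) > p_chance:   # one-sided Gaussian tail
    z += 0.001
print(z)                                       # ~1.4 sigma

Either way of looking at it, the preference for non-zero r is well below the conventional discovery threshold, which is the point the authors emphasize.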




Thursday, 22 January 2015

2015: The Planck(-Bronstein) mass [concept] is more than 100 {actually only 80} years old

History of science can teach us something
Here is a long excerpt from a text by the researcher in philosophy and history of science Gennady Gorelik (available online) which illustrates the statement in the title:

Planck introduced his cGh values in 1899, without any connection to quantum gravity. Quantum limits to the applicability of general relativity (and, implicitly, their Planck scale) were first discovered in 1935 by the Soviet theorist Matvey P. Bronstein (1906-1938). It was not until the 1950s that the explicitly quantum-gravitational significance of the Planck values was pointed out almost simultaneously by several physicists. [...] In the fifth installment of his continuing study of irreversible radiation processes (Planck 1899), Max Planck introduced two new universal physical constants, a and b, and calculated their values from experimental data. The following year, he redesignated the constant b by the famous letter h (and in place of a, he introduced k = b/a, the Boltzmann constant).
In 1899, the constant b (that is, h) did not yet have any quantum theoretical significance, having been introduced merely in order to derive Wien's formula for the energy distribution in the black-body spectrum. However, Planck had previously described this constant as universal. During the six years of his efforts to solve the problem of the equilibrium between matter and radiation, he clearly understood the fundamental, universal character of the sought-for spectral distribution. 
It was perhaps this universal character of the new constant that stimulated Planck, in that same paper of 1899, to consider a question that was not directly connected with the paper's main theme. The last section of the paper is entitled "Natural Units of Measure" ["Natürliche Maasseinheiten"]. Planck noted that in all ordinary systems of units, the choice of the basic units is made not from a general point of view "necessary for all places and times," but is determined solely by "the special needs of our terrestrial culture" (Planck 1899, p. 479). Then, basing himself upon the new constant h and also upon c and G, Planck suggested the establishment of

"units of length, mass, time, and temperature that would, independently of special bodies and substances, necessarily retain their significance for all times and all cultures, even extraterrestrial and extrahuman ones, and which may therefore be designated as natural units of measure." (Planck 1899, pp. 479-480)
[...]The quantum-gravitational meaning of the Planck values could be revealed only after a relativistic theory of gravitation had been developed. As soon as that was done, Einstein pointed out the necessity of unifying the new theory of gravitation with quantum theory. In 1916, having obtained the formula for the intensity of gravitational waves, he remarked:
"Because of the intra-atomic movement of electrons, the atom must radiate not only electromagnetic but also gravitational energy, if only in minute amounts. Since, in reality, this cannot be the case in nature, then it appears that the quantum theory must modify not only Maxwell's electrodynamics but also the new theory of gravitation." (Einstein 1916, p. 696).
For two decades after Einstein pointed out the necessity of a quantum-gravitational theory in 1916, only a few remarks about this subject appeared. There were too many other more pressing theoretical problems (including quantum mechanics, quantum electrodynamics, and nuclear theory). And, the remarks that were made were too superficial, which is to say that they assumed too strong an analogy between gravity and electromagnetism. For example, after discussing a general scheme for field quantization in their famous 1929 paper, Heisenberg and Pauli wrote:
"One should mention that a quantization of the gravitational field, which appears to be necessary for physical reasons, may be carried out without any new difficulties by means of a formalism wholly analogous to that applied here. (Heisenberg and Pauli 1929, p. 3)"
They grounded the necessity of a quantum theory of gravitation on Einstein's mentioned remark of 1916 and on Oskar Klein's remarks in an article of 1927 in which he pointed out the necessity of a unified description of gravitational and electromagnetic waves, one taking into account Planck's constant h. 
Heisenberg and Pauli obviously intended that quantization techniques be applied to the linearized equations of the (weak) gravitational field (obtained by Einstein in 1916). Being clearly approximative, this approach allows one to hope for an analogy with electromagnetism, but it also allows one to disregard some of the distinguishing properties of gravitation—its geometrical essence and its nonlinearity. Just such an approach was employed by Leon Rosenfeld, who considered a system of quantized electromagnetic and weak gravitational fields (Rosenfeld 1930), studying the mutual transformations of light and "gravitational quanta" (a term that he was the first to use). 
The first really profound investigation of the quantization of the gravitational field was undertaken by Matvey P. Bronstein. The essential results of his 1935 dissertation, entitled "The Quantization of Gravitational Waves," were contained in two papers published in 1936. The dissertation was mainly devoted to the case of the weak gravitational field, where it is possible to ignore the geometrical character of gravitation, that is, the curvature of space-time. However, Bronstein's work also contained an important analysis revealing the essential difference between quantum electrodynamics and a quantum theory of gravity not thus restricted to weak fields and "nongeometricness." This analysis demonstrated that the ordinary scheme of quantum field theory and the ordinary concepts of Riemannian geometry are not sufficient for the formulation of a consistent theory of quantum gravity. At the same time, Bronstein's analysis led to the limits of quantum-gravitational physics (and to Planck's cGh-values). [...] 
For two decades after Bronstein's work, there was calm in the field of quantum gravity. Only in the mid-1950s did the length l0 = (Gh/c^3)^1/2 appear almost simultaneously in a few different forms in a few papers. For example, in 1954, Landau pointed out that the length l = G^1/2 h/(ce) (= α^-1/2 l0, very near to the Planck length) is "the limit of the region outside of which quantum electrodynamics cannot be considered as a self-consistent theory because of the necessity of taking into account gravitational interactions" (Gm^2/r ~ e^2/r, when m ~ p/c ~ h/(lc)) (Abrikosov, Landau, Khalatnikov 1954). [...]
The term "Planck values," which is now generally accepted, was introduced later (Misner and Wheeler 1957). According to Wheeler, he did not know in 1955 about Planck's "natural units" (private communication).
A history of the Planck values provides interesting material for reflections on timely and premature discoveries in the history of science. Today, the Planck values are more a part of physics itself than of its history. They are mentioned in connection with the cosmology of the early universe as well as in connection with particle physics. In considering certain problems associated with a unified theory (including the question of the stability of the proton), theorists discovered a characteristic mass ~ 10^16 mp (mp is the proton mass). To ground such a great value, one first refers to the still greater mass 10^19 mp. In the words of Steven Weinberg:
"This is known as the Planck mass, after Max Planck, who noted in 1900 that some such mass would appear naturally in any attempt to combine his quantum theory with the theory of gravitation. The Planck mass is roughly the energy at which the gravitational force between particles becomes stronger than the electroweak or the strong forces. In order to avoid an inconsistency between quantum mechanics and general relativity, some new features must enter physics at some energy at or below 1019 proton masses." (Weinberg 1981, p. 71).
The fact that Weinberg takes such liberties with history in this quotation is evidence of the need to describe the real historical circumstances in which the Planck mass arose. As we saw, when Planck introduced the mass (ch/G)^1/2 (~ 10^19 mp) in 1899, he did not intend to combine the theory of gravitation with quantum theory; he did not even suppose that his new constant would result in a new physical theory. The first "attempt to combine the quantum theory with the theory of gravitation," which demonstrated that "in order to avoid an inconsistency between quantum mechanics and general relativity, some new features must enter physics," was made by Bronstein in 1935. That the Planck mass may be regarded as a quantum-gravitational scale was pointed out explicitly by Klein and Wheeler twenty years later. At the same time, Landau also noted that the Planck energy (mass) corresponds to an equality of gravitational and electromagnetic interactions.
by Gennady Gorelik (1992)
Studies in the history of general relativity. [Einstein Studies. Vol.3].
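Just for fun, a numerical aside (mine, not Gorelik's): the "natural units" can be recomputed in a few lines from c, G and ħ. Note that Planck's own 1899 numbers were built with h rather than ħ, so they differ from the modern convention by factors of √(2π).

# Planck's natural units from c, G and hbar (modern convention).
import math

c    = 2.99792458e8     # speed of light [m/s]
G    = 6.674e-11        # Newton constant [m^3 kg^-1 s^-2]
hbar = 1.054571817e-34  # reduced Planck constant [J s]
m_p  = 1.67262192e-27   # proton mass [kg], for comparison with the excerpt above

l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

print(f"Planck length ~ {l_planck:.2e} m")
print(f"Planck time   ~ {t_planck:.2e} s")
print(f"Planck mass   ~ {m_planck:.2e} kg ~ {m_planck / m_p:.1e} proton masses")

The last line reproduces the ~10^19 proton masses mentioned in the excerpt.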


To learn more about Matvei Bronstein (an Ettore Majorana who came in from the cold, so to speak) ...

Monday, 19 January 2015

What does the effective number of neutrinos really measure in the standard cosmological model, also known as the concordance model?

A blogger's answer and experts' comments
Yet another post about neutrinos! Yes, it is starting to add up. We talked about them in the previous post, and longer ago here and there - but what can you do, dear readers: they are, to this day, the only detectable particles whose properties allow us to probe physics beyond the Standard Model, in short the only reliable messengers bearing news of (new) physics that we can sink our teeth into (for the moment, of course). So it is always worth coming back to them, especially when reading a specialized blog post (and its comments), like the one below, lets us clearly identify the implicit hypotheses, not always made very explicit, of the models used to explain certain experimental data:
One interesting result [from the Planck satellite] is the new improved constraint on the effective number of neutrinos, Neff in short. The way this result is presented may be confusing. We know perfectly well there are exactly 3 light active (interacting via weak force) neutrinos; this has been established in the 90s at the LEP collider, and Planck has little to add in this respect. Heavy neutrinos, whether active or sterile, would not show in this measurement at all. For light sterile neutrinos, Neff implies an upper bound on the mixing angle with the active ones. The real importance of Neff lies in that it counts any light particles (other than photons) contributing to the energy density of the universe at the time of CMB decoupling. Outside the standard model neutrinos, other theorized particles could contribute any real positive number to Neff, depending on their temperature and spin. A few years ago there have been consistent hints of Neff much larger than 3, which would imply physics beyond the standard model. Alas, Planck has shot down these claims. The latest number combining Planck and Baryon Acoustic Oscillations is Neff = 3.04 ± 0.18, spot on the 3.046 expected from the standard model neutrinos. This represents an important constraint on any new physics model with very light (less than eV) particles.
Jester, Blog Résonaances,  Saturday, 13 December 2014
Anonymous comment:
Note that the standard value of N_eff = 3.046 for 3 active neutrinos relies on several assumptions:
*) 3 active neutrinos in the Standard Model
*) No partly thermalised light species
*) Reheating happened above ~4 MeV
*) No entropy production between 1 MeV and today
*) No cooling of photons between 1 MeV and today (e.g. through Dark Sector mixing)
Only the first assumption was verified by LEP, so there was plenty of room for N_eff to be different from 3.046.

Jester reply:
I completely agree, maybe except that "Reheating happened above ~4 MeV" is independently confirmed by nucleosynthesis. I didn't mean that Neff is not useful, on the contrary. I meant that it is often presented as a measurement of the number of neutrinos, which may be misleading.
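To make concrete the point that a new species can contribute "any real positive number" to Neff, here is a minimal sketch (my own, using the standard convention) of the contribution of one extra light species as a function of its statistics and of its temperature T_x relative to the standard neutrino temperature T_nu:

# Delta Neff for one extra light species:
#   Delta Neff = (g/2) * (T_x / T_nu)^4 for a fermion, times 8/7 for a boson,
# where g is the number of internal degrees of freedom.
def delta_neff(g, T_ratio, boson=False):
    dn = 0.5 * g * T_ratio**4
    return dn * (8.0 / 7.0 if boson else 1.0)

# A fully thermalized extra neutrino-like fermion (g=2, T_x = T_nu): Delta Neff = 1
print(delta_neff(g=2, T_ratio=1.0))
# A thermal boson with g=1 that decoupled early and is colder, say T_x = 0.5 T_nu
print(delta_neff(g=1, T_ratio=0.5, boson=True))   # ~0.036

A fully thermalized sterile neutrino would add one full unit to Neff, while a colder or only partly thermalized species adds a fraction of a unit, which is why the measured Neff constrains mixing angles and decoupling temperatures rather than simply counting particles.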

While waiting for the hare to return, let us keep listening to what the tortoise has to tell us
In other words, let us patiently deepen our knowledge of the role of neutrinos in the astrophysics of the early universe, as seen with our modern spectro-telescopes that collect and analyse photons which have travelled for more than ten billion years ... before particle physicists restart their giant accelerator, hoping to quickly find answers to their questions in the collision-disintegration of a few quark-antiquark or gluon-gluon pairs within a fraction of a second (or almost).
Physics Beyond the Standard Models (BSMs), i.e. beyond the Electro-Weak Model and beyond the Standard Cosmological Model (... also called the Λ Cold Dark Matter model), is required for the explanation of the astrophysical and cosmological observational data. Namely, the contemporary SCM contains considerable BSM components, the so-called dark energy (DE) and dark matter (DM), both with yet unrevealed nature, alas. These constitute 96% of the universe's matter today, and play a considerable role at the matter dominated epoch, i.e. at later stages of the Universe's evolution!
BSMs physics is needed also for revealing the nature and the characteristics of the inflaton (the particle/field responsible for inflationary expansion stage) and CP-violation (CPV) or/and B[aryon number]-violation (BV) mechanisms. These are expected necessary ingredients in the theories of inflation and baryon asymmetry generation, which are the most widely accepted today hypotheses providing natural explanations of numerous intriguing observational characteristics of our universe. 
The inflationary theory explains naturally and elegantly the initial conditions of the universe in the pre-Friedmann epoch, namely: the extraordinary homogeneity and isotropy at large scales of the universe at its present epoch; its unique isotropy at the Cosmic Microwave Background (CMB) formation epoch (when the universe was ∼ 380000 years old); its unique flatness and the pattern of structures it has. Besides, the inflationary early stage explains the lack of topological defects in the universe. While the baryon asymmetry generation models explain the locally observed matter-antimatter asymmetry of the universe. 
...we have already been the lucky witnesses of the experimental establishment of BSM physics in the neutrino sector. Experimental data on neutrino oscillations have firmly determined three neutrino mixing angles and three mass differences, corresponding to the existence of at least two non-zero neutrino masses. The concrete neutrino mass pattern and the possible CPV mechanism are to be determined in the near future. Thus, the neutrino experimental data ruled out the Standard Model's assumptions of zero neutrino masses and mixing and of flavor lepton number (L) conservation. Cosmology provides complementary knowledge about neutrinos and BSM physics in the neutrino sector, because neutrinos had a considerable influence on the processes during different epochs of the universe evolution. At the hot early universe stage, the radiation dominated (RD) stage, light neutrinos were essential ingredients of the universe density, determining the dynamics of the universe.
Neutrinos also played an essential role in different processes, for example Big Bang nucleosynthesis (BBN). In particular, electron neutrinos participated in the pre-BBN neutron-proton transitions that took place during the first seconds, and in the freezing of the nucleons, and thus they considerably influenced the primordial production of the light elements (BBN) during the first minutes of the universe. Hence, BBN is very sensitive to the number of light neutrino types, neutrino characteristics, neutrino chemical potentials, the possible presence of sterile neutrinos, etc. BBN is capable of differentiating between neutrino flavors, because νe participates in the proton-neutron transitions in the pre-BBN epoch, essential for the yields of the primordially produced elements, while νµ and ντ exert no kinetic effect on BBN.
At later stages of the universe evolution (T < eV), relic neutrinos, contributing to the matter density, influenced the CMB anisotropies and played a role in the formation of galaxies and their structures. The CMB and the Large Scale Structure (LSS) are sensitive to the total neutrino density and provide information about the neutrino masses and the number of neutrino species. Hence, although the relic neutrinos, called the Cosmic Neutrino Background (CNB), are not yet directly detected, strong observational evidence for the CNB and stringent cosmological constraints on relic neutrino characteristics exist from BBN, CMB and LSS data. In particular, the determinations of light element abundances and BBN theory predictions are used to put stringent constraints on neutrino characteristics (the effective number of relativistic particles, lepton asymmetry, sterile neutrino characteristics, neutrino mass differences and mixings), while CMB and LSS data provide constraints on neutrino masses and the neutrino number density corresponding to the CMB and LSS formation epochs.
Daniela Kirilova (Submitted on 7 Jul 2014)

Thursday, 4 December 2014

From one Christmas to the next, the effective number of neutrinos gets closer to the number of the Three Wise Men

Christmas 2012
Just before Christmas [2012], the WMAP collaboration posted the 9-year update of their Cosmic Microwave Background [CMB] results...
The effective number of relativistic degrees of freedom at the time of CMB decoupling, the so-called Neff parameter, is now Neff = 3.26 ± 0.35 3.84 ± 0.40 [the first value was struck through in the original post and replaced by the second after the WMAP9 analysis was revised], compared to Neff = 4.34 ± 0.87 quoted in the 7-year analysis. For the fans and groupies of this observable it was like finding a lump of coal under the christmas tree...

So, what is this mysterious Neff parameter? According to the standard cosmological model, at the temperatures above 10 000 Kelvin the energy density of the universe was dominated by a plasma made of neutrinos (40%) and photons (60%). The photons today make the CMB about which we know everything. The neutrinos should also be around, but for the moment we cannot study them directly. However we can indirectly infer their presence in the early universe via other observables. First of all, the neutrinos affect the energy density stored in radiation... which controls the expansion of the Universe during the epoch of radiation domination. The standard model predicts Neff equal to the number of known neutrinos species, that is Neff=3 (in reality 3.05, due to finite temperature and decoupling effects). Thus, by measuring how quickly the early Universe was expanding, we can determine Neff. If we find Neff≈3 we confirm the standard model and close the store. On the other hand, if we measured that Neff is significantly larger than 3, that would mean a discovery of additional light degrees of freedom in the early plasma that are unaccounted for in the standard model. Note that these new hypothetical particles don't have to be similar to neutrinos, in particular they could be bosons, and/or have a different temperature (in which case they would correspond to non-integer increase of Neff). All that is required from them is that they are weakly interacting and light enough to be relativistic at the time of CMB decoupling. Theorists have dreamed up many viable candidates that could show up in Neff : additional light neutrinos species, axions, dark photons, etc... 
The interest of particle physicists in Neff comes from the fact that, until recently, the CMB data also pointed at Neff≈4 with a comparable error. The impact of Neff on the CMB is much more contrived, and there are many separate effects one needs to take into account. For example, larger Neff delays the moment of matter-radiation equality, which affects the relative strength and positions of the peaks. Furthermore, Neff affects how the perturbations grow during the radiation era, which may show up in the CMB spectrum at l ≥ 100. Finally, the larger Neff, the larger is the effect of Silk damping at l ≥ 1000. Each single observable has a large degeneracy with other input parameters (matter density, Hubble constant, etc.) but, once the CMB spectrum is measured over a large range of angular scales, these degeneracies are broken and stringent constraints on Neff can be derived. That is what happened recently, thanks to the high-l CMB measurements from the ACT and SPT telescopes, and some input from other astrophysical observations. The net result [Neff = 3.84 ± 0.40] ... using [the CMB data] in addition [to] an input from Baryon Acoustic Oscillations and Hubble constant measurements... can be well accounted for by the three boring neutrinos of the standard model.
Jester, Friday, 18 January 2013
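A one-line check of the 40%/60% neutrino/photon split quoted above, using the standard relation between the relativistic neutrino and photon energy densities:

# rho_nu / rho_gamma = Neff * (7/8) * (4/11)^(4/3) for relativistic neutrinos
Neff = 3.046
ratio = Neff * (7.0 / 8.0) * (4.0 / 11.0)**(4.0 / 3.0)
f_nu = ratio / (1.0 + ratio)
print(f"rho_nu / rho_gamma = {ratio:.3f}")
print(f"neutrino fraction of radiation ~ {f_nu:.0%}, photons ~ {1 - f_nu:.0%}")
# ~41% neutrinos and ~59% photons, i.e. the "40% / 60%" quoted above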
Christmas 2014
The new results from the Planck collaboration also concern another type of very elusive particle: neutrinos. These "ghost" elementary particles, produced in abundance in the Sun for example, pass through our planet with practically no interaction, which makes their detection extremely difficult. It is therefore not conceivable to directly detect the first neutrinos, produced less than a second after the Big Bang, which carry extremely little energy. Yet, for the first time, Planck has unambiguously detected the effect of these primordial neutrinos on the map of the cosmic microwave background.

The primordial neutrinos detected by Planck were released about one second after the Big Bang, when the universe was still opaque to light but already transparent to these particles, which can escape freely from a medium that is opaque to photons, such as the core of the Sun. 380,000 years later, when the light of the cosmic microwave background was released, it carried the imprint of the neutrinos, because the photons had interacted gravitationally with these particles. Observing the oldest photons thus made it possible to check the properties of the neutrinos.
PRELIMINARY - Constraints on, and relation between, the number of neutrino species Neff, the present expansion rate of the universe H0, and the parameter σ8 which characterizes the clustering of matter on large scales. The coloured points correspond to the constraints from temperature + gravitational lensing only; the black contours are obtained by adding the polarization at all large angular scales and the baryon acoustic oscillations. The vertical lines correspond to the value of Neff predicted by various models: the solid line corresponds to the standard model, the dashed lines to models with a fourth neutrino species (depending on the type of neutrino, active or sterile, and on the epoch of its decoupling). © ESA - Planck collaboration
Planck's observations are consistent with the standard model of particle physics. They practically exclude the existence of a fourth family of neutrinos, previously contemplated on the basis of the final data from the WMAP satellite, Planck's American predecessor. Finally, Planck sets an upper limit on the sum of the neutrino masses, now established at 0.23 eV (electronvolt).
The full-mission data and the associated papers, which will be submitted to the journal Astronomy & Astrophysics (A&A), will be available on the ESA website from 22 December 2014. These results are notably based on measurements made with the high-frequency instrument HFI, designed and assembled under the leadership of the Institut d'astrophysique spatiale (CNRS/Université Paris-Sud) and operated under the leadership of the Institut d'astrophysique de Paris (CNRS/UPMC) by various laboratories involving CEA, CNRS and the universities, with funding from CNES and CNRS.
CNRS press release, Monday, 1 December 2014
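For scale (a standard relation, not part of the press release): the 0.23 eV upper limit on the summed neutrino masses translates into a very small fraction of today's critical density.

# Omega_nu h^2 ~ sum(m_nu) / (93.14 eV) for non-relativistic relic neutrinos
# (instantaneous-decoupling approximation); h = 0.67 is an assumed value here.
sum_mnu = 0.23                       # eV, the Planck upper limit quoted above
h = 0.67
omega_nu_h2 = sum_mnu / 93.14
print(omega_nu_h2)                   # ~0.0025
print(omega_nu_h2 / h**2)            # Omega_nu < ~0.0055, i.e. ~0.5% of critical density

So even at the upper limit, relic neutrinos would account for only about half a percent of the energy budget of the universe today.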

The first Christmas
...nowhere in the Bible is the number of these Magi "Kings" specified, and even less their names! So it remains open to interpretation depending on the author: there are only two of them on the wall decorations of the catacombs of Saint Peter, three in the catacombs of Priscilla, and four in the catacombs of Domitilla. The Syrian tradition even holds that they were twelve! ...

Yet over the centuries custom has tended to settle on three of them... Why? Quite simply because the Gospel of Matthew mentions three gifts given to Jesus: gold (a symbol of royalty: the Magi saw in Jesus Christ the future king of the Jews...), frankincense (a symbol of divinity) and myrrh (widely used in embalming rites, it symbolizes the humanity of Jesus, even if this interpretation is not unanimously accepted)...

The names Melchior, Gaspard and Balthazar appear for the first time in the sixth century AD, in an apocryphal Gospel... But it gets worse! The "Magi Kings" were in reality not kings at all! They were only magi, that is to say specialists in astronomy and divination.
The Three Wise Men were not three. Besides, they were not even kings...
Djinnzz, 16/07/2013

Wednesday, 3 December 2014

Why is the blogger Jester (a particle physicist) so (reasonably) "mean" (to his astrophysicist colleagues)?

Hypothesis 1: because he knows that any experimental evidence is probably wrong (without a proper estimate of its uncertainty) until proven otherwise (by its reproducibility)
There indeed seems to be an excess in the 2-4 GeV region. However, given the size of the error bars and of the systematic uncertainties, not to mention how badly we understand the astrophysical processes in the galactic center, one can safely say that there is nothing to be excited about for the moment. 

resonaances.blogspot.fr/2009/11/fermi-says-nothinglike-sure-sure.html 

It is well known that sigmas come in varieties: there are more significant 3 sigmas, less significant 3 sigmas, and astrophysical 3 sigmas.

http://resonaances.blogspot.fr/2011/04/another-3-sigma-from-cdf.html 

Notice that different observations of the helium abundance are not quite consistent with each other, but that's normal in astrophysics; the rule of thumb is that 3 sigma uncertainty in astrophysics is equivalent to 2 sigma in conventional physics. 

http://resonaances.blogspot.fr/2013/01/how-many-neutrinos-in-sky.html 

Although the natural reaction here is a resounding "are you kidding me", the claim is that the excess near 3.56 keV ... over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If the excess is interpreted as a signal of new physics, one compelling (though not unique) explanation is in terms of sterile neutrino dark matter. In that case, the measured energy and intensity of the line correspond to the neutrino mass 7.1 keV and the mixing angle of order 5*10^-5, see the red star in the plot. This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density.
Clearly, a lot could possibly go wrong with this kind of analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the 2-10 keV range where the search was performed is pooped with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at 3.56 keV. The results depend on whether these other emission lines are modeled properly. Moreover, the known Ar XVII dielectronic recombination line happens to be nearby at 3.62 keV. The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we can safely get excited.
 
http://resonaances.blogspot.fr/2014/02/signal-of-neutrino-dark-matter.html
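A side note on where the 7.1 keV comes from (standard sterile-neutrino dark matter lore, not spelled out in the quoted post): in the radiative decay ν_s → ν + γ the photon carries away essentially half of the sterile neutrino mass, so in LaTeX notation E_\gamma = m_s/2 \Rightarrow m_s = 2 \times 3.56\ \mathrm{keV} \simeq 7.1\ \mathrm{keV}.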



Hypothesis 2: because he is a little weary of not finding in his laboratory the new physics (that others already see in their observatories)
There is no evidence of new physics from accelerator experiments (except, perhaps, for the 3-3.5 σ discrepancy of the muon (g-2) [refs. 7a, 7b, 8]). Most of the experimental evidence for new physics comes from the sky, like for Dark Energy, Dark Matter, baryogenesis and also neutrino oscillations (which were first observed in solar and atmospheric neutrinos). One expected new physics at the electroweak scale based on a "natural" solution of the hierarchy problem [ref. 4]. The absence so far of new physics signals casts doubts on the relevance of our concept of naturalness.
(Submitted on 8 Jul 2014 (v1), last revised 17 Jul 2014 (this version, v2))



Tuesday, 25 November 2014

Saving physicist John Bell (from the clutches of a polemicist blogger)

A personal reaction to a post by Lubos Motl
J. Bell: a mediocre physicist? Are you talking about the same guy who discovered, with R. Jackiw (and independently S. Adler), the chiral anomaly, such an important phenomenon in quantum field theory? I can agree with all your technical arguments supporting QM against classical zealots, but the pedagogical value of your post would be undermined, in my opinion, if you did not recognize the pedagogical usefulness of Bell's theorem and if you did not distinguish between the necessarily old-fashioned conceptions or terminology used by Bell in the sixties and the loosely defined concepts of many of QM's contenders nowadays.

John Bell and his greatest contribution to physics
... John Bell codiscovered the mechanism of anomalous symmetry breaking in quantum field theory. Indeed, our paper on this subject is his (and my) most-cited work. The symmetry breaking in question is a quantum phenomenon that violates the correspondence principle; it arises from the necessary infinities of quantum field theory. Over the years it has become evident that theoretical/mathematical physicists are not the only ones to acknowledge this effect. Nature makes fundamental use of the anomaly in at least two ways: the neutral pion’s decay into two photons is controlled by the anomaly [1, 2] and elementary fermions (quarks and leptons) arrange themselves in patterns such that the anomaly cancels in those channels to which gauge bosons – photon, W, Z – couple [3]. (There are also phenomenological applications of the anomaly to collective, as opposed to fundamental, physics – for example, to edge states in the quantum Hall effect.)
R. Jackiw, November 2000

The last word goes to Richard Feynman and Alain Aspect
Every time one comes back to the problem we have just presented, one cannot help asking the question: is there a real problem? It must be acknowledged that the answer to this question can vary, even among the greatest physicists. In 1963, R. Feynman gave a first answer to this question in his famous physics lectures [48]: "This point was never accepted by Einstein... it became known as the Einstein-Podolsky-Rosen paradox. But when the situation is described as we have done it here, there does not seem to be any paradox whatsoever...". Two decades later, Feynman expressed a radically different opinion, again about the EPR situation: "we have always had a very great difficulty in understanding the world view that Quantum Mechanics implies... It has not yet become obvious to me that there is no real problem... I have always deluded myself by confining the difficulties of Quantum Mechanics into a smaller and smaller corner, and I find myself more and more troubled by this particular point. It seems ridiculous that this point can be reduced to a numerical question, the fact that one thing is bigger than another. But there it is: it is bigger..."
Alain Aspect
We must be grateful to John Bell for having shown us that philosophical questions about the nature of reality could be translated into a problem for physicists, where naive experimentalists can contribute. 
Alain Aspect (Submitted on 2 Feb 2004)



Friday, 31 October 2014

Scare me (or Do you want to reassure me) ...

The blogger is celebrating Halloween today in his own way, by summoning a few physicists who do not hesitate to talk about the nightmare scenario of high-energy physics, namely no phenomena beyond those predicted by the Standard Model being observable at the LHC. The goal is obviously to reassure ourselves by showing that these same physicists are thinking about what could move their discipline forward.


... Mr Shifman
String theory appeared as an extension of the dual resonance model of hadrons in the early 1970s, and by the mid-1980s it raised expectations for the advent of “the theory of everything” to Olympic heights. Now we see that these heights are unsustainable. Perhaps this was the greatest mistake of the string-theory practitioners. They cornered themselves by promising to give answers to each and every question that arises in the realm of fundamental physics, including the hierarchy problem, the incredible smallness of the cosmological constant, and the diversity of the mixing angles. I think by now the “theory-of-everything-doers” are in disarray, and a less formal branch of string theory is in crisis [a more formal branch evolved to become a part of mathematics or (in certain occasions) mathematical physics].
At the same time, leaving aside the extreme and unsupported hype of the previous decades, we should say that string theory, as a qualitative extension of field theory, exhibits a very rich mathematical structure and provides us with a new, and in a sense superior, understanding of mathematical physics and quantum field theory. It would be a shame not to explore this structure. And, sure enough, it was explored by serious string theorists. 
The lessons we learned are quite illuminating. First and foremost we learned that physics does not end in four dimensions: in certain instances it is advantageous to look at four dimensional physics from a higher-dimensional perspective... A significant number of advances in field theory, including miracles in N = 4 super-Yang-Mills... came from the string-theory side...
... since the 1980s Polyakov was insisting that QCD had to be reducible to a string theory in 4+1 dimensions. He followed this road... arriving at the conclusion that confinement in QCD could be described as a problem in quantum gravity. This paradigm culminated in Maldacena’s observation (in the late 1990’s) that dynamics of N=4 super-Yang- Mills in four dimensions (viewed as a boundary of a multidimensional bulk) at large N can be read off from the solution of a string theory in the bulk... 
Unfortunately (a usual story when fashion permeates physics), people in search of quick and easy paths to Olympus tend to overdo themselves. For instance, much effort is being invested in holographic description in condensed matter dynamics (at strong coupling). People pick up a supergravity solution in higher dimensions and try to find out whether or not it corresponds to any sensible physical problem which may or may not arise in a condensed matter system. To my mind, this strategy, known as the “solution in search of a problem” is again a dead end. Attempts to replace deep insights into relevant dynamics with guesses very rarely lead to success.
(Submitted on 31 Oct 2012 (v1), last revised 22 Nov 2012 (this version, v3))

... Mr White
In his overview talk[1] at Strings 2013, David Gross discussed the “nightmare scenario” in which the Standard Model Higgs boson is discovered at the LHC but no other new short-distance physics, in particular no signal for SUSY, is seen. He called it the “extreme pessimistic scenario” but also said it was looking more and more likely and (if it is established) then, he acknowledged
“We got it wrong.” “How did we misread the signals?” “What to do?”.
He said that if it comes about definitively the field, and string theorists in particular, will suffer badly. He said that it will be essential for theorists who entered the field most recently to figure out where previous generations went wrong and also to determine what experimenters should now look for.
In the following, I will argue that a root cause has been the exaggeration of the significance of the discovery of asymptotic freedom that has led to the historically profound mistake of trying to go forward by simply formulating new short-distance theories, supersymmetric or otherwise, while simultaneously ignoring both deep infrared problems and fundamental long-distance physics.
In his recent “Welcome” speech[2] at the Perimeter Institute, Neil Turok expressed similar concerns to those expressed by Gross. He said that
“All the {beyond the Standard Model} theories have failed ... Theoretical physics is at a crossroads right now ... {there is} a very deep crisis.”
He argued that nature has turned out to be simpler than all the models - grand unified, super-symmetric, super-string, loop quantum gravity, etc, and that string theorists, especially, are now utterly confused - with no predictions at all. The models have failed, in his opinion, because they have no new, simplifying, underlying principle. They have complicated the physics by adding extra parameters, without introducing any simplifying concepts.

(Submitted on 5 Jun 2014)