The Future of our Universe (Part 1)

Cosmology and String Theory: The Future of our Universe

In order to solve the equations of general relativity, and therefore to be able to predict the future behavior of the cosmic space-time geometry, it is essential to have a precise knowledge of at least three parameters peculiar to the current cosmological state.

These are, for instance: (1) the current average energy density on cosmological scales; (2) the corresponding equation of state, i.e., the average pressure of the dominant cosmological sources; and (3) the current value of the average spatial curvature. Note that the third requirement does not refer to the full space-time curvature, but rather to the curvature of the geometric manifold obtained by taking a spatial “slice” of the Universe at a given time.

It should be stressed that the three reference parameters could be different from the ones just mentioned. In the context of the standard cosmological model, in particular, the equation of state is fixed by assuming that the average current pressure is zero. Then, a measurement of the energy density alone is enough to determine the spatial curvature, which turns out to be positive, zero, or negative depending on whether the energy density is higher than, equal to, or smaller than a certain value called the critical density (about 10^−29 g per cubic centimeter). However, in order to fix the critical density, and compare its value with the present density, we need to determine a third characteristic parameter of our epoch: the Hubble parameter H0, the one controlling (to a first approximation) the present recession velocity of the galaxies.

Given H0, the cosmological energy density, and the pressure (the latter assumed to be zero) as three independent parameters, the standard model can then unambiguously establish whether we live in a Universe that will remain forever in a state of decelerated expansion (critical density, zero spatial curvature), progressively approach the flat and empty Minkowski space (sub-critical density, negative spatial curvature), or stop expanding and subsequently recollapse (super-critical density, positive spatial curvature).
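
As a concrete illustration of the numbers involved, the critical density quoted above follows from the standard relation ρ_c = 3H0²/(8πG). The short Python sketch below is not part of the original text: the fiducial value H0 ≈ 70 km/s/Mpc and the function names are assumptions made here purely for illustration. It reproduces the quoted order of magnitude of 10^−29 g per cubic centimeter and encodes the three possible fates just listed.

# A minimal sketch (an illustration added here, not taken from the text):
# compute the critical density rho_c = 3 H0^2 / (8 pi G) for an assumed
# value of the Hubble parameter, and classify the fate of the Universe in
# the standard (pressureless) model by comparing the actual density to it.
import math

G = 6.674e-8           # Newton's constant in CGS units (cm^3 g^-1 s^-2)
MPC_IN_CM = 3.086e24   # one megaparsec expressed in centimeters

def critical_density(H0_km_s_Mpc):
    """Critical density 3 H0^2 / (8 pi G), in grams per cubic centimeter."""
    H0 = H0_km_s_Mpc * 1e5 / MPC_IN_CM   # convert km/s/Mpc to 1/s
    return 3.0 * H0**2 / (8.0 * math.pi * G)

def standard_model_fate(rho, H0_km_s_Mpc=70.0):
    """Fate of a pressureless Universe with average density rho (g/cm^3)."""
    rho_c = critical_density(H0_km_s_Mpc)
    if rho > rho_c:
        return "super-critical: positive curvature, expansion stops, recollapse"
    if rho < rho_c:
        return "sub-critical: negative curvature, approaches empty Minkowski space"
    return "critical: zero curvature, decelerated expansion forever"

print(critical_density(70.0))        # about 9.2e-30 g/cm^3, i.e. of order 10^-29
print(standard_model_fate(1.0e-30))  # a sub-critical example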

To test the standard model, and hence decide indirectly which of the possible fates awaits the Universe, astronomers and astrophysicists have employed all their skills to obtain more and more accurate measurements of the observable quantities characterizing the current cosmological state. Those observational developments brought a first surprise about thirty years ago, one that forced us to consider a modification of the simplest, original form of the standard model.

The standard model originally assumed that the type of matter currently representing the dominant form of energy on cosmological scales should be made of atoms, and in particular of the protons and neutrons present in their nuclei, contained not only in planets and stars but also (in great abundance) in the dust filling galactic and intergalactic space. We may refer to this type of matter collectively as baryonic matter, since baryons are a class of “heavy” elementary particles, with the proton as their lightest and most stable member.

Why should baryons represent the currently dominant component of the cosmic energy? The answer is simple. Since baryons are heavy particles, as the Universe becomes progressively colder their kinetic energy becomes negligible, i.e., they become non-relativistic, almost static particles which can on average be described in terms of a zero-pressure gas. Then, according to the Einstein equations, their energy density decreases with time as the reciprocal of the volume, i.e., the reciprocal of the third power of the spatial radius. On the other hand, the energy density of relativistic particles – hence that of radiation – decreases more rapidly, as the reciprocal of the fourth power of the radius (see Chap. 2).
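
In formulas (a sketch of the scaling laws just described, using standard notation where a(t) denotes the spatial radius, or scale factor, of the Universe):

\[
\rho_{\rm matter} \propto a^{-3}, \qquad
\rho_{\rm radiation} \propto a^{-4}, \qquad
\frac{\rho_{\rm radiation}}{\rho_{\rm matter}} \propto \frac{1}{a},
\]

so an expanding Universe dilutes radiation faster than non-relativistic matter.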

This implies that, as the Universe has expanded, the radiation energy density has been diluted much faster than the baryonic energy density, and today should have become a subdominant component of the fluid filling the Universe on cosmological scales.

This prediction is well confirmed by direct observations. Collecting together all the radiation and the relativistic particles currently observed, the result is that their energy density is about one ten-thousandth (i.e., 10^−4 times) the critical density, with the main contribution to this number coming from the cosmic electromagnetic background already mentioned in previous chapters. Summing instead all the baryons within the galaxies, the intergalactic dust, and all visible matter, one gets an energy density which is about one hundredth of the critical value, hence one hundred times bigger than the radiation energy density. Therefore, it would seem safe to conclude that the Universe is presently dominated by a gas of non-relativistic baryonic particles with zero average pressure.

This quite simple conclusion, consistent with the standard model, has nevertheless been blatantly contradicted by other astrophysical observations. In fact, if the main gravitational source is a gas with zero pressure, then the Einstein cosmological equations tell us that the ratio between the density of this gas and the critical density is proportional (with a factor of two) to a kinematical quantity called the deceleration parameter, whose value depends only on the acceleration of the Universe. Now, the measurements of this parameter, despite large errors and uncertainties, have provided us with a value which, as early as the 1970s, was known to be of order one, thus implying that the matter density is of the same order as the critical density. But the baryons – as stressed above – have a density which is only one hundredth of the critical value, so they cannot be the current dominant form of energy!
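
In symbols (a sketch of the relation described above, with Ω the ratio of the matter density ρ to the critical density ρ_c, and q_0 the present deceleration parameter):

\[
\Omega \;\equiv\; \frac{\rho}{\rho_c} \;=\; 2\,q_0 \qquad \text{(pressureless matter)},
\]

so a measured q_0 of order one implies a matter density of the order of the critical density.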

This sort of enigma, also called the missing mass problem in the 1970s, can be solved by assuming that the dominant form of energy in the present Universe is made of non-baryonic, non-relativistic matter which is invisible to optical observations (or to other types of direct electromagnetic detection), and whose effects are of a purely gravitational nature, e.g., through its influence upon the space-time curvature and the expansion rate of the Universe.

Such a cosmic fluid has been dubbed dark matter. The introduction of this matter component undoubtedly explains not only the discrepancy between the near-critical total density and the much smaller baryon density, but also other important astrophysical observations. As an example, the presence of dark matter in the galactic halo explains why the stars rotate around the galactic center more rapidly than is predicted by the standard gravitational theory under the assumption of an empty interstellar medium.

Over the last twenty years, the dark matter hypothesis has inspired a very great deal of research, both theoretical and experimental. From a theoretical perspective, various attempts have been made to develop physically “acceptable” models of dark matter. Indeed, the crucial question is: If it is not baryonic, what kinds of particles are the basic components of dark matter? A number of models have been studied, with conventional non-baryonic particles (e.g., neutrinos), with more exotic particles (axions, dilatons, etc.), and also with supersymmetric particles (photinos, gravitinos, etc.). On the experimental side there have been attempts to detect this kind of “invisible” matter either directly or indirectly, exploiting various types of observation. For instance, astrophysical observations measure the micro-lensing effect, i.e., the small (“micro”) amplification of the light rays emitted by stars due to the gravitational field of dark matter. Other dark matter searches are based on underground observations, i.e., observations carried out with particle detectors located underground, in order to remove other signals (like those produced by cosmic rays) that could mask the effects due to the interaction of the dark particles with the detector.

Today, the issues concerning the identification and the direct observation of dark matter have not yet been fully clarified. However, it is legitimate to say that, thanks to the introduction of dark matter, a version of the standard model was developed and improved over the last twenty years of the last century, a version that until recently seemed to be able to explain all the observations concerning the present state of the Universe, and even to match quite well with the predictions of the inflationary scenario. In fact, the presence of dark matter allowed the computation of the small anisotropies of the cosmic radiation in good agreement with the precision measurements that have been made, starting with the COBE experiment, since 1992.

But in this situation of idyllic agreement between theory and observation, a storm was already on the way. In the meantime, a set of data and observations was being collected, revealing a rather sharp contrast with this scenario. These contradictions came as something of a bombshell during the years 1997–1998, producing a crisis for the standard dark matter scenario. In fact, according to those observations, not only baryons, but even dark matter (which should in any case be present within all galaxies with a density one hundred times greater than the baryonic density) cannot account for the cosmic component that is dominant today on large scales!

What are the revolutionary observations that led us to modify the standard assumptions about the current Universe so drastically? They are similar to (but more accurate than) the ones that, almost a century ago, allowed us to discover the Hubble law, linking the distance with the redshift of the most distant objects we are able to observe. In particular, these observations concern supernovae, huge nuclear explosions that mark the endpoint of the “ordinary” life of highly massive stars, transforming them into neutron stars (and possibly black holes).

Why do we focus on supernovae to measure the geometry of the Universe? There are two main reasons. First of all, because they are highly intense sources of light, and hence can be observed at very great distances; some of them are so far from us that the light emitted by their explosion reaches us after traveling for a time comparable with the present Hubble time 1/H0, so that it arrives from regions of space located very near to the present horizon. The other reason is that their intrinsic luminosity (also called absolute luminosity), i.e., the amount of light (and energy) they emit per unit time, is relatively well known, and should only depend on the type of supernova considered. (The particular class of supernovae analyzed for those observations is called Type Ia.) Knowing their absolute luminosity, it is possible to express their apparent luminosity, i.e., the amount of light that reaches us, as a function of their distance or, better, as a function of the redshift z suffered by the light of those supernovae during its journey, due to the expansion of the Universe (see Chap. 2). The observed data can then be used to construct what is known as the Hubble diagram, which gives the apparent luminosity of the observed supernovae as a function of their redshifts. And herein lies the crucial result of these observations.
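
Schematically (a sketch in standard notation, not taken from the text): if L is the absolute luminosity and d_L(z) the so-called luminosity distance, which depends on the redshift and on the cosmological model, the apparent luminosity received on Earth and the equivalent apparent magnitude m are

\[
\ell(z) \;=\; \frac{L}{4\pi\, d_L^{\,2}(z)},
\qquad
m(z) \;=\; M + 5\,\log_{10}\!\left[\frac{d_L(z)}{10\ {\rm pc}}\right],
\]

where M is the absolute magnitude corresponding to the absolute luminosity L.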

In fact, if we compute the redshift as a function of the distance using the equations of the standard model, we obtain a relation which is linear only to the first approximation – for z sufficiently less than one – in agreement with the well-known Hubble law. In general, the relation between redshift and distance is nonlinear, and depends on the geometry under consideration. The geometry itself depends in turn on the amount of matter and energy present, and on how these sources warp the space-time. Thus, if we draw a graph of the apparent luminosity (or rather the apparent magnitude, to use the astronomers’ jargon) versus z, we have a number of different possible curves, each of them corresponding to different possible models of the Universe with different pressure and energy-density contents.
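
As an illustration of how such curves can be generated (a minimal numerical sketch, not from the original text: it assumes spatially flat models, a constant dark-energy pressure-to-density ratio of −1, a fiducial H0 = 70 km/s/Mpc, and hypothetical function names), the following Python snippet computes the luminosity distance and the corresponding magnitude offset m − M for a matter-only model and for a model with about seventy per cent dark energy:

# A minimal numerical sketch (assumed, not from the text): the luminosity
# distance d_L(z), and hence the predicted apparent magnitude, differs
# between a matter-only model and a model with ~70% dark energy.
import math

def luminosity_distance(z, omega_m, omega_de, H0=70.0, steps=2000):
    """d_L in Mpc for a spatially flat model with constant dark energy (w = -1)."""
    c = 299792.458                      # speed of light in km/s
    # comoving distance: integrate c dz' / H(z') with the trapezoidal rule
    integral = 0.0
    dz = z / steps
    for i in range(steps + 1):
        zi = i * dz
        E = math.sqrt(omega_m * (1 + zi)**3 + omega_de)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight * dz / E
    d_c = c / H0 * integral
    return (1 + z) * d_c                # luminosity distance = (1 + z) * comoving

def distance_modulus(d_L_Mpc):
    """Apparent minus absolute magnitude, m - M = 5 log10(d_L / 10 pc)."""
    return 5.0 * math.log10(d_L_Mpc * 1e6 / 10.0)

for z in (0.5, 1.0):
    mu_matter = distance_modulus(luminosity_distance(z, 1.0, 0.0))
    mu_lambda = distance_modulus(luminosity_distance(z, 0.3, 0.7))
    # the dark-energy model predicts fainter (larger m) supernovae at a given z
    print(z, round(mu_matter, 2), round(mu_lambda, 2))

At a given redshift the dark-energy model yields a larger luminosity distance, hence fainter apparent magnitudes; it is along curves of this second kind that the observed supernovae turn out to align.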

If we now compare these curves with the data from supernova observations, plotting the magnitude against the redshift, the result is that the supernovae tend to align themselves along curves corresponding to a geometry generated by sources with negative pressure! Hence, the currently dominant energy density can be neither baryonic matter nor non-relativistic dark matter, since both of them give zero cosmological pressure. There must therefore be, at the cosmological level, a hitherto unknown form of energy with negative pressure, suggestively called dark energy.

The most recent observations, obtained by combining the latest supernova data and the CMB data of the WMAP satellite, seem to confirm that the current value of this dark energy density represents roughly seventy per cent of the total cosmological energy density, while the remaining thirty per cent almost completely consists of dark matter, except for the tiny contribution due to baryons (of the order of one per cent) and the extremely small contribution associated with radiation (of the order of one ten-thousandth). It is indeed fascinating to find that our Universe is almost completely made up of components that are invisible (apart from their gravitational effects).

The above observations also tell us how large the dark energy pressure must be. Taking the ratio between the pressure and the energy density, we find that the result must be a negative number, very close to the value −1. And this leads to another very interesting outcome, already mentioned at the beginning of this chapter: the current Universe, dominated by this dark energy, must be expanding with a positive acceleration.

Indeed, according to Einstein’s equations, the acceleration with which the spatial radius of the Universe changes with time can be obtained by summing the contributions of the energy density and the pressure, and flipping the sign of the final result.

The sign flipping is due to the fact that ordinary gravity is an attractive force, and its action tends to slow down the expansion, producing a negative acceleration, i.e., a deceleration. However, if the pressure is negative, its contribution with the sign flipped corresponds to a repulsive force, producing a positive acceleration and a continuous increase in the expansion velocity.

The pressure contribution to Einstein’s equations, on the other hand, enters with a multiplicative factor of three compared with the contribution of the energy density (because the pressure of an isotropic fluid is equally distributed along three spatial dimensions, while the energy density is associated with the time dimension, which is unique). If the ratio between pressure and energy density is very near to −1, i.e., if their intensities are roughly equal in modulus, as suggested by the data, it is then evident that the pressure “wins” against the energy density, producing an overall repulsive force which leads to an accelerated cosmic evolution.
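
In formula form (a sketch of what is described in words above, in units where the speed of light is one, with a the spatial radius, ρ the energy density and p the pressure):

\[
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr),
\]

so that for p/ρ close to −1 one has ρ + 3p ≃ −2ρ < 0, and the acceleration ä becomes positive.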

The question which arises naturally, at this stage (and which provides the link with string cosmology) is the following: What type of matter or field does this elusive dark energy correspond to?

Is it produced by some known particle, or is it a more exotic effect emerging only on very large scales, i.e., at the cosmological level? In other words, what is the greater part of our Universe (almost seventy per cent) made of?

Despite the fact that the scientific research in this field started only very recently, there are already possible (and even plausible) answers to these questions. The first, and historically most natural, candidate to play the role of dark energy is undoubtedly the so-called cosmological constant Λ, a term introduced by Einstein in his equations just to simulate a “cosmic repulsion”. According to modern quantum field theory this term is usually interpreted as the vacuum energy density, i.e., as the energy due to the sum of all the microscopic oscillations that the quantum fields must have, even in their ground state (i.e., in their lowest energy level), due to the Heisenberg uncertainty principle. Thanks to these oscillations, even in the absence of any body or particle, the vacuum acquires an average energy density which is constant, has a negative pressure (equal in modulus to the energy density) and can, like all forms of energy, generate a cosmic gravitational field.
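
In formulas (a sketch using standard conventions, again with the speed of light set to one), the vacuum associated with a cosmological constant Λ behaves as a perfect fluid with

\[
\rho_\Lambda \;=\; \frac{\Lambda}{8\pi G} \;=\; {\rm constant},
\qquad
p_\Lambda \;=\; -\,\rho_\Lambda,
\]

i.e., with a pressure-to-energy-density ratio exactly equal to −1.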

The presence of a cosmological constant thus provides an explanation for the observed cosmic acceleration which works quite well phenomenologically. Indeed, the first supernova data were interpreted as evidence for a non-zero cosmological constant with a currently dominating energy density. However, there are serious formal and conceptual problems associated with this simple explanation, and most researchers are now inclined to reject it.

In fact, the cosmological energy density that we observe turns out to be considerably smaller than the typical vacuum energy computed using the current models of elementary particle physics.

Recall that the mass density of the dark energy has to be of the same order as the critical density, i.e., about 10^−29 g per cubic centimeter. Why is it so small? It would be easier to explain, using a symmetry principle, if the cosmological constant were exactly zero. Instead, a small but non-zero value leads to a fine-tuning problem: an extremely accurate and unnatural adjustment of the parameters of the theory is required to obtain this value, and this issue has not yet been resolved in a fully satisfactory way.

Another open issue concerns the fact that the current dark energy density has a value quite similar to the energy density of dark matter. Actually, if the dark energy density is due to a cosmological constant, its value does indeed remain constant in time, always fixed at the value we now observe. The dark matter density, on the other hand, decreases in time, because it is inversely proportional to the expanding cosmological volume. Thus, its past value was bigger than the current value, while its future value will be much smaller. We are then led to the so-called problem of cosmological coincidence: Why are the dark matter and dark energy densities approximately equal only in the current epoch?
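
Schematically (a sketch, not from the text): since the dark energy density stays constant while the dark matter density scales as the inverse cube of the spatial radius a, their ratio evolves as

\[
\frac{\rho_{\rm dark\ matter}}{\rho_{\rm dark\ energy}} \;\propto\; a^{-3},
\]

so it was, for instance, eight times larger when the Universe had half its present size, and will be eight times smaller once the radius has doubled; near-equality thus singles out our particular epoch.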

The Future of our Universe (Part 2)
