Description: In this lecture, the professor first talked about the properties of the universe, then discussed Hubble's Law, gave an example of isotropy without homogeneity, etc.
Instructor: Alan Guth
Lecture 4: The Kinematics o...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK. I think it's time for us to start. Last time we talked about the Doppler shift and a little bit of special relativity. Today we'll be going on to talk more about cosmological topics. We'll be talking about kinematically how one describes a homogeneously-expanding universe like the one that we think we're living in, to a very good approximation.
In that case, let's get started. What I want to do today is talk about some of the basic descriptive properties of the universe as we will describe it. The universe is, of course, a very complicated place. It includes you and me, for example, and we're pretty complicated structures. But cosmology is not really the study of all that. Cosmology is the study of the universe in the large, and we'll begin by discussing the universe on its largest scales, on which it can be approximated by a very simple model, which we'll be learning about.
So in particular, on very large scales, the universe is pretty well described by three properties, which we will talk about one by one. The first is isotropy, and that just comes from some Greek root, which means the same in all directions.
Now, of course, as we look around say the room here, the room doesn't look the same in all directions. The front of the room looks different from the back of the room. And looking towards Mass Ave looks different from looking towards the river, and looking further out into space, looking towards the Virgo cluster, which is the center of our local super cluster, looks rather different from looking in the opposite direction.
But when one gets out to looking at things on the very large scale where in this case very large means on the scale of a few hundred million light years, things begin to look very isotropic. That is no matter what direction you look, as long as you're averaging over these very large scales, you find that you see pretty much the same thing.
This becomes most emphatic when one looks at the cosmic background radiation, which is really the furthest object that we can look at. It's radiation that was emitted shortly after the Big Bang. The history of the cosmic background radiation in a nutshell is worth keeping in mind here. I'll refer to it as the CMB, for cosmic microwave background.
And in a nutshell, the things to keep in mind in thinking about this history is that until about 400,000 years after the beginning, the universe was a plasma, or maybe I should say more accurately that the universe was filled with plasma. And within a plasma, photons essentially go nowhere. They're constantly moving at the speed of light, but they have a very large cross section for scattering off of the free electrons that fill the plasma. And that means that the photons are constantly changing directions and the net progress in any one direction is negligible.
So the photons are frozen with the matter, I'll say frozen inside the matter, which means that their net velocity relative to this plasma is essentially zero. But according to our calculations, and we'll learn later how to do these calculations, at about 400,000 years after the Big Bang, the universe cooled enough so that it neutralized, and then it became a neutral gas like the air in this room. And the air in this room, as you know, is very transparent to photons, and that means that light travels from my face to your eyes on straight lines and allows you to see an image of what my face looks like and vice versa, by the way.
And it's a little dicey to extrapolate something from the room to the universe. The orders of magnitude are very different. But in this case, the physics actually ends up being exactly the same. Once the universe becomes filled with a neutral gas, it really does become transparent to the photons of the cosmic microwave background.
So these photons have for the most part been travelling on perfectly-straight lines since 400,000 years after the Big Bang. And that means that when we see them today, we are essentially seeing an image of what the universe looks like at 400,000 years after the Big Bang. So at 400,000 years, gas neutralized and became transparent.
This, by the way, has a name, which is what it is universally called in cosmology-- nobody actually understands why it's called this, by the way-- but the name is recombination. And the mystery is what the re is doing there, because as far as we know, the gas is combining for the first time in the history of the universe, but that's otherwise what everybody calls it.
I did actually once ask Jim Peebles, who might be the person who first called it this, why it was called this, and he told me that this is what the plasma physicists called it, so it was natural to just pick up the same word when he was doing cosmology, so maybe that's how the word originated. But coming from the point of view of cosmology, it is a misnomer, in that for the theory that we're discussing, the prefix re here has absolutely no business being there.
So what do we see when we look at the cosmic microwave background? We see that it is unbelievably isotropic. What we find is that there are deviations in the temperature of the radiation. The intensity is measured as an effective temperature. There are deviations in the temperature of the radiation of a fractional amount of about 10 to the minus 3, which is a very small number, but it's even stronger than that.
This deviation of one part in 10 to the 3 has a particular angular pattern, and it's exactly the angular pattern that you would expect if the solar system were moving through the cosmic microwave background, and that's how we interpret this 10 to the minus 3 effect: motion of the solar system through the CMB. And after removing the effect of the motion--
Now actually, when we remove it, it's not like we have an independent way of measuring it. We don't really, not to enough accuracy. So we're really just fitting it to the data and removing it. But when we do the fit to the data, it's a three-parameter fit, that is, we have three components of a velocity to fit. We have a whole angular pattern on the sky, and we only have three numbers to play with. So it's strongly constrained, even though we're using the data itself to determine what we think our velocity is relative to the cosmic microwave background.
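To leading order in v/c, the angular pattern produced by such a motion is a dipole, which is the three-parameter form being fitted here; the three fit parameters are just the three components of the velocity. A sketch of the pattern, with theta the angle from the direction of motion and v the solar system's speed relative to the CMB frame:

$$\frac{\Delta T(\theta)}{T} \;\simeq\; \frac{v}{c}\,\cos\theta, \qquad \frac{v}{c} \sim 10^{-3}.$$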
And after removing it, then what we find is that the residual deviations, delta T over T, are only at the level of about 10 to the minus 5, 1 part in 100,000, which is really unbelievably isotropic, unbelievably uniform. One time I decided to think about how round that is, how much the same in all directions it is, by asking myself the question, is it possible to grind a marble that would be spherical to an accuracy of 10 to the minus 5? And you can think about that yourself. The answer I came up with was that yes, it is, but it really strains the limits of our technology. It corresponds to sort of the best technology we have for building highly-precise lenses-- basically fractions of a wavelength of light.
So to be round to 1 part in 10 to the 5 is really being unbelievably round, unbelievably isotropic. And that's the way the universe looks.
Next item in our description of the universe is homogeneity. Homogeneity is harder to test with precision, because it means looking out into space and trying to see, for example, if the density of galaxies is uniform as a function of distance. We always talked about it as a function of angle-- that's isotropy, and it's very uniform, where one could make very precise statements about the cosmic microwave background. But to talk about homogeneity, one has to be able to talk about how the galaxy distribution varies with distance, and distances are very hard to measure cosmologically.
So as far as we can tell, the universe is perfectly compatible with being homogeneous, again, on length scales of a few hundred million light years, but it's hard to make any very precise statement. There are, of course, relationships between isotropy and homogeneity. Homogeneity, by the way-- I didn't define that. I assumed you knew what it meant, but I should definitely define it.
Isotropy means the same in all directions. Homogeneity means the same at all places. So sometimes these are just put together and called uniformity because they are very similar concepts. They are, however, distinct concepts logically, and it is worth spending a little time understanding how they connect to each other, in particular how you can have one without the other is the best way to understand what they individually mean.
So suppose, for example, we had a universe that was homogeneous but not isotropic. Is that possible, and if so, what would be an example of a feature that would be described that way? Let me throw it out to you. We want to be homogeneous, but not isotropic. Yes.
AUDIENCE: It would be like a periodic universe-- like a cylinder pointing in the z direction. I mean, matter is all homogeneous within the cylinder, but there is a preferred direction, so it's not isotropic.
PROFESSOR: A preferred direction fixed by the direction of the periodicity? That is an example. That's right. That's right. Let me ask if there are other examples people could think of. Yes.
AUDIENCE: There are galaxies everywhere with constant density, but they're all aligned in a particular direction.
PROFESSOR: That's right. That's right. Galaxies have a shape, in particular they have an angular momentum. The angular momenta could be aligned, and that would be an example of a universe that would be homogeneous but not isotropic. Very good. Very good.
Another example that I'll just throw out, which I think maybe is simple to think about, is that the universe is filled with this cosmic microwave background radiation. Suppose all the photons going in the z direction were more energetic than the ones going in the x and y directions. That would be a possible situation that could be completely homogeneous, but would be an example of something that would not be isotropic.
So there are many examples you can come up with. I'm very glad you came up with the ones you did. That's great. Going the other way, it's harder. Suppose we try to think of a universe that's isotropic but not homogeneous. Isotropic, by the way, does depend on the observer, so let's first talk about isotropic relative to us.
I was just going to say, imagine a universe that would be isotropic relative to us, but would not be homogeneous. Yes.
AUDIENCE: Could it be like if we lived in some shell.
PROFESSOR: That's right. A shell structure.
AUDIENCE: In all direction, the shell would be there.
PROFESSOR: That's right. That's right. I think I'll even draw that on the blackboard. Example of isotropy without homogeneity. So we would be here, and the matter could be distributed in a perfectly spherically-symmetric distribution with us at the center. And that would be an example of something that would be isotropic to us but not homogeneous.
Now, things like that, of course, are considered weird, because we don't think of ourselves as living in any special place in the universe, and that's basically what the Copernican Revolution was all about. And the Copernican Revolution has sunk very deeply into the psychology of scientists. So I think scientists would be very loath to imagine a universe that looked like this, but it does help to understand what these words mean.
If a universe is going to be isotropic to all observers, then it does have to be homogeneous, and that's part of the reason why we're pretty confident that our universe is basically homogeneous: we just decided that it's isotropic to us, and if we decide we're not special, then it has to be isotropic to everybody, and then it has to be homogeneous. If the universe is isotropic to all observers, it is homogeneous.
Now, a thought which I will leave for you to think about between now and the next lecture is whether or not really knowing that a universe is isotropic with respect to two observers is enough to prove that it's homogeneous. That turned out to be a more subtle question than it might sound. I don't know if it sounds subtle or not. I should maybe just tell you basically what the answer is and then you can try to think if you can understand the answer.
In Euclidean space, isotropy about two distinct observers is enough to make it homogeneous, which is kind of what you'd visualize. But if you allow yourself to think about non-Euclidean spaces-- and I know we haven't talked about non-Euclidean spaces yet, so you might not have much in the way of tools to think about it-- think, for example, about surfaces in three dimensions. Surfaces are very good examples of non-Euclidean two-dimensional geometries. And see if you can invent a two-dimensional geometry that would be isotropic about two points, but would not be homogeneous. So that's your thought assignment for next time, not to be handed in, just to be talked about in the lecture next time.
So isotropy and homogeneity are two of the key properties that define the simplicity of our universe on very large scales. The next thing I want to talk about is the expansion of the universe, which is basically characterized by Hubble's law. Last time I think I said I was going to call it the Lemaitre-Hubble law. I decided I'll probably call it Hubble's law.
Now, Hubble, I think, really does deserve credit for demonstrating observationally that the law is true, and that's really what he is getting credit for, and that was not believed until he discovered it. So it really did have a tremendous effect on the course of cosmology.
So Hubble's law says that on average all galaxies are receding from us with a velocity which is equal to a constant, H, called the Hubble constant-- Hubble called it K, by the way, capital K-- times the distance to the galaxy, r. And so it's not true exactly for our universe, but it's true in some average sense, just as isotropy and homogeneity are only true in an average sense.
I want to tell you about the units in which it's measured, and that leads me to the parsec. Let me write this on the board. Astronomers always measure the Hubble constant-- or I will sometimes call it the Hubble expansion rate-- in kilometers per second per megaparsec. And it's a relationship between a velocity and a distance, so kilometers per second is a velocity, and velocity per megaparsec is velocity per distance, which is what it should be.
Notice, however, that I wrote that. A kilometer and a megaparsec are both units of distance. So they actually just have some fixed ratio. So in the end, this Hubble constant really is just an inverse time, and obviously, if you multiply an inverse time times the distance you get a distance per time, which is the velocity, so that works. But it's very seldom quoted as simply an inverse time, instead it's quoted by the units that astronomers like to use. They measure velocities as a normal person would in kilometers per second, but they measure distances in megaparsecs, where a megaparsec is a million parsecs, and a parsec is defined by that diagram.
The base of this triangle is one astronomical unit, the mean distance between the Earth and the sun. And the distance at which the angle subtended by one astronomical unit is one second of arc is what's called a parsec, abbreviated pc. And a parsec is about three light years. I'll write these things on the board. One parsec equals 3.2616 light years, and a megaparsec is a million of those.
Another useful number to keep in mind for converting, if you want to think of H as an inverse time in years, is the equality that 1 over 10 to the 10 years is equal to 97.8-- and it's suitable to remember this as being about 100; you can look up the exact number when you need it-- in these funny units, kilometers per second per megaparsec.
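As a quick numerical check of these conversions, here is a minimal sketch, assuming only standard values for the astronomical unit, the speed of light, and the length of a year:

```python
import math

AU = 1.495978707e11               # astronomical unit, in meters
ARCSEC = math.pi / (180 * 3600)   # one second of arc, in radians
C = 2.99792458e5                  # speed of light, in km/s
YEAR = 3.156e7                    # one year, in seconds

# A parsec is the distance at which 1 AU subtends an angle of 1 arcsecond.
parsec_m = AU / math.tan(ARCSEC)          # in meters
light_year_m = C * 1e3 * YEAR             # in meters
print(parsec_m / light_year_m)            # ~3.26 light years per parsec

# Convert a Hubble rate from km/s/Mpc into a fractional expansion per year.
Mpc_km = parsec_m * 1e6 / 1e3             # one megaparsec, in kilometers
def H_per_year(H_km_s_Mpc):
    return H_km_s_Mpc / Mpc_km * YEAR

# The blackboard equality: 1/(10^10 yr) corresponds to about 97.8 km/s/Mpc.
print(1e-10 / H_per_year(1.0))            # ~97.8
```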
So what is the value of Hubble's constant? It actually has a very interesting and historically-significant history. It was first measured in this paper by Georges Lemaitre in 1927, published only in French and ignored by the rest of the world, at the time at least. It got discovered later. And Lemaitre was not an astronomer. He was a theoretical cosmologist. I mentioned a few times, I think, that he had a PhD from MIT in theoretical cosmology-- in physics, in principle.
And the value that he got based on looking at other people's data, in 1927, was-- I guess, actually, I'll give you the range. He gave two different methods of calculating it and got two slightly different answers. So he had 575 to 625 of these [INAUDIBLE] units, kilometers per second per megaparsec. And two years later, in his famous paper, Hubble got the value of 500 kilometers per second per megaparsec. I have a picture of Hubble too. Yes.
AUDIENCE: That last line on the board right there, where you have 1 over 10 to the 10 years-- is that H?
PROFESSOR: That's just an equality of units.
AUDIENCE: Equality of units.
PROFESSOR: That's just a unit equality. It's relevant to H, because H is measured in those units. But it really is just an equality of units. 1 over 10 to the 10 years has units of inverse time, and kilometers per second per megaparsec has units of inverse time also, because the kilometers and the megaparsecs are both distances and cancel. So both sides have the same units-- the same dimensions, I should say-- and it's just two different ways of measuring the same thing, inverse time.
So in 1929, Hubble published his famous paper in which he got the value of 500, and there's an important difference between the papers by Lemaitre and Hubble. First of all, Hubble was using largely his own data. Lemaitre was using other people's data, mostly Hubble's, actually. And furthermore, Hubble made the claim that the data justified the relationship that v is equal to a constant times r. Lemaitre knew that relation theoretically for a uniformly-expanding universe, which we'll be talking about shortly, but he did not claim to be able to get it from the data. The data he had, he decided, was not strong enough to reach that conclusion, but he was still able to get a value for H by taking the average velocity, dividing it by the average distance, and getting a number.
I think I have Hubble's data next. Yeah, here's Hubble's data. The data obviously was not very good. It only goes up to about 1,000. One curiosity of this graph that you might notice is that the vertical axis is a velocity, meaning it should be measured in kilometers per second, but nonetheless Hubble wrote it as kilometers-- not getting his units right, so minus 10 or something like that on the graded sheet. But somehow it did not stop the paper from getting published in the Proceedings of the National Academy of Sciences, and it became, of course, a monumentally-famous paper.
But you can see that the data is scattered, and it has those nice lines drawn through which guide your eye, but if you imagine taking away the lines, it's not that clear from the data itself that it really is a linear relationship. But it's suggested, at least, and Hubble thought it was pretty convincing, and later Hubble gathered more data for this project, and it did become quite convincing that there is a linear relationship, and today there's no doubt that there is a linear relationship between velocity and distance. At very large distances there are deviations, which we can understand and we'll be talking about later, but basically, at least for moderate distances, one has this linear relationship.
I should mention that the velocity of the solar system through the CMB is also the velocity of the solar system through this pattern of Hubble expansion, and both Hubble and Lemaitre had to make estimates of the velocity of the solar system relative to these galaxies and subtract that out to get things that resemble a straight line.
Lemaitre estimated the velocity of our solar system as 300 kilometers per second, and Hubble estimated it as 280 kilometers per second. So it was a relevant feature, because remember, the maximum velocity there is only 1,000 kilometers per second, so the correction that he's putting in is about a third of the maximum velocity seen. So it's a very important correction, and not one that was easy to determine.
AUDIENCE: What were they using to determine the [INAUDIBLE] CMB?
PROFESSOR: I think they were just looking for what they could assume that would make the average expansion in all directions about the same. To be honest, I'm not sure about that. But that's the only thing I can see that they would have, so I think that must be what they were using.
Now since these ancient times, there have been many measurements of the Hubble expansion rate, and they changed a great deal. So in the '40s through '60s, there was a whole series of measurements dominated by people like Walter Baade and Allan Sandage. And generally speaking, the values came down steadily from the high values that were measured by Hubble and Lemaitre in the very early days.
When I was a graduate student, if you asked anybody what the Hubble constant was, you always got the same answer: it was somewhere between 50 and 100, still uncertain by a factor of 2, but lower by a factor of 5 or 10 from the values that Hubble was talking about, and it was still a major source of uncertainty in talking about cosmology.
Values started to become more precise around 2001. So in 2001, there was the Hubble Key Project that released its results. The word Hubble here refers to the Hubble satellite, which was named after Hubble-- Edwin Hubble. And they were able to use the Hubble telescope to see Cepheid variables in galaxies that were significantly further away than Cepheid variables had ever been seen before, and thereby make a much better calibration of the distance scale. As you'll learn about when you do your reading, Cepheid variables are crucial to determining the cosmological distance scale.
So the value that they got was much more precise than anything previous: 72 plus or minus 8 of these units, kilometers per second per megaparsec. Meanwhile, things were still controversial. I should have added that when people said it was 50 to 100 when I was a graduate student, it wasn't that people really understood the error bars to be that large. The real situation is that there was a group of astronomers that claimed adamantly that it was 50, and there was another group of astronomers that claimed adamantly that it was 100.
Anyway, if one person is shouting in your ear saying it's 100, and another person is shouting in your ear saying it's 50, the conclusion is that it's 50 to 100, and that's the situation when I was a graduate student. So this was a somewhat high value relative to that argument. The people who were arguing on the low side were still in business at this time, and still in fact also using Hubble telescope data. So Tammann and Sandage, the same year, using the same instrument-- let me put the year here, it's 2001-- Tammann and Sandage were estimating 60, plus or minus, they said, less than 10%. So these didn't quite mesh.
Coming to more modern times, in 2003, WMAP, the satellite called the Wilkinson Microwave Anisotropy Probe-- a satellite dedicated to measuring these minute variations of the cosmic microwave background at the level of 1 part in 100,000-- it turns out that those measurements can be used to estimate the Hubble expansion rate by fitting the data to a theoretical model. And their initial number was 72 plus or minus 5. And that was based on one year of data.
And in 2011, the same WMAP satellite team, based on seven years of data, came up with a number of 70.2 plus or minus 1.4, so very precise. And the most recent number comes from a satellite similar to WMAP, but more recent and more powerful, called Planck, which just released its data last March. And it came up with a somewhat surprisingly low number, 67.3 plus or minus 1.2. Yes.
AUDIENCE: The early measurements there are kind of consistent with one another, and then sometime in the 20th century they make this big jump down, suggesting those early guys were making the same kind of mistake. What was it?
PROFESSOR: Good question. The early guys were making a big mistake in estimating the distance scale, and I'm not sure I understand the details of that, but I think it had something to do with misidentifying Cepheid variables, equating two different types that should not have been compared with each other. But I'm not altogether sure of the details. It was definitely the distance scale they had wrong. The velocities are pretty easy to measure accurately; it was the distances that were very wrong. Yes.
AUDIENCE: There's two types of Cepheids. One has a certain period-luminosity relation, and the other is like a completely different type of star, and so they got mixed up, and you get completely different absolute magnitudes, which will give you two completely different distance estimates. So I don't know how far off, but measuring Cepheids in Andromeda was way off the distance scale, because we thought they were Type 1, but they were actually Type 2.
AUDIENCE: I think the difference between Type 1 and Type 2 is a factor of 4, so that would make sense.
AUDIENCE: Yeah. It's like two completely different linear relations.
PROFESSOR: And intensity goes like 1 over the distance squared, so I think that, I mean, a factor of 4 in intensity I think would mean a factor of 16 in distance estimates. Yes.
AUDIENCE: What I'm noticing is that these are like-- they both have error bars, but they're not within error of each other.
PROFESSOR: Right.
AUDIENCE: Well, this is like current data.
PROFESSOR: So what's going on? Nobody knows for sure. One thing I should mention, though, is that these are what are called 1 sigma error bars, which means that you don't expect them to necessarily agree. You expect the right answer to be within the one sigma error bar 2/3 of the time, but 1/3 of the time it should be outside the error bar.
The question is-- the error bars are on both, but the comparison of this value and this one is usually viewed as something like a 2 and 1/2 sigma effect, which naively, I think, means a probability on the order of 1% or something like that of getting errors that large at random. And it's debated whether or not it's significant. The abstract of the Planck paper uses words something like, there's a tension between their value and other recent values.
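For reference, a minimal sketch of the tail probabilities behind the "1 sigma" and "2 and 1/2 sigma" language, assuming Gaussian errors:

```python
import math

# Two-sided probability that a Gaussian variable falls more than
# n_sigma standard deviations away from its mean.
def two_sided_p(n_sigma):
    return math.erfc(n_sigma / math.sqrt(2))

print(two_sided_p(1.0))   # ~0.32: about 1/3 of the time you land outside 1 sigma
print(two_sided_p(2.5))   # ~0.012: "on the order of 1%" for a 2.5 sigma discrepancy
```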
One does see things like that happen more frequently than the probabilities indicate, which I think proves a theorem that experimenters always underestimate their error bars. But there's no absolute proof of that theorem. So these things really are debatable. People don't know-- there are many things that turn up regularly in experimental physics, and especially in cosmology, where people have different opinions about whether or not it's pointing to something very important or something that's going to go away.
So very often they go away, that's a fact. But you never know, in any one case, whether it's something important that will become more definite as further measurements are made or whether it's just a spurious effect that will disappear in a few years. Yes.
AUDIENCE: So I imagine in the 1940s, when people started saying that Hubble, for whom the constant is named, was off by a factor of 10, that was very controversial. Was there any kind of sociological trend where people may have changed their data to make it seem, oh, we're not that far off from the Hubble standard? Has this happened a couple of times before?
PROFESSOR: The question is, did people perhaps try to fudge their data during a period in the middle to make it look more like Hubble's? I think-- I don't know, and there were, as I said, pretty much through the middle of the 20th century, two groups, one of which was getting a high value, and one of which was getting a low value. The high value is where, in fact, disciples of Hubble, rather directly-- wait a minute. That's not right.
The most direct disciple of Hubble was Allan Sandage, and he was, in fact, advocating the low value. So the sociological trends are not that clear. What is clear is that they were way off. I was going to add, concerning the way-offness, that it really does have, or did have, a very significant effect on the history of cosmology, because when one looks at a Big Bang model and tries to use that model to estimate when it all started, what you're doing is extrapolating backwards, asking when everything was on top of each other, given the speeds that things are moving at now.
There is more that goes into the calculation than just H. It depends on your model, the matter, and things like that. But nonetheless, H is obviously a crucial ingredient there. The faster things are moving outward now, then when you extrapolate backwards, the faster they're moving inward, and the younger the universe is. And to a very good degree of reliability, any age estimate-- and we'll make age calculations later-- but any age estimate is proportional to 1 over the Hubble parameter, 1 over the Hubble expansion rate.
So if you're off by, now we would say, a factor of 7 between Hubble's value and the current value-- 500 versus 70-- if you're off by a factor of 7, you get ages for the universe which are a factor of 7 smaller than what you should be getting. And this was noticed early on. People were calculating ages of the universe in Big Bang models and getting numbers like 2 billion years instead of 14 billion years, a factor of 7.
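To see the factor of 7 numerically, here is a rough sketch using the unit equality from earlier; the true age also depends on the matter content of the model, so 1/H only sets the scale:

```python
# Rough "Hubble time" 1/H in years, using 1/(10^10 yr) ~ 97.8 km/s/Mpc.
def hubble_time_years(H_km_s_Mpc):
    return (97.8 / H_km_s_Mpc) * 1e10

print(hubble_time_years(500))   # ~2.0e9 years, with Hubble's 1929 value
print(hubble_time_years(70))    # ~1.4e10 years, with a modern value
```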
And even back in the '20s and '30s, there was significant geological evidence that the Earth was much older than 2 billion years, and people understood something about the evolution of stars, and it would seem pretty clear that the stars were older than 2 billion years, so you couldn't tolerate a universe that was only 2 billion years old. And it led to very significant problems with the development of the Big Bang theory, and in particular, it certainly gave extra credence to what was called the steady state theory, which you may have heard of, which held that the universe was infinitely old and as it expanded, more matter was created in the steady state theory to fill in the gaps so the density of matter would be constant.
And Lemaitre himself, in his 1927 paper, built a very complicated, by my standards, theory in order to get the age to be compatible. Instead of having a Big Bang model-- his 1927 model was not a Big Bang model-- his 1927 model started out in a static equilibrium, where he had a positive cosmological constant, which produces a repulsive gravity, like what we talked about in my opening lecture, balancing against the normal attractive gravity of ordinary matter, producing what was almost a static universe of exactly the type that Einstein had been advocating.
But Lemaitre's universe started out with just slightly less mass density than Einstein would have had, so it gradually started to get bigger and bigger. The force of ordinary gravity wasn't quite enough to hold it together, and when that happened, it starts to get bigger and bigger very slowly initially and then picks up speed, and that allows you to have universes that are much older than what you would get in a straightforward Big Bang model.
Let's go on. What I want to talk about next is what this Hubble expansion is telling us about the universe, and I want to go through this a little bit carefully because it's a very important point even though it's possible you've already figured it out from the reading. I don't know for sure.
Naively, Hubble's law makes it sound like we're saying that we are the center of the universe after all. Copernicus was really wrong. Everything is moving away from us, so we must be the center. But that's actually not the case. It turns out that when you look at things a little bit carefully, and that's what we'll do in this diagram, if Hubble's law looks like it holds to one observer, it in fact, also looks like it holds to any other observer as long as you recognize that there's no way to measure absolute velocity.
So we think that we're at rest, but that's really just our definition of the rest frame. If we lived on some other galaxy, we would equally well attribute the state of being at rest to that other galaxy. And that's what's being shown in this picture, which I Xeroxed from Steve Weinberg's book so this might seem familiar to you if you've read that chapter yet. It shows just expansion in one direction, but that's enough to illustrate the point.
In the top diagram, we imagine that we are living on the galaxy labeled A. The other galaxies are moving away from us with velocities proportional to the distance, and we've spaced these galaxies in the diagram evenly, so the nearest galaxies are moving away at v, and then the next ones are moving away at 2v. And if we continued, it would be 3v, 4v, et cetera, all the way out to infinity.
And what we want to do in going from A to B is to ask: suppose we were living in exactly this universe as described on line A, but suppose we were living in galaxy B and considered galaxy B to be at rest. So we'd describe everything from the rest frame of galaxy B. Then galaxy B would have no velocity, because that would define the rest frame.
When you change frames-- this is all done in the context of Galilean transformations; we'll build more relativistic models later-- then in the context of a Galilean transformation, if you go from one frame to a frame moving at a constant velocity, the only thing you have to do to transform velocities is to add to each velocity a fixed velocity, the velocity difference between the two frames.
So to go from the top to the bottom picture, what we do in all cases is just add a velocity, v, to the left to each velocity. And that takes the velocity of B, which was v to the right-- when we add a v to the left, we get 0. It does the right thing there, which is what defines the transformation we're trying to make. We're trying to make the transformation that brings B to rest.
And that means that when we add v to the left to the velocity of Z, which already had a v to the left, we get 2v to the left. When we add v to the left to Y, which had 2v to the left, we get 3v to the left. Going the other direction, when we add v to the left to C, which had a velocity 2v to the right, we're left with a velocity of 1v to the right, and that gives us what we have on the second row.
And if we look at the second row from the point of view of B, the galaxies one away are each moving away from us with a velocity v. The galaxies two away are moving away from us with velocities 2v, et cetera. That's exactly the same pattern. So even though the Hubble expansion pattern is phrased in a way that makes it look like you're talking about yourself as the center of the universe, in fact, it does describe a completely homogeneous picture. And it's a picture that, in fact, has a very simple description. It's a picture of just uniform expansion, and I think I have my favorite, at least the best picture I've ever drawn of uniform expansion, on the next slide here.
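Here is a minimal numerical sketch of that frame-switching argument; the one-dimensional, equally spaced, Galilean setup of the Weinberg diagram is the assumption:

```python
# Galaxies ..., Y, Z, A, B, C, ... equally spaced on a line, with
# velocities v = H * r as measured from galaxy A.
H = 1.0                                       # arbitrary units
pos_from_A = {'Y': -2.0, 'Z': -1.0, 'A': 0.0, 'B': 1.0, 'C': 2.0}
vel_from_A = {g: H * r for g, r in pos_from_A.items()}

# Switch to B's point of view: subtract B's position and B's velocity.
pos_from_B = {g: r - pos_from_A['B'] for g, r in pos_from_A.items()}
vel_from_B = {g: u - vel_from_A['B'] for g, u in vel_from_A.items()}

# The same law v = H * r holds again in B's frame.
for g in pos_from_A:
    assert abs(vel_from_B[g] - H * pos_from_B[g]) < 1e-12
print(vel_from_B)   # {'Y': -3.0, 'Z': -2.0, 'A': -1.0, 'B': 0.0, 'C': 1.0}
```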
The idea is that if you look at some region of the universe, the claim-- and the claim is just called homogeneous expansion-- is that each picture at successive times would look identical, but it would look like a photographic blowup. Each picture would just be a bigger image of the same picture, with one important exception, and I did try to draw this correctly: the positions of the galaxies-- this little blob there is supposed to be a galaxy, by the way, in case you can't tell from my great artistry-- the positions of the galaxies just expand uniformly, the pattern of positions, but each individual galaxy does not expand. The individual galaxies maintain their size as the universe undergoes this Hubble expansion.
Now, if we're talking about the very early universe, before there were any galaxies, you would just have basically a uniform distribution of matter, of gas, and that would just uniformly expand, every molecule moving away from every other molecule on average. So this is the picture of Hubble expansion.
And now what I'd like to do is provide a description of how we're going to treat this mathematically. If we have this uniformly-- I'm sorry. Yes.
AUDIENCE: I'm still getting confused whether like the expansion is the galaxies expanding into the universe or if the universe itself is expanding.
PROFESSOR: Yeah. The question, in case you didn't hear it, was that there's some confusion here about whether we should think of the galaxies as moving through space or whether we should think of space itself as expanding. And the answer really is that both points of view should be right. If space were like water, then you could imagine putting little dust grains in the water, little grains of salt or something you can see, and see if they are carried along by the water or not.
But there's no way to mark space. It's intrinsic to the principle of relativity that you can't tell if you're moving relative to space or not. There's no meaning to moving relative to space. And if there's no meaning for you to move relative to space, there is also no meaning for space to move relative to you. They amount to the same thing.
So you can't really tell, and both points of view should be correct. There are cases where you can tell, however, which is not locally but if, for example, you had a closed universe, which we'll talk about later how that works exactly, then you could ask does the volume of a closed universe get bigger with time as this Hubble expansion takes place. And the answer there is yes.
AUDIENCE: That would mean actually, the universe is expanding.
PROFESSOR: The actual universe is expanding. So we will normally think of it, globally at least, as the actual universe expanding. That is how we will think about it. But locally, there's not really any distinction between that and saying that these galaxies are just moving through space. Yes.
AUDIENCE: So given that the galaxies aren't actually just points, why can it be claimed that the galaxies themselves are not expanding?
PROFESSOR: OK. How do we understand the fact that the galaxies themselves are not expanding is what you're asking. And I'll give you a nutshell answer, and we might be talking about it more later. One should imagine that this starts out shortly after the Big Bang as an almost perfectly uniform gas, which is just uniformly expanding, everything moving away from everything else.
But the gas is not completely uniform. It has tiny ripples in the matter density, which are the same ripples that we see in the cosmic microwave background radiation today or at least the cosmic ripples that we see in the cosmic background radiation are caused by the ripples in the mass density of the early universe.
These ripples eventually form galaxies because they're gravitationally unstable. Wherever there's a slight excess of mass, that will create a slightly stronger gravitational field in that region pulling in more mass creating a still stronger gravitational field pulling in more mass, and eventually instead of having this nice uniform distribution with just ripples at 1 part in 100,000, you eventually have huge clumps of matter which are galaxies.
And as you go through this transition, from things being almost completely uniform and uniformly expanding to these lumps that form galaxies, the lumps are being formed by extra gravity pulling in the matter. And what happens is that the extra gravity that forms the galaxy overcomes the Hubble expansion. The matter that makes up the galaxy had been expanding in the early days, but the gravitational pull of the matter that forms the galaxy pulls it back in. So the galaxy actually reaches a maximum size and then, in fact, starts to get smaller, and then reaches equilibrium-- an equilibrium where the rotational motion keeps it at a finite size. Yes.
AUDIENCE: So in the diagrams that you're displaying up here, all the distance relations between galaxies are being preserved. Is that approximate or is it exactly [INAUDIBLE]?
PROFESSOR: Well, yeah. It's supposed to be just a photographic blowup as far as where the locations of the dots are. Yeah. I mean, is that what you're asking?
AUDIENCE: Well, like will there be equal distance between galaxies as a sub [INAUDIBLE] member?
PROFESSOR: Yeah. I think the picture shows that, doesn't it?
AUDIENCE: Well, the notches basically are spaces between [INAUDIBLE].
PROFESSOR: Oh, that's right. That's right. I haven't talked about the notches yet. The diagrams are supposed to show actual physical distances. So the physical distance between this galaxy and this galaxy is a little bit there and much more there. So that's how you're supposed to interpret that picture. But what I was about to get to-- and you've gotten there, so I'll continue-- is that the best way to describe this uniformly-expanding system is to introduce a coordinate system that expands with it, and that's what these notches are.
The notches are artificial things that we create, and we could think of them as just being labels on a map. Once we know that the expansion is uniform this way, we could take any one of these pictures and think of it as a map of our region of the universe. And we can then get to any other picture on the slide simply by converting units on a map to physical distances with a different scale factor.
So if Massachusetts was forever getting bigger and bigger and we had a map of Massachusetts, we would not have to throw away that map every day and buy a new map. We could handle the expansion of Massachusetts keeping the same map just crossing out the place in the corner of the map where it says 1 inch equals 7 miles, and the next day 1 inch equals 8 miles, and 1 inch equals 9 miles, and 1 inch equals 10 and 1/2 miles.
So by changing the scale factor on the map-- and the use of the phrase scale factor here has exactly the same meaning as it would have for a map-- you can allow yourself to describe an expanding system without ever throwing away the original map. And that's the kind of coordinate system that we will be using, and these are called comoving coordinates. And the idea here is that galaxies are at-- I'm sticking in the word approximately here because none of this is exact, but we'll be thinking of a toy model as if it were exact.
So galaxies are at approximately constant values of the coordinates, and the scale factor, which means the physical distance per coordinate distance increases with time. So that describes this all-important comoving coordinate system that we'll be using for the rest of the course to describe the expanding universe. Yes.
AUDIENCE: Do we have to do anything funny to the Lorentz transformations to account for the fact that the coordinates are now not moving at the same velocity relative to each other?
PROFESSOR: It depends on what questions you ask. There are questions where you do have to think carefully, and we'll have one of those shortly as probably an extra credit problem. But for most things, it actually makes things very simple, and you can ignore most of the complications of special relativity. And we'll try to think as we go along where we need to worry about special relativity and where we don't, and usually we don't.
So the key relationship is that the physical distance between any two points on the map, by physical distance, I mean what it is in the real world, miles if we're talking about Massachusetts, and this is miles between the real physical points, is equal to a time-dependent scale factor times the coordinate distance.
Now, here I'm going to use conventions for defining things that are slightly different from what are often used. A common procedure, which I think is done in most of the books, is to think of both the coordinate distance and the physical distance as being measured in normal distance units, meters, and then the scale factor is dimensionless, and it just tells you how much you have to blow up the map to make it match the physical distances.
I find it significantly easier to know what you're doing, as things go along, to label the map in units that are not centimeters, but are what is shown on the picture as notches. One advantage of that, logically, is that if you have different copies of the map that you've made on a Xerox machine with different scalings-- you have a big copy of the map and a little copy of the map-- they're all marked off in notches. The notches grow with the physical size of the page, so the scale factor is the same no matter which copy of the map you're using.
But most importantly, it allows you to, I think, do dimensional tests. The size of the map is clearly something that's unrelated to the actual length of a meter; it's just how many units you put on your map. So there's a clear and logical separation between what is meant by a certain number of units of distance here and a real meter by any standard.
So you can keep that straight by just imagining that your map is calibrated in some new arbitrary unit which is just special to the map, and I'm going to call that a notch. So notches are just arbitrary units that you use to mark off your map. And then the physical distance is, of course, measured in meters or any other standard unit of distance.
And then the scale factor is measured in units of meters per notch, instead of being dimensionless. And the basic advantage of this is that when you're all done, nothing had better have any notches in it if you're calculating something physical-- that is, physics should not depend on the size of your map. So you have a nice dimensional check to make sure that the notches drop out of any physical calculation that you try to do.
What I want to do next is to show you that this relationship leads to Hubble's law, and furthermore will allow us to figure out what this Hubble expansion rate is in terms of what the scale factor is doing. So it's an easy enough calculation. If we're looking at some object out there, and its physical distance l sub p is given by that formula, and we want to know what its velocity is, its velocity is, by definition, just the time derivative of that quantity.
So the velocity of some object, some distance out in space, is just equal to d dt of l sub p of t, and that will be da dt times l sub c, because l sub c is constant. On average, all our galaxies are at rest in this coordinate system, in this expanding coordinate system.
Now this could be written in a way that ends up being more useful by dividing and multiplying by a. So I could write it as 1 over a times da dt times a of t times lc. And the advantage of multiplying and dividing by a this way is that this quantity is again just l sub p, the physical distance.
So now we say that the velocity of any distant object is equal to 1 over a da dt times the distance to that object. And that is Hubble's law, and it tells us that the Hubble expansion rate, which is itself going to be a function of time, is equal to 1 over a times da dt.
And this allows us to illustrate the unit check that I talked about for the first time. Notice that a is measured in meters per notch, so the meters per notch here cancels the meters per notch here, and you just get inverse time. And the really important thing is that the notches have gone away. And again, notches have to go away in any calculation of anything physical, and that makes a nice check. And once you know how a of t is behaving, you know exactly how the Hubble expansion rate behaves. It's determined by a of t.
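In symbols, the argument just given is (with $l_c$ the coordinate distance in notches and $a(t)$ the scale factor in meters per notch):

$$
l_p(t) = a(t)\,l_c, \qquad
v = \frac{d l_p}{dt} = \frac{da}{dt}\,l_c
  = \left(\frac{1}{a}\frac{da}{dt}\right) a(t)\,l_c
  = H(t)\,l_p(t), \qquad
H(t) \equiv \frac{1}{a}\frac{da}{dt}.
$$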
I might mention one notational item. Nowadays, almost everybody calls this scale factor little a. In the early days-- it was first introduced by Alexander Friedmann, who invented the equations describing an expanding universe in the early 1920s-- he used the letter R, capital R. Lemaitre also used the letter capital R, and I guess Einstein probably used capital R, I'm not sure. And going into more modern times, Steve Weinberg wrote a book on gravitation and cosmology which still used the letter capital R, but that was sort of the last major work that used capital R for the scale factor.
The disadvantage of it is that this same capital R means something else in general relativity. It's the standard symbol for what's called the scalar curvature in general relativity. So to avoid confusion between those two quantities, nowadays almost everybody calls the scale factor little a. If you look at old 8.286 notes, I used to follow Steve Weinberg's textbook on gravitation and cosmology and call it capital R, but now it's hopefully all switched to little a.
OK. Next item. If we're going to understand what it would look like to live in a universe like this, we're going to need to know how to trace light rays through our expanding universe. And that turns out to be easy. If I let x be a coordinate, that means it's measured in notches. And if I imagine I have a light ray moving in the x direction, I can describe how that light ray is going to move if I can write down a formula for dx dt, which tells me how fast in the coordinate system the light ray travels.
Well the basic principle that we're going to use here is that light, in fact, always travels at the speed of light at some fixed value c, but c is the physical velocity of light, the velocity measured in meters per second. But dx dt is the velocity measured in notches per second because our coordinate system is marked off not in meters, but in notches. And that really is very important because meters are going to be constantly changing lengths relative to our notches, and we want to keep track of things in notches so we have a nice picture within our coordinate description of the universe that we can think about.
So we're going to want to know what dx dt is, but it's just a unit conversion problem. dx dt is the speed of light in notches per second. We know the speed of light in meters per second, c. So to convert is just a matter of dividing by the scale factor to convert the units of meters to notches. And here again it helps to have this meters versus notches, because it guarantees that you can't get it wrong if you just check your units.
So this is not really a question mark. It is just c divided by a of t, the scale factor. And we can make sure we got it right by checking our units. I'm going to use brackets to indicate units of. So we're going to work out what the units are of c over a of t, trivial problem, of course, but we'll make sure we got the right answer.
The units of c are, of course, meters per second. a of t, we said, is meters per notch. So the meters cancel, and we get notches per second. Now, I told you that you should never get notches in a physical answer, but this is not a physical answer. This is a coordinate velocity of light, so it does depend on our coordinates-- on what coordinates we've chosen. So it should certainly be notches per second, because x is measured in notches and t is measured in seconds.
So we put in the a of t in the right place. It does belong in the denominator and not the numerator. Yes.
AUDIENCE: Why aren't we worrying about the fact that as the universe expands, there's also a velocity component of the light ray from its position moving according to the Hubble expansion?
PROFESSOR: Right. The reason we don't worry about that is that special relativity tells us that all inertial observers are equivalent and that the speed of light does not depend on the cannon that the light beam was shot out of. So if I'm at rest in this expanding coordinate system, I'm not really an inertial observer because there is gravity in this whole system, but we're going to ignore that.
If we're really being rigorous here, we have to do the full general relativity thing. But I think the intuitive explanation is pretty obviously valid. It is, in fact, rigorously valid. If I'm standing still in this expanding coordinate system, I am an inertial observer. And if a light beam comes by me, I should measure its speed to be c, no matter where it started, no matter what's happened in the past.
So the conversion between my units of distance and physical units of distance-- my coordinate distances and physical distances-- is just a of t. So that's the only factor that appears, and this is completely rigorous. One can derive this in a more general context in general relativity. Here we're starting out with the premise that the light pulse travels at speed c; if one had the full theory of general relativity coupled to Maxwell's equations, we could really derive exactly how light rays travel, and it would give us this.
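In symbols, the coordinate speed of light, and the coordinate distance (in notches) that a light pulse covers between two times, are:

$$
\frac{dx}{dt} = \frac{c}{a(t)}, \qquad
x(t_2) - x(t_1) = \int_{t_1}^{t_2} \frac{c}{a(t)}\,dt .
$$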
So we have a very simple equation for how light rays travel. Now I want to spend a little bit of time, and this, I guess, will be the last topic I'll talk about today discussing the synchronization of clocks in this world system, in this cosmological coordinate system. In special relativity, you know that it's hard to talk about synchronizing clocks at large distances. The synchronization of clocks depends on the velocity of the observer. That was one of the principles we learned about when I wrote down those three kinematic properties of special relativity.
So in general, in the context of special relativity, there is no universal way of synchronizing clocks. You could synchronize clocks with respect to one observer, but then they would not be synchronized with respect to another observer moving with respect to that observer. In this case, we have perhaps even a further complication-- although in the end everything is simple-- but we have the further complication that the different clocks that we're talking about, which are clocks carried by these observers that are stationary in our comoving coordinate system-- clocks carried essentially by galaxies that are uniformly expanding-- all these clocks are moving relative to each other, because the Hubble expansion tells us that.
So the notion of trying to synchronize clocks seems a bit formidable. It turns out, however, that we can synchronize clocks, and one can develop a notion of what we call cosmic time, which is the time that would be right on all these clocks-- where by all these clocks, I mean all the clocks that are stationary with respect to the local matter, in other words, stationary with respect to this comoving but expanding coordinate system.
So why can we synchronize clocks? What we're using as our core assumption, which is what makes everything simple, is that the model universe that we're building is homogeneous, and that means that what I would see, if I were living in this universe, would not depend on where I was. So if I were living on galaxy number one and took out my stopwatch and timed how long it took for, say, the Hubble parameter to change from Hubble's value to the current value-- as an example, any two numbers-- if I were living any place else and timed the same thing, how long it took for the Hubble expansion rate to change from A to B, I would have to get exactly the same time interval; otherwise, it would not be homogeneous. Homogeneous means everybody sees exactly the same thing.
So we all have, no matter where we live in this universe, a common history, and that means that the only thing we don't know is how to synchronize our clocks-- what time on my watch might correspond to what time on your watch. But if we imagine that we could send signals, or that we're some global observer watching the whole thing, then we could just tell each other, let's all set our clocks to noon when the Hubble expansion rate is 500 kilometers per second per megaparsec. And then we would have a well-defined synchronization.
And once we synchronized our watches that way, if we each looked at how the Hubble expansion rate changed with time, we would get exactly the same function of time, by this principle of homogeneity. None of us could see anything different as long as we're measuring time intervals, and we've fixed it so now we're measuring nothing but time intervals, because we've arranged to all set our clocks to the same time at some particular value of the Hubble parameter.
So to synchronize, we can ask what are the options. I mentioned the Hubble parameter. That's certainly one parameter that can be used in principle to synchronize clocks throughout our model universe.
You might wonder if we can use the scale factor itself to synchronize times. And the answer there, I would say, is no, because of this ambiguity of the notch. I have no way of comparing my notch to your notch. We can compare physical distances, because they're related to physical properties-- the size of a hydrogen atom is a certain physical size, no matter where it is in this universe.
So we could use hydrogen atoms to measure meters, and we would all be talking about the same meters. And we could use those meters to define seconds by how long it takes light to travel through a meter, and so on. So meters and seconds, we can all agree on because they're related to physical phenomena that we can all see and that will be the same everywhere in our homogeneous model universe. But notches, not so, everybody gets to make up his own notch. It's just the size of the map he happens to draw.
So we cannot compare scale factors and say, we'll set our clocks to a certain time when both of our scale factors have a certain value. We would get different synchronizations depending on what choices we've made about how to define a notch. So the scale factor does not work as a synchronization mechanism; the Hubble expansion rate does.
Also, we haven't really talked about an ideal cosmic microwave background, but we certainly talked about it: the cosmic microwave background has a temperature which is falling as the universe expands, so that could be used to synchronize clocks as well. I might mention, in the last 30 seconds, one interesting phenomenon. For our universe, the Hubble expansion rate is changing with time, the temperature is changing with time; there's no problem talking about this synchronization.
But if you're talking about different kinds of mathematical models of the universes, you can imagine a universe where the Hubble expansion rate is just constant, and in fact that is a space that was studied very early in the history of general relativity. It's called de Sitter space. And it's approximately what happens during inflation, so we'll even be talking about de Sitter space later in the course.
In de Sitter space, the Hubble constant is absolutely constant, so at least one of the mechanisms I mentioned for synchronizing the clocks goes away. There's also, in fact, no cosmic microwave background radiation in pure de Sitter space, so that goes away. You could ask, is there anything else? It turns out there is not, so you really can construct a well-defined model of the universe, the so-called de Sitter space, where there really is no way of synchronizing clocks.
And you can really show that you could make transformations so that if you synchronize the clocks one way, you could make a symmetry transformation that takes all those clocks out of synchronization, and otherwise the space would be just as good as what you started with. So the synchronization is subtle, and it depends on having something which actually changes with time, but that will be the case for our real universe and for the model universes that we'll be talking about.
So I'll stop there. See you folks on Thursday.