Stay in the Loop

We are thrilled to extend a warm welcome to you as a valuable member of our vibrant crypto community! Whether you're an experienced trader, a crypto enthusiast, or someone who's just getting started on their digital currency journey, we're excited to have you onboard.

Read & Get Inspired

We're delighted to have you here and embark on this exciting journey into the world of Wikibusiness. Whether you're a newcomer or a seasoned explorer in this realm, we're dedicated to making your experience extraordinary. Our website is your gateway to a treasure trove of knowledge, resources, and opportunities.

PrimeHomeDeco

At PrimeHomeDeco, we believe that your home should be a reflection of your style and personality. Our upcoming website is dedicated to bringing you a curated selection of exquisite home decor that will transform your living spaces into elegant sanctuaries. Whether you're looking to revamp your living room, add a touch of sophistication to your bedroom, or create a cozy and inviting ambiance in your dining area, we have just the right pieces for you.

Is the Hubble tension real? Exclusive interview with Wendy Freedman


Sign up for the Starts With a Bang newsletter

Travel the universe with Dr. Ethan Siegel as he answers the biggest questions of all.

One of the greatest discoveries in all of modern science was that of the expanding Universe. It led us to the notion of the Big Bang, gave us insight into our cosmic origins and ultimate fate, and helped us ultimately discover the unexpected existence of dark energy. While the quest to measure the expansion rate appeared to reach a satisfying conclusion in the early 2000s, improved, more precise data has instead revealed a conundrum for modern cosmology: the two main methods of measuring the expansion rate lead to different, mutually incompatible results for just how quickly the Universe is expanding.

Many — including me — have wondered whether this was a real problem, or whether it was just the result of unaccounted-for errors and uncertainties. Recent advances in astronomy, including type Ia supernova studies, nearby parallax measurements, and de-crowding of star fields in the JWST era, have only strengthened this incompatibility. What was initially called the Hubble tension has now, in 2025, been called a full-blown crisis from many in the field, as the discrepancies have surpassed the 5-sigma “gold standard” threshold for discovery. If true, perhaps there’s some sort of new physics at play in the Universe.

But not everyone agrees that the results are as robust as those researchers claim. Leader of the CCHP (Carnegie-Chicago Hubble Program), Wendy Freedman, is perhaps the most vocal critic of these claims, and is instead much more cautious with her conclusions. Is she correct, or is she being overly cautious? I had the chance to speak with her in this exclusive interview for Starts With A Bang, transcribed and reproduced here (with illustrations inserted by me) in full.

A graph comparing the Hubble constant (H0) from various measurements, with error bars. The x-axis ranges from 66 to 74 km/s/Mpc. Methods include CMB, DESI, SDSS, and SH0ES.

A large class of early relic methods, involving either the CMB and/or BAO (with a specific focus on DESI publications), all favor a Universe expanding at ~67 km/s/Mpc. Although there are a few groups that have outlier values for distance ladder measurements (including the CCHP group, shown as the second-from-bottom point), the strongest measurements, from the SH0ES and Pantheon+ collaborations, for instance, favor a value of ~73 km/s/Mpc, as shown here with smaller error bars. The two sets of values disagree at more than 5-sigma significance.

Credit: DESI collaboration, arXiv:2404.03002, 2024

Ethan Siegel (ES): Thank you, Wendy, so much for taking the time to talk with me. I know you’re very busy, you’re a very high-profile person, and I’ve been following your work since I first started as an astrophysicist. I began going to grad school in 2001, which is the same year that the key result from the HST key project came out. And it’s kind of interesting, right? Because when that came out, that in itself was revolutionary. That was something where, you know, after decades of having teams arguing the Hubble constant between 50 and 55 (km/s/Mpc), or the Hubble constant is more like 90 or 100, the results came in. And at that time you said, like, hey, look, the Hubble constant is 72 with an uncertainty of about 10%, and everyone who was convinced it was either 50 or 100 is wrong. Can you talk a little bit about what it was like to make that discovery and lead that team, and then, you know, what was the sort of response from the community that you received?

Wendy Freedman (WF): Yeah, it was a long effort, and it was a culmination of a lot, I think, of progress starting with the availability of CCDs, which made a huge difference, and which, of course, flew on Hubble. And I think, you know, our goal was to make a measurement to an accuracy of 10% and to not just use a single distance indicator, but to use many, and to get a more robust estimate of the overall systematics. And so a lot of work we had done from the ground before that was developed to try and get around and correct for—minimize—the types of systematics that had really plagued the distance scale. So things like dust, and the availability of CCDs, and the possibility then of making multi-wavelength measurements, which is something that I had done back in my thesis, which for the first time allowed us to correct for the dust, which was one of the big systematics that people had just been unable to deal with before that.

So the difference between 50 and 100, first, was hugely frustrating because we had these little h’s [ES note: little “h” is the Hubble constant, or the expansion rate of the Universe in km/s/Mpc, divided by 100] —which you probably remember—which, you know, values between 0.5 and 1. There were so many things that we just couldn’t determine very accurately because we didn’t know the value of the Hubble constant to within a factor of two. So the response of the community was, I think, really very positive. And I think we had done so much to try and quantify what the remaining uncertainties were that I think it just became… immediately, I think, accepted. And for a very, very, very brief time, it looked like cosmology was without problems.

And then WMAP followed soon thereafter, and there seemed to be agreement between the microwave background estimate that you infer for the Hubble constant and what we had measured locally. So in this brief period, we didn’t have controversy in cosmology, which, as you know, was short-lived. And we’re living through another period of that now. But each time we learn something new. And I think there were a lot of things that fell into place around that time, 2001. You know, one of the difficulties with having a somewhat high value to the Hubble constant—you know, 70 was still high [ES note: still gave you a low value for the total age of the Universe] compared to the age that you would determine if you compared it with the ages of globular clusters—and that was one of the real problems before. Why some people didn’t like a high value of the Hubble constant was that you ended up with the Universe being younger than the objects in it, which was a problem. But with the discovery of the acceleration and the cosmological model pieces of ΛCDM that were coming together really did stick, and I think there was that immediate sigh of relief on the part of the community that we had solved the factor of two debate.

Scatter plot showing historical Hubble constant (H0) measurements from 1965–1990, with data points for De Vaucouleurs, and Sandage & Tammann indicated by blue triangles and green diamonds.

These data points, sorted by year, show different measurements of the expansion rate of the Universe using the cosmic distance ladder method, with the data points falling into to main groups: one clustered around 50 km/s/Mpc and one clustered around 100 km/s/Mpc. The results of the Hubble Key Project, released in 2001, are shown with the red bars.

Credit: J. Huchra, 2008

ES: I remember those days well, because that was when I was starting my astrophysics career. So when the data came in and people were still using “little h squared,” I would just take “little h squared” out of it and put in ½, and this made a lot of people very upset. But also it made some people just smile knowingly, like, oh no, that’s what it is. You know, when we were living in the WMAP era, I remember there were these big degenerate plots that would come out that would show sort of, you know, if [the matter density] is this, and [the dark energy density] is this, then, you know, here’s the allowable parameter space. And as you look along that line, you’d have like, oh, and at one end where you have high dark energy density and low matter density, that corresponds to what you need if you have a value of the Hubble constant that’s at one extreme. And if you have sort of the other end of the curve, you have a value of the Hubble constant at the other extreme.

And I remember when Planck came out with their data, right, they said, oh no, we’re up at the one extreme. We’re up at the one extreme where we have—like, that’s what it is—the Hubble constant is the extreme value on the low end of 67 km/s/Mpc, with an uncertainty of like plus or minus one km/s/Mpc.

And I had thought that when that came out, that was the answer. If only that were the end of the story. Can I ask you what your feeling about that was when that came out? I know this is taking us back like ten, twelve years ago, but I think to advance the story, I’d love to sort of go chronologically like that and ask you, as someone who was actively involved at that time, what were your thoughts when we achieved that milestone?

There are many possible ways to fit the data that tells us what the Universe is made of and how quickly it’s expanding, but not all combinations of values are admissible. Here, the graphs show dark energy density (y-axis) and matter density (x-axis) with the color-coding showing which values of the expansion rate will ensue as a result. Note that large dark energy densities and small matter densities lead to higher values of the expansion rate, while less dark energy and more matter leads to lower such values.

Credit: Planck Collaboration; Annotations: E. Siegel

WF: I think, you know, looking at the Planck data, you look at the acoustic oscillation spectrum, and it’s just beautiful. I mean, WMAP was the first step in it. You know, for what it could do with the first and second and third peaks—it was also amazing. And I think my feeling—and I still have this feeling—is that they have achieved an accuracy, or at least a precision based on a model, that is something we haven’t yet achieved in measurements of the local Universe. And so their value, 67.4 ± 0.5, is better than 1% precision—for a model, which is ΛCDM, which we have to test.

Right? I mean, the beauty of that measurement is that it sets this up as a test of, okay, is ΛCDM the correct model, or is there something missing from it? And how do you figure out if there’s something missing from it? You would make a measurement locally and see whether, given that model—which is a predictive model that tells you what the expansion rate should be today—do those match?

And so, given the uncertainties in the local distance scale and the fact that you’re using such different methods for making these measurements, or inferring this from the model and measurements from the microwave background, the fact that they agreed to better than 10% is—it’s somewhat amazing. I mean, it didn’t have to happen that way, right? When you think about the evolution of the universe that we’re witnessing, there are a lot of pieces that we don’t yet understand, having to do with what is the dark matter, what is causing this acceleration. And yet these two really different methods are giving values that agree to better than 10%. So my first reaction to that was, this is amazing. And I still think that.

I think the onus is on us. I think that Planck has set the bar extremely high. And if we’re going to really test this model, then we need to be able to make measurements at that level—1% level—to convince ourselves. You know, extraordinary results require extraordinary evidence. And so we need to understand the significance of any difference. Certainly the history of the subject keeps us cautious, and should keep us cautious, because there are a lot of gotchas and potential unknown unknowns that could still be contributing to systematic effects. So the onus really, again, is on us to try and show that we’ve eliminated systematics and that we really can provide a check at the 1% level of the Planck results. So I haven’t changed my feeling about that. I think it’s really important that we do that.

constraints dark energy omega matter lambda

Three different types of measurements, distant stars and galaxies (from supernovae), the large-scale structure of the Universe (from BAO), and the fluctuations in the CMB, tell us the expansion history of the Universe and its composition. Constraints on the total matter content (normal+dark, x-axis) and dark energy density (y-axis) from three independent sources all appeared to converge onto a single value in 2010 or so, but recent endeavors have revealed slight inconsistencies.

Credit: Supernova Cosmology Project, Amanullah et al., ApJ, 2010

ES: I like that line of thought too. For me, it’s when you have many different independent lines of evidence that all converge onto the same answer, that you can be confident that your picture, your model of the Universe, is accurate. And this, to me, is pretty interesting because we talked about what the Hubble constant is, and how you measure it from the cosmic microwave background. And, you know, to not 10% precision, but 1% precision. So I think one of the big goals of the observational cosmology community has been to say, well, using the classical distance ladder method—where we start locally and say, oh okay, I’m going to look at individual stars within the Milky Way that I can measure, and I’m going to get their distances directly from stellar parallax from—like, now we have ESA’s Gaia mission and that nails them, and so that’s really great.

And then you say, okay, I’m going to look for these same types of stars that I find in the Milky Way in other nearby galaxies. Because I understand how these stars work, and I know the brightness-distance relationship of the Universe, I’m going to say, “how far away are these galaxies?” And then I look for some other property of these galaxies, like related to their rotation, or their surface brightness fluctuations, or maybe they have type Ia supernovae within them, that I can then build out this distance ladder from rung to rung to rung, and go measure the Universe and see if I get a result that agrees. And most of the results that have come out don’t agree with the Hubble constant you get from Planck or the CMB.

You’ve been involved in that sort of conversation in the community since they’ve been having it. So I’m really curious what your general take on that approach is. And then as a follow-up, if you have any special problems or puzzles that you think are important to focus on, can you talk about those?

A series of different groups seeking to measure the expansion rate of the Universe, along with their color-coded results. Note how there’s a large discrepancy between early-time (top two) and late-time (other) results, with the error bars being much larger on each of the late-time options. Although these two classes of measurements give incompatible results, no one knows the resolution to why the Universe appears to expand differently dependent on the method used to measure the expansion.

Credit: L. Verde, T. Treu & A.G. Riess, Nature Astronomy, 2019

WF: Yeah. So, you know, as I said, and we did in the Key Project, and it’s been a theme of my work all along, is to have many different ways of coming at the problem. And I think we will not get a handle on overall systematic uncertainties unless we do that. Because any individual method will have its own systematics, and there’s no way—you can make a measurement over and over and over again using the same technique, and you’re never going to discover what the systematics are. And so, this is one of the—so in my current programme, our Chicago-Carnegie Hubble Program, is using JWST to use three different methods:

  • Cepheids, which you know the classic since Leavitt,
  • the tip of the red giant branch, which we’ve been working on for many years,
  • and a new method using carbon stars that we call the JAGB method, developed by Barry Madore and me,

to use for extragalactic distances. It was actually developed by others in the Large Magellanic Cloud.

So, measuring the distances to the same galaxies using these three different techniques to try and understand: how well can we measure the distances to individual galaxies? That’s the second rung of the distance scale. And as you alluded to, we need methods that are nearby, like stars in the Milky Way, for which we can measure parallax. Then, in the Large Magellanic Cloud and the Small Magellanic Cloud, we have detached eclipsing binary measurements, which are also geometric. And then we have this maser galaxy, NGC 4258, which has a supermassive black hole surrounded by these water megamasers there in the centre—water masers that are in Keplerian rotation around it—and offers another geometric means.

So, we have a very small number of calibrators that set the absolute distance scale. Then we move out to the realm where we have the Hubble Space Telescope and James Webb, that can also make measurements of these stars that we can calibrate using these geometric techniques. And then those galaxies in the middle rung have Type Ia supernovae in them, say, and then we can step out into the far-field Hubble flow. And so again, I think we need to keep in mind that if we’re going to make a measurement to 1%, then every step along the way we better have errors that are less than 1%, if cumulatively we’re going to have a total error of 1%, which is the bar again set by Planck.

cosmic distance ladder

The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each “step” carries along its own uncertainties, especially the steps where the different “rungs” of the ladder connect. However, recent improvements in the SH0ES distance ladder (parallax + Cepheids + type Ia supernovae) have demonstrated how robust its results are.

Credit: NASA, ESA, A. Feild (STScI), and A. Riess (JHU)

Now, right now—you made the statement that most of the measurements of the Hubble constant are coming in high and not agreeing with Planck—and I’m not sure that the data are actually showing that. I think many of the methods, first of all, have very large uncertainties compared to Planck. They’re more at the 4 or even 5% level. And then, many of them depend on, for example, calibrations that have already been done. So distances—if they’ve been calibrated by Cepheids that were giving a higher Hubble constant—then they also will reflect that higher Hubble constant.

We have things like masers, and unfortunately, the Universe hasn’t been kind to us in the availability of galaxies that are close enough and that have these amazing objects. So there are only five of them beyond NGC 4258—the calibrator—to measure the Hubble expansion. And so, you need a model of the velocities, because they only go out to 100 megaparsecs. So again, that leaves you with a larger uncertainty, which is consistent with the Planck measurements.

If you look at the cumulative numbers of papers that have measured the Hubble constant as a function of time and you see the various values in the literature, it’s not true that most of them are high. They are all over the place. And in fact, it’s the measurements that are high which could be an indication that there’s something systematic, potentially with the calibration.

A 2023-era analysis of the various measurements for the expansion rate using distance ladder methods, dependent on which sample, which analysis, and which set of indicators are used. Note that the CCHP group, the only one to obtain a “low” value of the expansion rate, is only reporting statistical uncertainties, and does not quantify their systematic uncertainties at present. There is overwhelming consensus agreement that the expansion rate is around 73 km/s/Mpc using a wide variety of distance ladder methods.

Credit: L. Verde, N. Schoeneberg, and H. Gil-Marin, Annual Reviews of Astronomy and Astrophysics (accepted), 2023

ES: Can I ask you a clarifying question here? Because you’re saying that it’s only the Cepheid papers that give you a high Hubble constant using the distance ladder, but I don’t think that’s true—even in your own work, which has, to my knowledge, the lowest value of a Hubble constant that’s come out in recent years from a distance ladder method of just right around 70 km/s/Mpc. That’s still high compared to Planck. It’s outside of the error bars of Planck. Yeah, most of the community is getting results that are more around 73, 74, and a few even get as high as 76. But if you get lower, like 72, as far as I’m aware, there isn’t really any room for Planck to push the Hubble constant up into the 70s. So, would you agree that all of the distance ladder methods, including your own, are at least favoring a Hubble constant of 70 or higher, whether they use Cepheids or not?

WF: Yeah, I would agree with that statement. But I would say that I think, you know, most of the results are varying between 66 and 76, say, and so I wouldn’t say that most of the Hubble constant values are high. That’s the part where I would say…

ES, interrupting: Do you have an example, other than Planck—like CMB or BAO—that starts at that same early relic position that gives a value below 70? As far as I’m aware, there aren’t any modern, in the last five years, distance ladder papers that give a Hubble constant of below 70. Can you point to one?

WF: Yeah, I’m going to just see if I can quickly bring up—here, and I’ll show you what I’m talking about.

Scatter plot of Hubble constant (H₀) values by year of publication, highlighting Cepheid-based measurements in red and other published values in blue from 2000 to 2025.

This graphs shows the number of papers that have measured the Hubble constant as a function of time, with a variety of Cepheid measurements highlighted with the red points.

Credit: J. Huchra/I. Steer (unpublished), private communication

WF: There we go. So this was a database that was started by John Huchra when he was still alive, of all published Hubble constant measurements…

ES: Oh, but this doesn’t break down whether this is distance ladder or non-distance ladder. So you’re including non-distance ladder measurements in this graph? This includes BAO measurements and CMB measurements too.

WF: What I’m saying is, there’s not a bimodal distribution of measurements here. I agree with your statement that, you know, most of the local ones are getting 70 or higher. And where I’m making the distinction is that most of those measurements don’t have anywhere near low uncertainties, as 1%. They’re more like three, four, or five percent—or higher.

ES: Right. But if we look at the ones with the best uncertainties, aren’t they giving us that story that I just relayed to you? If I looked at, let me give you some examples, let’s look at the Cepheid graphs, or let’s look at the tip of the red giant branch graphs, or let’s look at the JAGB graphs, or let’s look at the Mira variables—from those, those all give values of between, I’ll say, 70 and 76.

WF: Okay. But all I’m saying again is that those are nowhere near 1% uncertainties.

ES: Yeah, I would agree with that.

WF: And I think that right now… Okay, so, if you take a note of 70 and an uncertainty of 3%, which I think is a conservative uncertainty—and I think the claims of 1% are, at this point, hard to justify.

ES: You resort to the, you know, there must be unknown unknown systematics in there. Is that your sort of line of thinking about why the claims of 1% are no good?

expansion of the Universe

Back in 2001, there were many different sources of error that could have biased the best distance ladder measurements of the Hubble constant, and the expansion of the Universe, to substantially higher or lower values. Thanks to the painstaking and careful work of many, that is no longer possible, as errors have been greatly reduced. New JWST work, not shown here, has reduced Cepheid-related and period-luminosity errors even further than is shown here.

Credit: A.G. Riess et al., ApJ, 2022

WF: I think it’s one way you could say that. I think the other is that, if you look at—okay, so just to take an example, in 2019 we did this study with Hubble to measure the tip of the red giant branch. And we did that in galaxies that had type Ia supernovae, and we could compare with the Cepheid distances to those galaxies. And there was a big disagreement at the 3% level—it was 0.06 magnitudes. And the more recent measurements of the Cepheids and tip of the red giant branch—so now we have JWST also, and, you know, there was reanalysis, there was Gaia data, there was reanalysis of the Hubble data—well, the Cepheid distances came into agreement with the tip of the red giant branch. They now agree extremely well. They’re at the 1% level or so, say, 1.5%.

So the distances changed. But—and if that correction to the Cepheid distances had been made, you would get a Hubble constant of something like 71. Now at the same time, the apparent magnitudes of the supernovae changed. And so they compensated for the differences in the Hubble constant. And so the Hubble constant stayed at 73 based on Cepheids. But in each case, the change was the size of the quoted error bar.

So you could say, okay, you can’t correct for unknown unknowns, and, you know, you can’t do anything. But you can look—you know, it’s sort of Bayesian philosophy, right? Each time you learn something, you can see that you’re not yet in an era where you’re looking at 1%—you’re changing by large steps. And while you’re in that phase of changing by large steps, it’s probably an indication that there are systematics still that need to be accounted for.

And so, you know, this question of ‘is it five sigma?’ We’re looking at an uncertainty on an uncertainty, and we’re talking about a one-in-two-million chance, if it’s five sigma, that this happens by chance. And when you see excursions that are this large, it’s hard to look at a statistic that says, this is a one-in, you know, 1.7 million, or whatever the five-sigma definition is, and say that you’re there yet. That’s what I’m saying.

A graph illustrating the tension between JWST and Hubble in terms of different types of work.

By enabling a better understanding of Cepheid variables in nearby galaxies NGC 4258 and NGC 5584, JWST has reduced the uncertainties in their distances even further. The lowest points on the graph show the estimate for the distance to NGC 5584 from the expansion rates inferred from the distance ladder (left side) and what’s expected from the early relic method (right side). The mismatch is significant and compelling, and the uncertainties are tiny compared to the differences between the two methods.

Credit: A.G. Riess et al., ApJ submitted/arXiv:2307.15806, 2023

ES: Well, I do think it’s important to quantify your systematics. And in fact, I remember from your paper that came out last year—which was the first Hubble constant from the JAGB method paper that I remember—I think you had about ten galaxies on the second rung that you had measured JAGB stars in. And, you know, you reported, “Here’s the Hubble constant we get, here’s our statistical uncertainty,” and you didn’t have an associated systematic uncertainty with it. Now we come to today and you have more galaxies that are selected with HST and JWST data. I think you’re up to 24 galaxies in your latest paper.

I wanted to ask you, before we get into the methods and the analysis of this, my general understanding is the way the distance ladder works in general—when you get to the second rung—is you want a large number of galaxies that have both the type of star you’re looking for and also have something about them that you can use to go more distant into the universe. When I look at the galaxies that have, for example, a type Ia supernova in them, and also that have the HST or JWST data that would allow you to do the JAGB method, I don’t get 24 galaxies you’ve chosen—or rather, I do get them—but I also find that there are 11 other galaxies you could have used.

Can you talk about why you chose the 24 you chose and not the other 11 that could also have been included in this analysis?

cepheids and SN ia together

As recently as 2019, there were only 19 published galaxies that contained distances as measured by Cepheid variable stars that also were observed to have type Ia supernovae occur in them. We now have distance measurements from individual stars in galaxies that also hosted at least one type Ia supernova in 42 events, 35 of which are independent galaxies with excellent Hubble imagery. Those 35 galaxies are shown here.

Credit: A.G. Riess et al., ApJ, 2022

WF: Alright, I’m not exactly sure which 11 you’re talking about, but let me tell you what led to the ones that we chose. Our proposal—the sample selection—was the easiest, nearest galaxies in the sample for which we could make measurements of Cepheids, tRGB, JAGB stars. And then we were looking specifically to avoid the effects of crowding—you know, serious effects of crowding—and we didn’t want to go out to the distant sample. So that’s where we started. That was the basis of the choice.

Then, so many of the galaxies that have contributed to this 24 also have HST observations. These are also relatively nearby galaxies. And so these are the ones that so far have had—and we also went to the archives—data that were available in the archive. So, there are some galaxies for which SH0ES had obtained observations, and we independently measured those, and those went into this analysis. So that’s what we’ve put into the current paper.

ES: The reason I ask that—because I think this is really important, especially if I’m going to be presenting this for other people to understand—is: when you are taking any sample of galaxies and you’re saying, ‘I’m basing my analysis on this sample,’ you need to make sure you’re using a fair sample. That you’re not saying, like, okay, I know for the galaxies that I’m going to get a spread. You know, for any one galaxy, I can’t use one galaxy to measure the Hubble constant—that would be crazy—because galaxies follow a distribution in terms of any sort of set of properties you want to look at. And so I know that if I said for the smallest galaxy, for the lowest Hubble constant galaxy in your sample, if I used just that one galaxy, I would get something below 60. And if I used just the one galaxy that would give the highest Hubble constant, I would get something above 80.

My worry is that by having such a small number of statistics that don’t include the full suite of galaxies you could be using, are you biasing yourself by selecting the low Hubble constant wing of the distribution and deselecting the higher Hubble constant wing of the distribution? Because the other papers that I’m looking at, they use larger samples. And the galaxies that you’ve included are also in their samples. But the ones that you haven’t included are the ones, in fact, that seem to be biased towards higher Hubble constants.

Bar graph comparing H0 values from SN Ia subsamples, highlighting CCHP-selected and not-included samples, with annotation on JWST contributions by Wendy Freedman and a note on sample completeness amid the ongoing Hubble tension.

This chart shows the 35 possible galaxies to choose from that have resolvable stars (Cepheids, tip of the red giant branch, or JAGB) and also were host to at least one type Ia supernova. The light red galaxies show which galaxies were included in the CCHP results; the dark red were excluded.

Credit: A. Riess, CMB@60 Meeting, 2025

WF: Well, okay, let’s unpack this. So, I told you what led us to the sample that we’ve analyzed, and that is to go for the closest sample that will be least affected by crowding effects. And we’ve included every one of the galaxies that are available on the archive to do that. There are other published measurements—we didn’t have access to those data yet. When they become public—and many of them have now, in the last month—we’re going to be analyzing those. That’s coming.

But I can tell you exactly how we chose our sample: they’re the nearest ones, and least likely to suffer from crowding. Now, there is a study out there that says that crowding has been eliminated at an eight-sigma level. I don’t even know what that means in terms of billions—that, you know, the chance that there’s no crowding.

cepheids jwst hst NGC 4258

This image shows several Cepheid variable stars with different periods within nearby galaxy NGC 4258: an important galaxy for Cepheid and distance calibrations. The bottom 6 rows show the same stars as measured by both Hubble (grey labels) and JWST (purple labels) at various wavelengths. The superior resolution in JWST images reduces prior Hubble errors by significant, substantial amounts while validating and remaining consistent with prior results.

Credit: A.G. Riess et al., ApJ submitted/arXiv:2307.15806, 2023

ES: Oh wait, I think I know that. I think I read that paper. That was the one that said, ‘We know crowding is an issue in the Hubble data’—this was for Cepheids. So we know crowding is an issue in the Hubble data. So now that we have the JWST data for those same fields, we can resolve what appeared to be unresolved in the Hubble data. We can resolve this in the JWST data. And what they found when they did the JWST data was a confirmation of, yes, what we did in the Hubble papers was correct. And I had thought that was what eliminated the crowding issue.

WF: Okay, let’s take a look at that plot. And… can you see it?

Graph comparing Cepheid distance measurements in SN Ia hosts using JWST and HST, highlighting data points, error bars, and lines from 10 to 40 Mpc to illustrate their impact on the hubble tension studied by Wendy Freedman.

This image shows the Cepheid distance measurements in host galaxies that also have type Ia supernovae within them. The dashed-dot line may be ruled out at 8.2-sigma, but the galaxies located farther away than 20 megaparsecs may have other systematics associated with them.

Credit: Screenshot from interview with W. Freedman, private communication

WF: Okay. So, here’s a galaxy. It’s at 13 megaparsecs. That’s our backyard, right? If we can’t do that with Hubble—let alone JWST—we can’t do anything, right?

ES: Well, 13 megaparsecs is pretty far for a Cepheid, right? 13 megaparsecs—that’s like 45 million light-years. And that’s kind of, you know, that’s kind of at the upper limit of what you’re able to do with Hubble. Right?

WF: So, you know, it’s maybe pushing Hubble, but Hubble can certainly do it. And JWST data have confirmed that Hubble could measure distances at 13 megaparsecs. Okay, and then we’ve got two galaxies here that are around 20 megaparsecs. Sixty percent of the SH0ES sample is at a greater distance than 20 megaparsecs. These two galaxies showed no difference in the reanalysis of the SH0ES data. They’re—NGC 5584 is a face-on galaxy, again JWST, that should be a really good measurement. You know, they were the two galaxies that agreed the best. Some of the excursions when they reanalyzed their data were 0.3 magnitudes—15% in distance.

Okay, so 40 megaparsecs—there’s still 25% of the sample that is more distant than that. This line says that, okay, if crowding increases linearly with distance—which nobody is saying would happen—then you’d have a 0.3 magnitude effect at 40 megaparsecs. But the point is, if there’s even a 0.03 average offset in the more crowded galaxies, that’s the entire size of the error bars. The entire size. So, I think it’s early to make a statement that crowding has been ruled out at an eight-sigma level. That’s my only point. No tests have yet been carried out in the more distant galaxies. Those are the ones we haven’t measured yet. I think everybody needs—we need those data to empirically determine: is that the case?

And I think there’s still a possibility that—again, if we’re going for 1%—we need to rule that out. And those are the galaxies you’re talking about. They’re more distant. We’ll see. We’ll see. So it’s just an open empirical question, but we need to address it.

ES: I think that’s pretty important too. I mean, when you’re talking about, hey, we want to use the galaxies that we have and measure their distances and measure the indicators that are in them—whether it’s Cepheids, whether it’s the tip of the red giant branch, whether it’s JAGB—you know, these AGB stars that are at a certain point in their evolution, you’ll want that. Do you feel that the community has been good in providing the raw data to everyone, available publicly, for Cepheids, for the red giant branch, for JAGB, to say, “Hey everyone, here’s the data, you can do the calibrations for yourself, you can do the analysis for yourself?” Or is some of this data that hasn’t been made public? And how do you feel about that?

A grayscale Hubble Space Telescope near-infrared image shows a circled area in NGC 7250, labeled as a Cepheid at 20 Mpc distance—a crucial object in resolving the Hubble tension highlighted by Wendy Freedman; inset includes telescope photo and text box.

This screenshot shows a single Cepheid variable star as imaged with the Hubble Space Telescope in galaxy NGC 7250 at a distance of 20 megaparsecs. While JWST may be able to resolve such a Cepheid individually, the Hubble data is much more ambiguous.

Credit: Screenshot from interview with W. Freedman, private communication

WF: I think, you know, some of it has and some of it hasn’t. I mean, we did—here’s an example of the effort—observed in the H-band, a galaxy 20 megaparsecs away, right? I mean, that’s not, completely, you know, put your heart at rest that you made a measurement at 1%. And like I’m saying again, 60% of the sample is more distant than this. And most of the measurements depend on H-band data. So, we asked, could we have the sample of Cepheids before there had been, you know, this elimination of the outliers? And the answer was, “It’s in the archive.” So that’s what we’re doing—we’re reducing the data in the archive. That data was not made available.

Now, I think there’s real value in having independent groups reduce the same data; that I think is perfectly reasonable. We don’t do enough of it in science in general. And, you know, a lot of studies have used the data that have already been published and say, “We’ve analysed it a different way, hey, we get the same answer.” But no one has gone back to the actual, you know, raw data and reanalyzed it. And so we’re in the midst of doing that now.

And as I said, the reanalysis by the SH0ES team changed their distances. They now have distances that agree with the tip of the red giant branch. And JAGB distances agree well with the tip of the red giant branch distances. I don’t want to give the impression that we’re not making progress. I think we’re making huge progress. And on a galaxy-by-galaxy basis, we’re really narrowing in on where the differences are coming from.

And I think that what we’re learning most recently is that, okay, the distances are now agreeing, but we actually have real differences in the supernova samples. And those are turning out to be interesting. And they’re also turning out to be interesting from the point of view of the DESI studies—you know, which supernova samples give what answers. There are real challenges in the supernova data, in fitting the light curves, in accounting for the decline rate, and mass-step corrections, and the color corrections, etc. There’s still additional scatter, and there are differences in how people fit the supernovae and how they calibrate the supernovae.

Graph depicting probability density vs. Ωm with five overlapping colored curves: dashed black, blue, orange, red, and purple. Curves are labeled DESI DR1, DESI DR2, CMB, Pantheon+, Union3, and DESY5; illustrating the concept of dark energy weakening in a compelling visual format.

This figure, from the DESI collaboration’s second data release’s results paper, shows the different values of the matter density that are preferred by six different data sets: DESI’s first and second releases, the CMB, and the supernova samples of Pantheon+, Union, and DESY5. Note that BAO and supernova data sets are not really compatible with one another, and that the three different supernova data sets (Pantheon+, Union, and DESY) give wildly different results from one another.

Credit: DESI Collaboration/M. Abdul-Karim et al., DESI DR2 Results, 2025

WF: The fact that we’re now making progress on the distances is uncovering new issues in the supernovae. And so I think we have still a lot to learn. And we’re learning it. And it’s, in my view, fantastic progress. But we’re not — I just don’t think we’re there yet if we’re going to talk about 1%. But I do see this as incredible progress. At the couple-to-3% or better level, we are actually all agreeing. All the people who didn’t agree before—we do agree.

And the question is: how significant are the differences with Planck? Now, people have worked really, really hard. I mean, the theoretical community has put a huge amount of effort into trying to understand what could cause this Hubble tension. And, you know, in 2021, there were already over 1,000 papers on the arXiv trying to explain this.

ES: Well, yeah. Theorists—I’m a theorist—we get bored, right? You see something that doesn’t add up, and it’s like, “Ooh, crack in the standard model! Let’s invent something.” So, you give a thousand theorists a thousand days, and you’ll get a thousand papers, right?

early dark energy

Modern measurement tensions from the distance ladder (red) with early signal data from the CMB and BAO (blue) shown for contrast. It is plausible that the early signal method is correct and there’s a fundamental flaw with the distance ladder; it’s plausible that there’s a small-scale error biasing the early signal method and the distance ladder is correct, or that both groups are right and some form of new physics (shown at top) is the culprit. The idea that there was an early form of dark energy is interesting, but that would imply more dark energy at early times, and that it has (mostly) since decayed away.

Credit: A.G. Riess, Nat Rev Phys, 2020

WF: But they haven’t found anything that’s going to get you from 67 to 73, which is really interesting, I think. I mean, I think we have to take another look at this. Like, maybe it’s there. But if it’s there, we really need extraordinary evidence, which I will say I don’t yet see. And the other possibility is, well, maybe it isn’t there. And, you know, that’s what we need to ascertain still. We need better, smaller uncertainties and to rule out some of these potential systematics. Now, as for, okay, there’s another bigger sample—well, there’s 42 supernovae in the SH0ES sample, right? That’s 35 galaxies, you know, out to 20 megaparsecs. Again, 60% of the sample is farther away.

But okay, these other galaxies, some of which you’re talking about, have larger uncertainties. They’re more distant, right? You expect that. So, half of the weight of the sample is coming from just 12 supernovae. That’s half of the weight. Just take the straight published error bars for those galaxies. You don’t have, exactly, a large sample on which you’re basing this extraordinary claim of, you know, fundamental physics is missing. So, you can compute effectively an effective size, right? It’s equivalent to only 31 supernovae. You know, the weighting makes a difference—you don’t have equal weighting for each of those supernovae. So you don’t have that 1/√n statistical gain that, you know, this larger sample gives you.

So, if you look at our sample, we’ve got 24 supernovae. Now, the most distant galaxies in our sample—they’re not as distant. So they don’t have—even the galaxies in common—they have smaller uncertainties. They’re in the halo, they’re not crowded, they’re not having large reddening effects. So, you know, you have half of the weight coming now from 38% of the sample. So, you know, if you were going to compare the effective size of the samples, they’re not statistically so different. They’re comparable.

A horizontal bar chart compares various recent measurements of Hubble's constant (H0) in km/s/Mpc, highlighting the ongoing Hubble tension. Studies, including one by Wendy Freedman, are labeled alongside the Planck CMB value marked by a vertical band.

A compilation of distance ladder measurements of H0 in comparison to the Pantheon+SH0ES, where the third rung of the distance ladder is redone using various techniques. The legend shows the different techniques included in constructing this figure.

Credit: D. Scolnic et al., RNAAS submitted/arXiv:2412.08449, 2024

ES: Right, and I can agree with that. But I would go a little further — I want to push back on a couple of things you said. Number one, that Type Ia supernovae are the big uncertainty here — or that they’re one of the possible big uncertainties. There was a paper I read last year, I think it was by Dan Scolnic and collaborators, where they said, well, look, you can use the supernovae like Pantheon+ with SH0ES, but you could also say, “Oh, I’m not going to use optical supernovae—I’m going to use near-infrared type Ia supernovae.” Or, “I’m going to use Type II supernovae,” or, “I’m going to use surface brightness fluctuations,” or the Tully–Fisher relation, or the Fundamental Plane relation.

And if you do all of those, you know—as you said, yes, the error bars are not 1% on all of these. They’re more like between 3 and 5%, or even, in a few cases, 6 or 7%. But they all give consistent answers that, I would say, is about 74 km/s/Mpc, plus or minus two or three. It’s really hard to get down to about 67 using any of those methods.

WF: Okay, okay, I take that point—given an absolute calibration of your supernovae.

ES: Well, we can do this without supernovae, right? This is not just for supernovae. This could be for any third rung on the ladder.

WF: Okay, but pick your poison. Right?

ES: Sure. Surface brightness fluctuations, or Fundamental Plane, or Tully–Fisher. It’s not necessarily supernova-dependent.

WF: No, but the scatter in the Tully–Fisher relation is, you know, 0.4 or 0.5 magnitudes—that’s 20% in distance. You know, again, before they recalibrated the Cepheids and they weren’t agreeing with the tip of the red giant branch at that point. So it depends on what you’re taking. Surface brightness fluctuations—again, there is a steep color dependence. You have to worry about dust. Most of the calibrators right now are spirals, whereas the distant sample is ellipticals. So this color difference makes a difference.

This is astrophysics. You look at the microwave background and the sort of linear physics involved in that—it’s a beauty. It sounds somehow like it shouldn’t be that the early universe is simpler, but the physics is simpler.

The map (top) of the temperature fluctuations in the CMB from Planck, along with the temperature fluctuation power spectrum (middle) as measured. The bottom two panels show the simulated temperature fluctuations on various angular scales that will appear in the CMB in a Universe with the measured amount of radiation, and then either 70% dark energy, 25% dark matter, and 5% normal matter (left), or a Universe with 100% normal matter and no dark matter (right). The differences in the number of peaks, as well as the peak heights and locations, are easily seen.

Credit: ESA/Planck Collaboration (top/middle); E. Siegel/CMBfast (bottom)

ES: Oh, I love the linear regime. My advisor’s specialty was the nonlinear regime, and I still love the linear regime so much. It’s straightforward!

WF: Yeah. But you pay for it. And so all I’m saying is—we’re in a nonlinear regime. We’re in a regime where dust, and crowding, and other types of temperature–color effects, metallicity effects… good luck. I mean, it’s not as straightforward as lining things up on a plot. And, you know, people have different versions of that plot. And some of the values that are lower—you know, you line all these ones up that are high and it looks like it’s a done deal.

But I think—I look at the data and I, again, I would love to see new fundamental physics if we can show this. I think it’s going to be one of the most exciting discoveries of the last—whatever time period you want to pick. But if I’m going to say there’s new physics, I want to see data that’s going to convince me, that’s going to jump out at me. And five sigma is not going to be a question. It’s going to be: this is—we can’t avoid it. And I think there’s just so many things, still, [that remain unaddressed].

This graph shows how various selection choices for the sample used in your distance ladder impacts both the average values (data points) and the uncertainties (size of the error bar lines) for a variety of galaxies. Some have JWST data and some have only Hubble data; some have Cepheids and some have asymptotic giant branch stars; some were selected by the CCHP team and some were selected by the SH0ES team, etc. The smallest uncertainties come from studies that use the full suite of data. The Planck data remains with a best-fit estimate of ~67 km/s/Mpc.

Credit: A.G. Riess et al., Astrophysical Journal submitted, arXiv:2408.11770, 2024

WF: And I just want to make this point about the supernovae, which is, again, the calibration of the supernovae has changed. And all of this is happening in a way you’re not aware of it, right? I’m telling you things. You’re saying everything is high and it’s five sigma. But behind the curtain, what is happening is that things are shifting by a lot. And if we’re really at this better than 1% or 1% level, they shouldn’t be shifting by these large amounts. It’s… we’re not there yet.

So, this is the supernova sample. When I was at Carnegie, before I came to do research here [at Chicago], I was at the Carnegie Observatories, and we carried out a supernova project which was not a search—it was a follow-up. So, we weren’t looking for new supernovae, but we wanted to make sure that we could get enough data at enough wavelengths, catch the supernovae before maximum, and really observe them well, get time — spectra — that we followed the supernovae as a function of time, so we could get very accurate k-corrections. And we really worked at the calibration.

Presentation slide titled "Carnegie Supernova Project (CSP) Dealing With Systematics," featuring scientific plots, project details, and images of telescopes—highlighting CSP's role in addressing hubble tension research led by Wendy Freedman.

This image shows Wendy Freedman’s work on supernovae, including calibration and systematics, while at Carnegie, acquiring light-curve and spectral data from Las Campanas Observatory.

Credit: Screenshot from interview with W. Freedman, private communication

WF: So, this was done at Las Campanas. Three hundred nights a year at Las Campanas are clear. We have standards that the calibration of this dataset, it was intended to help us deal with systematics. We were concerned that over the ten-year course of the program, that the detectors could they could drift. So, we actually measured the absolute flux of the detectors annually. We are really confident of our zero point.

Now there is spectrophotometry — one of my former postdocs, Dr. Taylor Hoyt. If you haven’t looked at his papers, please do, because he’s gone into some of the details of this supernova calibration and comparing with Pantheon+ and the earlier supercal and our calibration. And there are issues. But their spectrophotometry confirms our zero point.

Pantheon+, when they take our data and add it in—they use 18 different surveys—add ours in, they shift our zero point. We don’t see that shifting the zero point is the right thing to do because we’re confident in this calibration. And it’s—you know, that’s an arbitrary shift. That increased the Hubble constant. I think a lot of differences are emerging between the Dark Energy Survey, the Union 3 survey, the Pantheon+, you know, all these different supernova surveys that are also being folded into the DESI results, and the BAO, having w [ES note: the dark energy equation of state] evolve with redshift.

We’re learning that how you fit these supernovae matters. Whether you SALT2 or 3, blah blah blah—there are real differences. And they’re big. And so we need to understand these differences. And part of this difference that we’re seeing—and I’m telling you, this is a well-calibrated sample—they shift our zero point. And that’s part of the reason for their higher Hubble constant.

This graph of astronomical magnitude (y-axis) versus Cepheid period (x-axis) for Cepheid variable stars in the Large (left) and Small (right) Magellanic Clouds. The stars are shown in two Weisenheit indices (top) and three Hubble filters (bottom) for each one. Once outliers (small x symbols) are excluded, the period-luminosity relations (solid lines) are derived.

Credit: L. Breuval et al., Astrophysical Journal submitted/arXiv:2404.08038, 2024

WF: So, for me, until this is settled—and I think it will be in the next few years—we’re going to have many more supernova surveys. We’ll have ZTF, we’ll have Rubin, we’ll have different groups that are looking at these supernovae and analyzing the same supernovae. We’ll get to the bottom of this.

But again, I just don’t—you know, this doesn’t seem like extraordinary evidence to me when people are moving, and people like you, who are looking at this pretty carefully, are unaware of it. Things are moving. So, until they resolve, and many groups come to consensus on this, no matter where it lands — and I’m completely open to where it will land — I just want to make sure that ultimately we have error bars that everybody can agree on, and that come to some sort of consistent, eventually consensus. But we’re not there yet. Things are moving.

ES: Well, I agree. I do see that different groups get some pretty different values using — I won’t say the same data — but depending on how the analysis proceeds. For example, your group used HST and JWST data of tip of the red giant branch stars combined with supernova data to get a Hubble constant. And you got one that was about 70. And, you know, like you say, there are going to be errors on that where you might say, sure, statistical errors, you might be able to say, okay, that’s pretty small. But systematically, now you have to deal with the recognition that you’re probably looking at a few percent at least.

I know in February of this year, there was a paper by Joseph Jensen and some other collaborators — John Blakeslee was there, Brent Tully was on it — where they used tip of the red giant branch with surface brightness fluctuations, calibrating it with both HST and JWST data. And they got a high Hubble constant — I’ll say high — of 73.8 km/s/Mpc, with 0.7 statistical and 2.3 systematic uncertainties on it.

That seems like, if you take both the statistical and the systematic errors and you take the very low end of that, they get just about the high end of what you get. If you take your sample and the higher statistical errors. Is the tip of the red giant branch something that’s well understood? Or is this more poorly understood than the Cepheids?

A scientific plot showing group CMB velocity vs. updated SBF distance and corresponding H₀ values, highlighting hubble tension as discussed by Wendy Freedman; data points are labeled

This graph shows the Hubble diagram inferred using distances derived from the updated surface-brightness-fluctuation zero point calibration from tip of the red giant branch measurements and improved optical color measurements as of February 2025.

Credit: J.B. Jensen et al., Astrophysical Journal accepted/arXiv:2502.15935, 2025

WF: No, I would argue — and, you know, you could talk to theorists like Lars Bildsten, who worked on tip of the red giant branch stars — the tip of the red giant branch is probably the best understood theoretically of any distance indicator. These are low-mass stars. They have degenerate cores: helium cores. They’re being powered — the luminosity is, they climb the giant branch — they’re being powered by a shell of hydrogen that’s burning.

And when they reach a well-determined temperature — 100 million degrees — which occurs at a core mass of just shy of half a solar mass, that temperature is enough to ignite helium in the core via the triple-alpha process. And that’s the position of the core helium flash. It’s simple physics: nuclear physics.

Cepheids, you know, are supergiants. Their atmospheres are pulsating. We don’t know the metallicity dependence. You know, if you look at the literature, the value for the slope of the metallicity dependence, it’s wavelength-dependent, it’s whoever-does-the-study dependent. It’s not well determined. And that’s only the atmosphere. We have no understanding of the interior.

And so in terms of understanding the physics — so we talk about surface brightness fluctuations — you’re looking at not just giant branch star fluctuations, but there are also asymptotic giant branch stars. And you cannot resolve that. You’re looking at individual fluctuations in individual pixels; you don’t know what you have. It depends on the star-formation history.

You know, the beauty of the tip of the red giant branch method is that you can go out into the halo of the galaxy. You can see in the color–magnitude diagram where the giants are, and where the asymptotic giant branch stars are. It’s well-understood astrophysics. They aren’t crowded. There’s negligible reddening in the halos. These are much better understood than the Cepheids, or the JAGB stars, or surface brightness fluctuations.

I think they’re fainter, and they haven’t been used as long in the extragalactic distance scale; we certainly used them in the Key Project. That was one of the local independent methods we used to test Cepheids, and we were able to do that then to the better-than-10% level. But it was when I got to the University of Chicago that we began to develop the technique to improve the TRGB distance scale, so we could actually go out to galaxies that had type Ia supernovae and do the Hubble constant. But I think they’re very well understood. It’s just that the community is not as familiar with them, because for years — since the time of Hubble — we’ve been using Cepheids. Cepheids are referred to as a gold standard. But they are a very challenging distance indicator. And I’ve worked on them, as you know, for going on 40 years now. So there are a lot of places where, you know — the reason for developing these other distance indicators is: let’s try and understand what the potential systematics could be.

You have to determine the period. You have to be concerned about the period range over which you’re getting them — which, by the way, has changed a lot in the analyses. And these things have consequences. The tip of the red giant branch is much simpler. And the metallicity of the stars tracks one-to-one with the colors. If a star is at the blue end of the color range of the tip of the red giant branch, it is metal-poor. And we have spectra in globular clusters in the Milky Way where people measure detailed abundances from high-resolution spectra. We understand the red giant branch stars. They are much better understood than the Cepheids are.


When stars run out of hydrogen in their cores, they evolve off of the main sequence, becoming subgiants, then red giants, then igniting helium in their cores (which is the tip-of-the-red-giant-branch phase), and then evolving onto the horizontal branch and eventually into supergiants (for high-mass stars) or into the asymptotic giant branch (for lower-mass stars) before dying. The mass of a star determines its ultimate fate, but the rate of fusion is set by other internal properties.

Credit: Starhuckster.com

ES: All right, so I want to be mindful of your time. I know you’ve spent close to an hour here with me already. I have two more questions, if that’s okay.

WF: Hey, shoot away.

ES: Thank you, thank you. I know for a long time, people — including you — have been saying we need more data. If we want to know what the true value of the Hubble constant is, particularly from the distance ladder, right? We needed Gaia. And then we got Gaia. Before, when we had Hubble, we needed JWST. Now we have JWST. I’m really curious: at what point do you think we’re going to be able to say, “This data is satisfactory, and you can believe that these results the entirety of the distance ladder community is sort of converging on are reliable?”

Is there either a specific dataset that you think is going to settle this? Or an experiment that can settle this? When do you think you’re going to be able to say the data has converged sufficiently that we can draw a conclusion about what the Hubble constant is from late-time measurements?

WF: So, I think—I have a couple of things I want to say about that. I think in the next few years we really will learn a lot. We will have additional JWST data on these more distant galaxies, some of which we were talking about that don’t yet have JWST data. Barry Madore and I also have time on JWST to use the JAGB method to step out to Coma, so we can avoid Type Ia supernovae altogether, which I think is going to be a very interesting measurement.

ES: Yeah, that’ll get you up to 55, 60 million light-years…

WF: No, no, it’s at 100 megaparsecs.

ES: Oh, Coma! Not Virgo! Coma!

WF: Coma. Yes!

ES: Oh, geez.


The Coma Cluster of galaxies, as seen with a composite of modern space- and ground-based telescopes. At 99 megaparsecs (about 320 million light-years), the individual stars within it cannot be resolved by Hubble. But here in the JWST era, certain classes of stars, like red giants and AGB stars in the galactic halos of these member galaxies, will indeed be able to be individually resolved.

Credit: NASA / JPL-Caltech / L. Jenkins (GSFC)
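For reference, here is a minimal sketch of the unit conversion behind that exchange: one megaparsec is about 3.26 million light-years, which is where the caption’s “about 320 million light-years” for Coma comes from, and why the “55, 60 million light-years” guess corresponds to Virgo instead. The Virgo distance of roughly 16.5 Mpc used below is an approximate, commonly quoted value, not one stated in the interview.

```python
# 1 parsec ~= 3.26 light-years, so 1 Mpc ~= 3.26 million light-years
MLY_PER_MPC = 3.26

def mpc_to_million_ly(d_mpc: float) -> float:
    """Convert a distance in megaparsecs to millions of light-years."""
    return d_mpc * MLY_PER_MPC

# Virgo cluster: roughly 16.5 Mpc (an approximate, commonly quoted distance)
print(f"Virgo: ~{mpc_to_million_ly(16.5):.0f} million light-years")  # ~54
# Coma cluster: roughly 100 Mpc, as stated above
print(f"Coma:  ~{mpc_to_million_ly(100):.0f} million light-years")   # ~326
```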

WF: Yes. So, that is going to be fun. And these stars are bright enough that we can go out, again, far enough out that crowding is not going to be an issue. And I think that’s going to be very interesting. And there are even 12 supernovae in the Coma cluster, so we can also calibrate those. So that will be one thing: not using supernovae. I think it’s important that we use different distance indicators — again, coming back to that point. And as the samples increase for these various methods, the statistical uncertainties will come down.

And I think we need more cases. One of the beauties of having tip of the red giant branch stars and JAGB stars is that we can measure them in the same galaxies. We’re not looking at something like, “Here’s an SBF galaxy over here and a supernova galaxy over there.” We’re not comparing yet; we don’t have the ability to do that for the more distant galaxies. We need more of that, because that’s how we’re going to get at the systematics again.

And then I would finally say that, from my perspective, we need methods that don’t depend on flux and an absolute calibration. Absolute calibration is difficult. And we haven’t even talked about, when you’re correcting for dust, what is the absolute calibration of the dust maps? We don’t expect these things to be large, but they can be systematics; they’re potentially going to affect the calibrators, and it could add up to a systematic effect in the distance scale.

So, I think things like gravitational wave sirens — you know, many of us in the community were really excited that this was going to happen more quickly. There’s, you know, the one neutron star–neutron star binary candidate object, GW170817, which, you know, serendipitously happened right at the beginning. It was so bright, it seemed like there were going to be many more of those. But there hasn’t been a single one since.


On August 17, 2017, the Laser Interferometer Gravitational-wave Observatory detected gravitational waves from a neutron star collision. Within 12 hours, observatories had identified the source of the event within the rather mundane galaxy NGC 4993, shown in this Hubble Space Telescope image, and located an associated stellar cataclysm called a kilonova (box), caused by the collision of two neutron stars. Note that a kilonova is only one possible origin of gamma-ray bursts, and cannot account for all of them. Inset: Hubble observed the kilonova fade (in optical light) over the course of six days.

Credit: Hubble Space Telescope, NASA and ESA

ES: No, the other neutron star–neutron star mergers that we’ve seen look like they’ve gone straight to black holes. So, there’s no additional information coming from them.

WF: Yeah, which is really disappointing. But that will happen. You know, that will happen again. It’s certainly not the only one in the universe. And I think that is important, because that’s completely different physics, completely different systematics, and not depending on an absolute calibration of flux, which I think will be important.

But I do see real major… we will have a much better understanding in the next couple of years as we better understand the supernova sample — and that’s going to be critical — and as we enlarge the JWST sample for which we do have these other measurements.

And then there’s also Gaia: it’s still a few years away, but eventually they will have their last data release, and there will be improvements to the parallaxes. Right now, there is still an issue that the parallaxes depend on the magnitude of the star, the color of the star, the position in the sky, etc. There are still a lot of systematics out there that could be affecting that calibration. So that will be a big improvement.

Then, as I said, Rubin is going to have a lot of supernovae. They’ll be calibrated in a consistent way — as long as we get enough spectroscopic time to follow those up. Even if we don’t get many nearby ones — I mean, this is one of the difficulties, right — I showed you the small sample, and we’re at the mercy, again, of nature, which doesn’t provide a lot of supernovae that are close enough for us to measure with Hubble or JWST.

But that sample will increase. There’s even one at 80 megaparsecs now — and compared with one at 20 megaparsecs, how can you get a 1% measurement of something at 80 megaparsecs? So, we need to increase that nearby sample.

So my view is we just need to be a bit more patient, or, you know, collectively impatient. We want to see an answer. But this is an empirical question. We will get to the bottom of it. And I think we are doing that. I think the accuracy — I mean, you know where I came from, right? It was a factor of two. It astounds me now that we’re talking about a couple of percent accuracy. Groups that historically have not agreed are coming to agreement. And then we find where there are other places where we disagree, well, now we’ll look in that direction. And I think we’re going to make, in the next few years, a lot of progress in understanding where these differences have arisen.

This graph shows a comparison between the values of H0, the expansion rate today, as derived from Hubble Space Telescope Cepheids and anchors, as well as from other subsamples of JWST Cepheids (or other types of stars) and anchors. A comparison to Planck, which uses the early relic method instead of the distance ladder method, is also shown.

Credit: A.G. Riess et al., Astrophysical Journal submitted, arXiv:2408.11770, 2024

ES: All right. My last question, then, is this: Planck gave us a Hubble constant of 67.4 ± 0.5 km/s/Mpc. Do you think there is a reasonable chance that late-time distance measurements of the Hubble constant — once we get all of the dream data that you talked about and have done all of the calibrations properly — will, in fact, come in below 70 km/s/Mpc, close to agreement with the Planck data? Or do you think that looks extraordinarily unlikely based on the data we have so far today?

WF: Again, this is an empirical question. I don’t know how it will come out in the end. I certainly think it’s plausible that it could come out at slightly below 70. That doesn’t strike me as implausible at all. I’m impressed by the fact that right now, you know, it’s not just Planck, but there is also the South Pole Telescope. There’s also ACT: the Atacama Cosmology Telescope. Those are very different calibrations, very different wavelength ranges — all-sky, not all-sky, deeper, whatever — they’re really agreeing well.

We need to do that locally, too. Right? We’re still seeing much larger spreads, which is another indication of the uncertainties in the local distance scale. So, you know, we want to get to the point where different measurements are in better agreement on the local distance scales, too. Given just the kinds of things that I’m describing — the types of changes that we’ve seen even very, very recently — getting below 70 does not seem out of the range of possibility. But it could go the other way. You know, we could move more, and it could be in better agreement with there being a Hubble tension. I think we just don’t know yet. Measure it. That’s where we are.
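As a rough guide to what “agreement” would mean quantitatively, here is a minimal sketch of the standard tension estimate: the separation between two measurements in units of their combined uncertainty, assuming Gaussian and independent errors. The Planck value is the one quoted above; the 73 ± 1 km/s/Mpc distance-ladder value is a rough stand-in for the SH0ES-type results discussed earlier, and the “just below 70” case is purely hypothetical.

```python
import math

def tension_sigma(h0_a, err_a, h0_b, err_b):
    """Separation between two H0 measurements in units of their combined
    uncertainty, assuming Gaussian, independent error bars."""
    return abs(h0_a - h0_b) / math.sqrt(err_a**2 + err_b**2)

planck = (67.4, 0.5)  # early-Universe value quoted above

# Rough stand-in for a distance-ladder value (not an exact published number)
print(f"{tension_sigma(*planck, 73.0, 1.0):.1f} sigma")  # ~5.0: a genuine tension

# A hypothetical late-time value landing just below 70 with percent-level errors
print(f"{tension_sigma(*planck, 69.8, 1.5):.1f} sigma")  # ~1.5: no significant tension
```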

Wendy Freedman is the John & Marion Sullivan University Professor of Astronomy and Astrophysics at the University of Chicago. In 2025, Time magazine listed her as one of the world’s 100 most influential people.

