This interview appears in print in PALLADIUM 08: Scientific Authority.
After earning his PhD in physics in 1991 at the age most earn their bachelor’s degree, Stephen Hsu went to work on the Superconducting Super Collider for Harvard. After teaching theoretical physics at Yale and the University of Oregon, he served as the Vice President for Research and Graduate Studies at
Michigan State University from 2012 until the summer of 2020.
Hsu became interested in genomics a little over 10 years ago when he learned that the cost of genome sequencing was following a Moore’s Law curve—it was getting cheaper at an exponential rate. He realized that if we stayed on that curve there would eventually be millions of genomes available as data to solve the ultimate problem in biology: given the DNA of an individual, can we predict his characteristics?
Bringing his background in theoretical physics to the necessary AI and machine learning models, in 2017 Hsu and his research group published a paper that used genomic predictors to predict an individual’s height to within about an inch. Hsu’s research group then used that same method to predict the likelihood of diseases like diabetes, asthma, hypothyroidism, and various sorts of cancers.
This, as well as his experience as a tech founder, taught him that technological inflection points are moments of great opportunity—but also moments of great debate. Hsu sat down with Palladium to talk about the role that politics, rigorous thinking, and uncertainty play in science.
Steve, lately we’ve been thinking about the interaction of politics and science. You’ve mentioned that a colleague of yours once sought funding for better climate models, but it had to be done without implying that current models had any problems. How does politics sometimes constrain the scientific process?
This individual is one of the most highly decorated, well-known climate simulators in the world. To give you his history, he did a PhD in general relativity in the UK and then decided he wanted to do something else, because he realized that even though general relativity was interesting, he didn’t feel like he was going to have a lot of impact on society. So he got involved in meteorology and climate modeling and became one of the most well known climate modelers in the world in terms of prizes and commendations. He’s been a co-author on all the IPCC reports going back multiple decades. So he’s a very well-known guy. But he was one of the authors of a paper in which he made the point that climate models are still far from perfect.
To do a really good job, you need a small basic cell size in the simulation, small enough to capture the features being modeled. The required size is actually quite a bit smaller than what is used now, because of all kinds of nonlinear phenomena: turbulence, convection, the transport of heat and moisture, and everything else that goes into the making of weather and climate.
And so he made this point that we’re nowhere near actually being able to properly simulate the physics of these very important features. It turns out that the transport of water vapor, which is related to the formation of clouds, is important. And it turns out low clouds reflect sunlight, and have the opposite sign effect on climate change compared to high clouds, which trap infrared radiation. So whether moisture in the atmosphere or additional carbon in the atmosphere causes more high cloud formation versus more low cloud formation is incredibly important, and it carries the whole day in these models.
In no way are these microphysics of cloud formation being modeled right now. And anybody who knows anything knows this. And the people who really understand physics and do climate modeling know this.
So he wrote a paper saying that governments are going to spend billions, maybe trillions of dollars on policy changes or geoengineering. If you’re trying to fix the climate change problem, can you at least spend a billion dollars on the supercomputers that we would need to really do a more definitive job forecasting climate change?
And so that paper he wrote was controversial because people in the community maybe knew he was right, but they didn’t want him talking about this. But as a scientist, I fully support what he’s trying to do. It’s intellectually honest. He’s asking for resources to be spent where they really will make a difference, not in some completely speculative area where we’re not quite sure what the consequences will be. This is clearly going to improve climate modeling and is clearly necessary to do accurate climate modeling. But the anecdote gives you a sense of how fraught science is when there are large-scale social consequences. There are polarized interest groups interacting with science.
Why was that controversial? Was it just because of the amount of money involved? Or, as you mentioned earlier, was it that it raised questions for the accuracy of the current consensus? How do these scientific questions take on political implications, even among people that are relatively aligned?
It was controversial because, in a way, he was airing some well-known dirty laundry that all the experts knew about. But many of them would say it’s better to hide the laundry for the greater good, because a bad guy—somebody who’s very against CO2 emissions reduction—could seize on this guy’s article and say, “Look, the leading guy in your field says that you can’t actually do the simulations he wants, and yet you’re trying to shove some very precise policy goal down my throat. This guy’s revealing those numbers have literally no basis.” That would be an extreme version of the counter-utilization of my colleague’s work.
Now, I had long conversations with him. He said, “Well, I’m pretty damn sure the models are kind of directionally right.” And, you know, I’m not an expert in climate science, but I am an expert in evaluating the uncertainty of mathematical models and simulations. So I asked him, “Well, Professor X, why are you confident that you’re directionally right?”
He couldn’t really give me a very good answer, at least by the standards of most rational physicists. He sort of just said, “Well, look, I’ve worked with this, I’ve worked in this field a long time and have a feel for it. I’m pretty sure we are getting it right, even though it maybe can’t be rigorously justified.” And so he wasn’t able to take that part of his experience and knowledge and transfer it to me so that I had an equally high confidence level.
Usually, what I hear from people who are a bit more rationalist, skeptical, and rigorous is that yeah—maybe these models are not right. Maybe, because of the high cloud versus low cloud thing, they’re even directionally wrong. But what are the odds of that? Maybe there’s a 20 percent chance they’re wrong and an 80 percent chance they’re right. And we need to insure against the tail risk that we’ll have uncontrolled runaway heating of the Earth’s environment.
And so to me, the highest level of discussion is more in terms of probability distributions—I believe this with a certain confidence level, I should insure against that tail risk. It’s worth spending a trillion dollars a year in the global economy to insure against that tail risk.
I think that’s a defensible argument that people can make. But that’s obviously way too sophisticated for governments that couldn’t even figure out how to do cost-benefit analysis in the face of COVID-19. You just can’t engage in that highest level of discussion in a contested political landscape.
This comes back to the idea that a scientific field can have dirty laundry. How does it even get into that state? You seem to be saying it’s because the potential for misinterpretation in some of these fields is such that the field has to stop having public nuance because of its high social impact. That’s not even politics coming into the science, it’s just that the impact of the science necessitates this change in how the science gets done internally.
It’s usually the case that when you’re doing frontier science, where really not everything is understood, there’s a tremendous amount of uncertainty. You have measurements, but the experimentalist might have made a mistake, or there might have been an error in the data processing of the NASA satellite, or whatever. Everything is uncertain: Joe’s mathematical theorem might prove some result, or he might have made a subtle mistake. You need a lot of eyes on it to decide. Is this right? Do we trust this information enough to incorporate it into our model of nature at high confidence? Or is it still one of these things we’re not quite sure about?
So among the real experts trying to grapple with uncertainty, there’s always dirty laundry in the sense that when scientists come out in public, they want to project absolute confidence so they can convince the senator to do what they want, or the billionaire to make the donation. They can’t express uncertainty because if they do, the billionaire has other things to do with his money. “Oh, well, if you’re unsure about climate, I guess I can just build homeless shelters! My economist friend tells me there’s a great ROI on homeless shelters, they’re awesome.”
So the incentives are not quite right. The incentives in the academy are to find truth, and that’s a messy business. It’s got to be messy, people have to be able to clash. You cannot point a finger at the guy clashing with you and say, “Oh, you think the systematic error in my model is twice as big as I said it was. So you must be a climate denier!”
In my lifetime, the way science is conducted has changed radically, because now it’s accepted—particularly by younger scientists—that we are allowed to make ad hominem attacks on people based on what could be their entirely sincere scientific belief. That was not acceptable 20 or 30 years ago. If you walked into a department, even if it had something to do with the environment or human genetics or something like that, people were allowed to have their contrary opinion as long as the arguments they made were rational and supported by data. There was not a sense that you’re allowed to impute bad moral character to somebody based on some analytical argument that they’re making. It was not socially acceptable to do that. Now people are in danger of losing their jobs.
So how do you think that happened?
I don’t want to make a comment about deep history because science itself is really not that old. I mean, how long could you really say that we had a well-functioning kind of scientific infrastructure?
But certainly the change has happened during my professional lifetime. When I started as an assistant professor, the whole atmosphere on campus was different than it is now. And like every complex phenomenon, it’s multifactorial. I could list a bunch of factors that I think contributed, and one of them is that scientists are under a lot of pressure to get money to fund their labs and pay their graduate students. If you sense that NSF or NIH have a view on something, it’s best not to fight city hall. It’s like fighting the Fed—you’re going to lose. So that enforces a certain kind of conformism.
When I started in science, most hard sciences and even the “softer” social sciences, maybe even biology and psychology, were predominantly male. And the whole cultural setting was quite different. Males are much more comfortable with confrontation. Like you and I could have a huge argument and then go out for a beer later. Right? And we could be great colleagues, even though we disagree on some really fundamental issues in our discipline.
Now, this is just my observation, but as the sex composition in these fields started changing, a lot of women found themselves uncomfortable with what could be considered a toxic environment, one where you and I could really go at it.
For example, suppose I was giving a seminar and you started confronting me and it got really heated. A lot of women would just be turned off by that. They might even say “I was interested in physics, but those guys are a bunch of jerks. I couldn’t pursue a physics career—I’m glad I’m in data science now.” So we’re really talking about a cultural change that happened with a change of gender makeup in the departments.
I think there’s a male idea that you can be part of the honorable opposition. You can hold a view that is totally against what you’re supposed to think, but you’re backing it up with real data and real rational arguments. When I entered science, as long as that was the case, opposition was okay.
But when your department has different cultural values—collegiality, gentleness, and nurturing behavior—certain kinds of arguments that used to happen even in front of the students are now just not supposed to happen. And when it comes to scientific issues—less so climate modeling, but more like nature versus nurture, for example—the argument is often advanced that you’re harming students by showing them scientific theories or evidence that is harmful to their self-image. And therefore you just shouldn’t do it. It’s beyond the pale. You could be fired for harming your students.
So safety is above everything, including the truth. It’s been a gradual change, and I have to admit I’m a little blindsided by it. I didn’t realize it had happened to the extent that it happened until a few years ago. And I think you’ll see a very strong gradient where younger academics are much more prone to conformism or being uncomfortable with intellectual confrontation, much more so than the older faculty.
This change in sex composition is downstream of other cultural changes. How do you think those changes have affected the relationship between science and the rest of society?
Now, those changes in culture are not all bad. Having more collegiality might make some subset of people comfortable enough to participate in a particular area of science. And therefore we enlarge the pool of human capital that we can draw on.
One can debate very strongly which of these stories you believe in, but there have been times when I thought, “Oh, these guys are really toxic, it would be kind of good if they calm down and we could be just a little more polite to each other when having a discussion.”
So I see the value of that as well. As far as how science relates to the outside world, here’s the problem: for some people, when science agrees with their cherished political belief, they say “Hey, you know what? This is the Vulcan Science Academy, man. These guys know what they’re doing. They debated it, they looked at all the evidence, that’s a peer-reviewed paper, my friend—it was reviewed by peers. They’re real scientists.” When they like the results, they’re going to say that.
When they don’t like it, they say, “Oh, come on, those guys know they have to come to that conclusion or they’re going to lose their NIH grant. These scientists are paid a lot of money now and they’re just feathering their own nests, man. They don’t care about the truth. And by the way, papers in this field don’t replicate. Apparently, if you do a study where you look back at the most prominent papers over the last 10 years, and you check whether subsequent papers that were better powered, used better technology, and had larger sample sizes actually replicated them, the replication rate was like 50 percent. So, you can throw half the papers that are published in top journals in the trash.”
We can have a very polarized discussion about what is the real role of science in informing day-to-day political debate.
One place where you might see this play out is in the actual university administration departments. You worked for a time as a vice president of a university—
[Laughs] Right. Eight years.
So was the collegial discipline very different there compared to among researchers?
Well, administrators are a different group. The top level administrators at universities are usually drawn from the faculty, or from faculty at other universities. After being a top level administrator at a Big 10 university, and meeting provosts and presidents at the other top universities, I have a pretty good feel for this particular collection of people.
You can imagine what it is that makes someone who’s already a tenured professor in biochemistry decide they want to take on this huge amount of responsibility and maybe even shut down their own research program. They are very, very careerist people. And that is a huge problem, because incentives are heavily misaligned.
The incentive for me as a senior administrator is not to make waves and keep everything kind of calm. Calm down the crazy professor who’s doing stuff, assuage the students that are protesting, make the donors happy, make the board of trustees happy. I found that the people who were in the role so they could advance their career, versus those trying to advance the interests of the institution, were very different. There were times when I felt like I had to do something very dangerous for me career-wise, but it was absolutely essential for the mission of the university. I had to do that repeatedly.
And I told the president who hired me, “I don’t know how long I’m going to last in this job, because I’m going to do the right thing. If I do the right thing and I’m bounced out, that’s fine. I don’t care.” But most people are not like that.
In economics, there’s something called the principal-agent problem. Let’s say you hire a CEO to manage your company. Unless his compensation is completely determined by some long-dated stock options or something, his interests are not aligned with the long-term growth of your company. He can have a few great quarters by shipping all your manufacturing off to China and get a huge bonus, even if, on a long timescale, it’s really bad for your bottom line.
So there’s a principal-agent problem here. Anytime you give centralized power to somebody, you have to be sure that their incentives—or their personal integrity—are aligned with what you want them to promote at the institution. And generally, it’s not well done in the universities right now.
It’s not like it used to be that, “Oh, if Joe or Jane is going to become university president, you can bet that their highest value is higher education and truth, that’s the American way.” It was probably never true. But they don’t claw back your compensation as a president of the university if it later turns out that you really screwed something up. You know, they don’t really even do that with CEOs.
So did you get bounced out?
A little bit, a little bit. I was the most senior administrator who reviewed all the tenure and promotion cases. We have 50,000 students here. It’s one of the biggest universities in the United States. Each year, there are about 150 faculty who are coming up for promotion from associate professor to full professor or assistant to associate with tenure. And there are sometimes situations where you know what the system wants you to do with a particular person, but there’s a question of your personal integrity—whether you want to actually uphold the standards of the institution in those circumstances.
It’s funny, because the president who hired me actually wanted me to do that. She wanted someone who was very rigorous to control this process. But I knew I was gradually making enemies. Sometimes there’s a popular person, and maybe there’s some diversity goal or gender equality goal. So you have this person maybe who hasn’t done that well with their research, or hasn’t been well-funded with external grants, or maybe their teaching evaluations aren’t that great, but some people really want them promoted. And if you impose the regular standard and they don’t get promoted, you’ve made a lot of enemies.
So if I just thought to myself, “I’m not going to be at Michigan State 10 years from now—let them handle the problems if all these people who are not so good get promoted. Let them deal with it,” that would be the smart thing if I were a careerist or self-interested person. Don’t make waves, just put your finger in the wind and say: “Which way is the wind blowing? I’ll just go with that.” But I didn’t do that. Because I thought, “What’s the point of doing this job if you’re not going to do it right?” Now imagine how many congressmen are doing this; imagine how many have really deeply held principles that they’re trying to advance. Maybe it’s 10 percent? I don’t know, but it’s nowhere near 100 percent.
It’s the same in higher ed. There’s something called the Collegiate Learning Assessment. It’s a standardized test that was developed over the last 20 years, and it’s supposed to evaluate the skills students learned during college. For less prestigious directional state universities this would be a very good tool, because the subset of graduates who did well on the CLA could get hired by General Motors or whoever with the same confidence as the kid from Harvard, the University of Michigan, or anywhere else. So there was interest in building something like the CLA.
In order not to do it in a vacuum, the people who were developing it went to all these big corporations and said “Well, what are the skills that you really want out of a college graduate?” And not surprisingly, they wanted things like being able to read an article in The Economist and write a good summary. Or to look at graphs and make some inferences. Nothing ivory tower—it was all very reasonable, practical stuff. And so they commissioned this huge study by RAND. Twenty universities participated, including MIT, Michigan, some historically black colleges, some directional state universities—a huge spectrum covering all of American higher education.
They found that students’ CLA scores at graduation were very highly correlated with their incoming SAT scores. If you know anything about psychometrics, it’s then no surprise that the delta between your freshman year and your senior year on the CLA is minimal. So what are kids buying when they go to college for four years? Are they getting skills that GM or McKinsey want, or are they just repackaging themselves?
I showed the results of this RAND CLA study to my colleagues, the senior administrators at Michigan State University, and I tried to get them to understand: “Guys, do you realize that maybe we’re not doing what we think we’re doing on this campus? You probably go out and tell alums and donors, moms and dads that we’re building skills for these kids at Michigan State, so they can be great employees of Ford Motor Company and Andersen Consulting when they get out. But the data doesn’t actually say that we do that.” I’m not talking about specialist majors like accounting or engineering, where we can see the kids are coming out with skills they didn’t enter with. I’m talking about generalist learning and “critical thinking” that schools say they teach, but the CLA says otherwise.
I have all my emails from when I was in that job, so I can tell you exactly how much intellectual curiosity and updating of priors there was among these vice presidents and higher at major Big 10 universities. Now, they could have come back and said, “Steve, I don’t believe this RAND study. My son Johnny learned a lot when he was at Illinois,” or something. They could have come back and contested the findings. Did any of them contest the findings with me? Zero.
Did any of them care about what was revealed about the business that we’re actually in, about what is actually going on on our campus? One or two well-meaning VPs emailed me saying “Wow, that’s incredible. I never would have thought…” One of the women who emailed me back had a college-aged kid, and this actually impacted some decisions that were going on in her family at the time.
But there was overall very little concern about the findings, there was very little pushback even denying the findings. Those are the people running your institutions of higher education. I discussed these findings with lots of other top administrators at other universities and very few people care. They’ve got their career, they’re just doing their thing.
You’ve worked in genomics research, where you’ve blogged about interesting questions like the possibility of intelligence enhancement with things like embryo selection. How does that interact with today’s modern ideological environment and political considerations?
Well, there are people who are really trying to either kill or at least studiously ignore all of this progress in genomics. One of the constraints I’ve talked about is the specific data that you need to, for example, build up a genomic predictor that could take the DNA of a person and predict some aspect of that person.
My research group solved height as a phenotype. Give us the DNA of an individual with no other information other than that this person lived in a decent environment—wasn’t starved as a child or anything like that—and we can predict that person’s height with a standard error of a few centimeters. Just from the DNA. That’s a tour de force.
Then you might say, “Well, gee, I heard that in twin studies, the correlation between twins in IQ is almost as high as their correlation in height. I read it in some book in my psychology class 20 years ago before the textbooks were rewritten. Why can’t you guys predict someone’s IQ score based on their DNA alone?”
Well, according to all the mathematical modeling and simulations we’ve done, we need somewhat more training data to build the machine learning algorithms to do that. But it’s not impossible. In fact, we predicted that if you have about a million genomes and the cognitive scores of those million people, you could build a predictor with a standard error of plus or minus 10 IQ points. So you can ask, “Well, since you guys showed you could do it for height, and since there are 30, or 40, or 50, different disease conditions that we now have decent genetic predictors for, why isn’t there one for IQ?”
Well, the answer is there’s zero funding. There’s no NIH, NSF, or any agency that would take on a proposal saying, “Give me X million dollars to genotype these people, and also measure their cognitive ability or get them to report their SAT scores to me.” Zero funding for that. And some people get very, very aggressive upon learning that you’re interested in that kind of thing, and will start calling you a racist, or they’ll start attacking you. And I’m not making this up, because it actually happened to me.
What could be a more interesting question? Wow, the human brain—that’s what differentiates us from the rest of the animal species on this planet. Well, to what extent is brain development controlled by DNA? Wouldn’t it be amazing if you could actually predict individual variation in intelligence from DNA just as we can with height now? Shouldn’t that be a high priority for scientific discovery? Isn’t this important for aging, because so many people undergo cognitive decline as they age? There are many, many reasons why this subject should be studied. But there’s effectively zero funding for it.
So this brings us to a larger question, which is the relationship between science, scientific authority, political authority, ideology, power, and funding. In your view, what makes for the healthiest relationship between science and politics?
There’s something scientists at universities tell each other that I think is actually true: why do we insist that some kid who’s going to go on to be an investment banker or an NBA talent scout take some science when they’re in college?
The answer is so they can understand a little bit about how science works, and how science and technology have impacted our civilization over time. Then, when they are writing an editorial for the newspaper, or voting, or whatever, they can have at least a little bit more realistic view of science. They’ll know that okay, maybe some scientists are motivated by politics and they’re overclaiming results.
Sometimes, maybe the science isn’t quite right. But on average, it does progress mostly in the right direction, and it does develop really powerful technological tools to help society. So I think the most mature place you can end up is where the people who wield real power in society know something about science. Scientists don’t really wield power. I think a healthy society would be one where the people who have the political power have a healthy respect for and understanding of science, including how messy it can be on frontier issues. If you can’t find anybody who disagrees on either side of an issue, maybe there’s something wrong with that field of science. Maybe the people who disagree are being forced to pay such a huge penalty that they stopped saying anything. And I think that should make you less confident of the claims coming out of that sector of science.
One of the things that I teach in the tech startups that I’m involved in is that you never want a point answer without an uncertainty range. For example, if I ask how many units we are going to sell next quarter, and the guy says, “My model says five million,” an additional estimate of uncertainty, together with the central point estimate, has enormous value. The answer is twice as valuable if he says, “Well, it could be five million, but the 95 percent confidence interval is anywhere between one and nine million.”
Then you realize—okay, so I’m not going to build that strongly into my model. I’m not going to tell our board members tomorrow that we’re definitely going to sell five million units. So that second piece of information, the uncertainty alongside the point estimate, is already a huge innovation over the way people normally communicate.
You guys are probably too young to remember Robert Rubin. He was Secretary of the Treasury under Clinton. Anyway, he wrote a famous memoir about his years in the White House and at Goldman, where he had been a mergers and acquisitions trader for a long time before going into government.
One of the things that shocked people who read the book was that he talked about probabilistic estimates. “Yeah, in our White House we were so much more sophisticated than those guys over on Capitol Hill, because we didn’t just say it was black and white. We would say, ‘Well, Mr. President, high conviction, it’s black. Or I think it’s white, Mr. President, but low conviction.’”
And you know, anybody who’s done mergers and acquisitions at Goldman knows yeah, you’ve got to talk that way. The word conviction is used all the time in finance: what’s your conviction? If you tell them “high conviction” and it’s wrong, they’re going to fire you.
So he brought that to the White House. And what was funny was that anybody who came from the science world would say, “Well of course if I wrote my physics lab report and I didn’t put the error estimate in there, I’d get an F.” But at the time, the book had an enormous impact. People thought he totally revolutionized strategy at the White House because he introduced the concept of conviction.
People normally associate technocracy with making the world more black and white. But it seems like in that instance, it’s actually permitting nuance.
Well, a lot of mergers and acquisitions that these Goldman guys do are event-driven. “Okay, I think this is going to happen to oil, and then we’re going to do this deal, right?” And so they always have to use this conviction variable because they can’t just throw out predictions for a year from now willy-nilly. They have to plant their flag in certain places. If they’re wrong in those high-conviction things, they’ve got to pay a price.
Do you think there’s something about politics in particular that makes certainty, authoritative statements, and commitments dominant over uncertainty and nuance? Is there some countervailing cultural or political force that makes probabilistic thinking hard to adopt?
I think if I’m working at Goldman, and you’re paying me a million bucks a year, and you’re not pinning me down and making me say what my conviction is, then if I’m wrong later I can say, “Yeah, Joe, that was low conviction. Don’t hold me to that.” If you don’t force me to give you my conviction levels, I prefer not to because you have less grounds to dismiss me when I turn out to be a loser.
So no one’s going to do it for free, they’ve got to be forced to do it. Do our voters have enough attention span that, a year or two later, they’ll remember “Hey, what did he say when he was campaigning?” Or “When the Secretary of Defense told us the Afghan government would last easily over a year, was it high conviction or low conviction when he told us that?” So of course, if I’m the SecDef, I’d rather get to say “Oh, I was getting out of the office and you know, they didn’t ask me whether it was high or low conviction on my way out.” So when it turns out that it’s only four weeks instead of a year, I won’t get in trouble because I will just say “Well, we didn’t really know.” People are going to resist meritocracy and keeping a careful score.