ReportWire

  • The Real Difference Between Trump and Biden

    Americans likely face a choice this fall between two men they don’t want for president. Or they can stay home and get one of the two guys they don’t want for president anyway. The reasons for voter disdain are clear enough: Poll respondents say Joe Biden is too old, an impression reinforced by last week’s special-counsel report, and they have always been troubled by Donald Trump’s judgment and character (though a majority think he’s too old, too).

    Voters have genuine questions about both men. But we’ve seen each occupy the presidency. One thing the two administrations have made clear is that whereas Biden follows an approach to governance that seems to offset some of his weaknesses, Trump’s preferred managerial style seems to amplify his.

    Many people treat elections as a chance to vote a single individual into office; as a result, they tend to focus disproportionately on the personality, character, and temperament of the people running. But voters are also choosing a platform—a set of policies as well as a set of people, chosen by the president, who will shape and implement them. The president is the conductor of an orchestra, not a solo artist. As the past eight years have made very clear, the difference in governance between a Trump administration and a Biden administration is not subtle—for example, on foreign policy, border security, and economics—and voters have plenty of evidence on which to base their decision.

    But for the sake of argument, let’s consider the potential effects of Biden’s failures of memory and Trump’s … well, it’s a little tough to say what exactly is going on with Trump’s mental state. The former president has always had a penchant for saying strange things and acting impulsively, and it’s hard to know whether recent lapses are indications of new troubles or the same deficits that have long been present. His always-dark rhetoric has become more apocalyptic and vengeance-focused, and he frequently seems forgetful or confused about basic facts.

    To what extent would either of their struggles be material in a future presidential term? One key distinction is that Biden and Trump have fundamentally different conceptions of the presidency as an office. Biden’s approach to governance has been more or less in keeping with the traditions of recent decades. Biden’s Cabinet and West Wing are (for better or worse) stocked with longtime political and policy hands who have extensive experience in government. Cabinet secretaries largely run their departments through normal channels. Policy proposals are usually formulated by subject-area experts. The president’s job is to sit atop this apparatus and set broad direction.

    Biden doesn’t always defer to experts, and he has clashed with and overruled advisers on some topics, including, notably, the U.S. withdrawal from Afghanistan. Such occasional clashes are fairly typical—as long as they’re occasional. As my colleague Graeme Wood wrote this week, “The presidency is an endless series of judgment calls, not a four-year math test. In fact, large parts of the executive branch exist, in effect, to do the math problems on the president’s behalf, then present to him all those tough judgment calls with the calculations already factored in.”

    This doesn’t mean that Biden’s readily apparent aging doesn’t bring risks. The presidency requires a great deal of energy, and crises can happen at all hours and on top of one another, testing the stamina of any person. The oldest president before Biden, Ronald Reagan, struggled with acuity in his second term, during which his administration produced a huge, appalling scandal of which he claimed to be unaware.

    In contrast to the model of the president as the ultimate decision maker, Trump has approached the presidency less like a Fortune 500 CEO and more like the sole proprietor of a small business. (Though he boasts about his experience running a business empire, the Trump Organization also ran this way—it is a company with a large bottom line but with concentrated and insular management by corporate standards.) As president, Trump had a tendency to micromanage details—the launch system for a new aircraft carrier, the paint scheme on Air Force One—while evincing little interest in major policy questions, such as a long-promised replacement for Obamacare.

    At times, Trump has described his role in practically messianic terms: “I alone can fix it,” he infamously said at the 2016 Republican National Convention. He has claimed to be the world’s foremost expert on a wide variety of subjects, and he often disregarded the views of policy experts in his administration, complaining that they tried to talk him out of ideas (when they didn’t just obstruct him). He and his allies have embarked on a major campaign to ensure that staffers in a second Trump administration would be picked for their ideological and personal loyalty to him. Axios has reported that the speechwriter Stephen Miller could be the next attorney general, even though Miller is not an attorney.

    Perhaps as a result of these different approaches to the job, people who have served under the men have divergent views on them. Whereas Biden can seem bumbling and mild in public, aides’ accounts of his private demeanor depict an engaged, incisive, and sometimes hot-tempered president. That’s also the view that emerges from my colleague Franklin Foer’s book The Last Politician. “He has a kind of mantra: ‘You can never give me too much detail,’” National Security Adviser Jake Sullivan has said. “The most difficult part about a meeting with President Biden is preparing for it, because he is sharp, intensely probing, and detail-oriented and focused,” Homeland Security Secretary Alejandro Mayorkas said last weekend. (As Jon Stewart noted on Monday night, the public might be more convinced were these moments videotaped, like the gaffes.)

    Former Trump aides are not so complimentary. Former White House Chief of Staff John Kelly called Trump “a person that has nothing but contempt for our democratic institutions, our Constitution, and the rule of law,” adding, “God help us.” Former Attorney General Bill Barr said that he “shouldn’t be anywhere near the Oval Office.” Former Defense Secretary Mark Esper described him as “unfit for office.” Of 44 former Cabinet members queried by NBC, only four said they supported Trump’s return to office. Even allowing for the puffery of politics, the contrast is dramatic.

    None of this is to say that Biden’s memory lapses aren’t worth concern or that he is as vigorous as he was as a younger man. But someone voting for Biden is selecting, above all, a set of policy ideas and promises that he has laid out, with the expectation that the apparatus of the executive branch will implement them.

    Voting for Trump is opting for a charismatic individual who brings to office a set of attitudes rather than a platform. Judging the candidates purely on individual mental acuity concedes the terms of Trump’s own preferred conception of unified personal power, so it is striking that even that comparison makes the dangers posed by Trump’s mentality so stark.

    David A. Graham

  • Effective Altruism’s Philosopher King Just Wants to Be Practical

    Academic philosophers these days do not tend to be the subjects of overwhelming attention in the national media. The Oxford professor William MacAskill is a notable exception. In the month and a half since the publication of his provocative new book, What We Owe the Future, he has been profiled or excerpted or reviewed or interviewed in just about every major American publication.

    MacAskill is a leader of the effective-altruism, or EA, movement, whose adherents use evidence and reason to figure out how to do as much good in the world as possible. His book takes that fairly intuitive-sounding project in a somewhat less intuitive direction, arguing for an idea called “longtermism,” the view that members of future generations—we’re talking unimaginably distant descendants, not just your grandchildren or great-grandchildren—deserve the same moral consideration as people living in the present. The idea is predicated on brute arithmetic: Assuming humanity does not drive itself to premature extinction, future people will vastly outnumber present people, and so, the thinking goes, we ought to be spending a lot more time and energy looking out for their interests than we currently do. In practice, longtermists argue, this means prioritizing a set of existential threats that the average person doesn’t spend all that much time fretting about. At the top of the list: runaway artificial intelligence, bioengineered pandemics, nuclear holocaust.

    Whatever you think of longtermism or EA, they are fast gaining currency—both literally and figuratively. A movement once confined to university-seminar tables and niche online forums now has tens of billions of dollars behind it. This year, it fielded its first major political candidate in the U.S. Earlier this month, I spoke with MacAskill about the logic of longtermism and EA, and the future of the movement more broadly.

    Our conversation has been edited for length and clarity.


    Jacob Stern: Effective altruists have been focused on pandemics since long before COVID. Are there ways that EA efforts helped with the COVID pandemic? If not, why not?

    William MacAskill: EAs, like many people in public health, were particularly early in terms of warning about the pandemic. There were some things that were helpful early, even if they didn’t change the outcome completely. 1Day Sooner is an EA-funded organization that got set up to advocate for human-challenge trials. And if governments had been more flexible and responsive, that could have led to vaccines being rolled out months earlier, I think. It would have meant you could get evidence of efficacy and safety much faster.

    There is an organization called microCOVID that quantifies what your risk is of getting COVID from various sorts of activities you might do. You hang out with someone at a bar: What’s your chance of getting COVID? It would actually provide estimates of that, which was great and I think widely used. Our World in Data—which is kind of EA-adjacent—provided a leading source of data over the course of the pandemic. One thing I think I should say, though, is it makes me wish that we’d done way more on pandemics earlier. You know, these are all pretty minor in the grand scheme of things. I think EA did very well at identifying this as a threat, as a major issue we should care about, but I don’t think I can necessarily point to enormous advances.

    Stern: What are the lessons EA has taken from the pandemic?

    MacAskill: One lesson is that even extremely ambitious public-health plans won’t necessarily suffice for future pandemics, especially a deliberate one caused by an engineered virus. Omicron infected roughly a quarter of Americans within 100 days. And there’s just not really a feasible path whereby you design, develop, and produce a vaccine and vaccinate everybody within 100 days. So what should we do for future pandemics?

    Early detection becomes absolutely crucial. What you can do is monitor wastewater at many, many sites around the world and screen it for all potential pathogens. We’re particularly worried about engineered pathogens: If natural origins give us a COVID-19-scale pandemic only once every hundred years or so, advances in bioengineering increase that risk dramatically. You can take viruses and upgrade their destructive properties, making them more infectious or more lethal; this is known as gain-of-function research. If it’s happening all around the world, then you should expect lab leaks quite regularly. There’s also the even more worrying phenomenon of bioweapons. It’s a really scary thing.

    In terms of labs, we may want to slow down or even prohibit certain sorts of gain-of-function research. At a minimum, we could require labs to carry third-party liability insurance. If I buy a car, I have to buy such insurance: If I hit someone, their costs are covered, because accidents are an externality of driving. Likewise, if a lab leaks, it should have to pay the costs. There’s no way to actually insure against billions dead, but you could set a very high cap, which would disincentivize unnecessary and dangerous research without disincentivizing necessary research; if the work is truly important, labs should be willing to pay the cost.

    Another thing I’m excited about is low-wavelength UV lighting, a form of lighting that can sterilize a room while remaining safe for humans. It needs more research to confirm safety and efficacy, and certainly to get the cost down; we want it at something like a dollar a bulb. Then you could make it part of building codes. Potentially no one ever gets a cold again: You eradicate most respiratory infections as well as the next pandemic.

    Stern: Shifting out of pandemic gear, I was wondering whether there are major lobbying efforts under way to persuade billionaires to convert to EA, given that the potential payoff of persuading someone like Jeff Bezos to donate some significant part of his fortune is just massive.

    MacAskill: I do a bunch of this. I’ve spoken at the Giving Pledge annual retreat, and I do a bunch of other speaking. It’s been pretty successful overall, insofar as other people are coming in—not at the scale of Sam Bankman-Fried or Dustin Moskovitz and Cari Tuna, but there’s definitely further interest, and it is something I’ll keep trying to do. Another organization is Longview Philanthropy, which has done a lot of advising for new philanthropists to get them more involved and interested in EA ideas.

    I have not ever successfully spoken with Jeff Bezos, but I would certainly take the opportunity. It has seemed to me like his giving so far is relatively small scale. It’s not clear to me how EA-motivated it is. But it would certainly be worth having a conversation with him.

    Stern: Another thing I was wondering about is the issue of abortion. On the surface at least, longtermism seems like it would commit you to—or at least point you in the direction of—an anti-abortion stance. But I know that you don’t see things that way. So I would love to hear how you think through that.

    MacAskill: Yes, I’m pro-choice. I don’t think government should interfere in women’s reproductive rights. The key distinction is this: When pro-life advocates say they are concerned about the unborn, they are saying that at conception, or shortly afterward, the fetus becomes a person, so having an abortion is morally equivalent, or very similar, to killing a newborn infant. From my perspective, having an early-term abortion is much closer to choosing not to conceive. And I certainly don’t think the government should be going around forcing people to conceive, so it certainly shouldn’t be forcing people not to have abortions. There is a second thought: Well, don’t you say it’s good to have more people, at least if they have sufficiently good lives? And there I say yes, but the right way of achieving morally valuable goals is not, again, by restricting people’s rights.

    Stern: I think there are at least three separate questions here. The first being this one that you just addressed: Is it right for a government to restrict abortion? The second being, on an individual level, if you’re a person thinking of having an abortion, is that choice ethical? And the third being, are you operating from the premise that unborn fetuses are a constituency in the same way that future people are a constituency?

    MacAskill: Yes and no on the last thing. In What We Owe the Future, I do argue for this view that I still find kind of intuitive: It can be good to have a new person in existence if their life is sufficiently good. Instrumentally, I think it’s important for the world to not have this dip in population that standard projections suggest. But then there’s nothing special about the unborn fetus.

    On the individual level, having kids and bringing them up well can be a good way to live, a good way of making the world better. But there are many ways of making the world better: You can also donate; you can also change your career. Obviously, I don’t want to belittle having an abortion, because it’s often a heart-wrenching decision, but from a moral perspective I think it’s much closer to failing to conceive that month than to the pro-life view, which holds that it’s more like killing a child that has already been born.

    Stern: What you’re saying on some level makes total sense but is also something that I think your average pro-choice American would totally reject.

    MacAskill: It’s tough, because I think it’s mainly a matter of rhetoric and association. Because the average pro-choice American is also probably concerned about climate change. That involves concern for how our actions will impact generations of as-yet-unborn people. And so the key difference is the pro-life person wants to extend the franchise just a little bit to the 10 million unborn fetuses that are around at the moment. I want to extend the franchise to all future people! It’s a very different move.

    Stern: How do you think about trying to balance the moral rigor or correctness of your philosophy with the goal of actually getting the most people to subscribe and produce the most good in the world? Once you start down the logical path of effective altruism, it’s hard to figure out where to stop, how to justify not going full Peter Singer and giving almost all your money away. So how do you get people to a place where they feel comfortable going halfway or a quarter of the way?

    MacAskill: I think it’s tough, because I don’t think there’s a privileged stopping point, philosophically, at least not until you’re doing almost everything you can. So with Giving What We Can, for example, we chose 10 percent as a target for the portion of their income people could give away. In a sense it’s a totally arbitrary number (why not 9 percent or 11 percent?), but it has the benefit of being round. It also seems like the right level: If you get people to give 1 percent, they’re probably giving that amount anyway, whereas 10 percent is achievable yet really makes a difference compared with what they otherwise would have been doing.

    That, I think, is going to be true more generally. We try to have a culture that is accepting and supportive of these intermediate levels of sacrifice or commitment. It is something that people within EA struggle with, myself included. It’s kind of funny: People will often beat themselves up for not doing enough good, even though no one else is beating them up for it. EA really accepts that this stuff is hard, and that we’re all human and not superhuman moral saints.

    Stern: Which I guess is what worries or scares people about it. The idea that once I start thinking this way, how do I not end up beating myself up for not doing more? So I think where a lot of people end up, in light of that, is deciding that what’s easiest is just not thinking about any of it so they don’t feel bad.

    MacAskill: Yeah. And that’s a real shame. I don’t know. It bugs me a bit. It’s a general issue of how people react when confronted with a moral idea. It’s like, Hey, you should become vegetarian. People are like, Oh, I should care about animals? What if you had to kill an animal in order to live? Would you do that? What about eating sugar that is whitened with bone char? You’re a hypocrite! Somehow people feel that unless you’re doing the most extreme version of your views, none of it is justified. Look, it’s better to be a vegetarian than not to be. Let’s accept that things are on a spectrum.

    On the podcast I was just on, I was just like, “Look, these are all philosophical issues. This is irrelevant to the practical questions.” It’s funny that I am finding myself saying that more and more.

    Stern: On what grounds, EA-wise, did you justify spending an hour on the phone with me?

    MacAskill: I think the media is important! Getting the ideas out there is important. If more people hear about the ideas, some people are inspired, and they get off their seat and start doing stuff, that’s a huge impact. If I spend one hour talking to you, you write an article, and that leads to one person switching their career, well, that’s one hour turned into 80,000 hours—seems like a pretty good trade.

    Jacob Stern

    Source link