The last year has been characterized by a rush of new artificial intelligence (AI) programs released into the world since OpenAI, a lab backed by Microsoft, launched ChatGPT in November 2022. Both Microsoft and Google rolled out products in March that they say will use AI to transform work, and IBM CEO Arvind Krishna said the company’s AI tool will be able to eliminate 30% to 50% of repetitive office work.

Since taking the helm at Microsoft in 2014, at a time when its market dominance with traditional software offerings was waning, Satya Nadella has focused on ensuring the company remains relevant. The company has invested heavily in Azure, its cloud computing platform, and in AI, pouring at least $13 billion into the leading lab OpenAI. Microsoft’s share price has risen nearly tenfold since Nadella became CEO, outperforming the S&P 500, which has merely doubled over the same period.

Now, Nadella is using those investments to reenergize Microsoft’s traditional Office suite of products like Word, Outlook, and Excel, now called Microsoft 365. In March, Microsoft launched ‘Copilot,’ an AI tool that it says will free people from the drudgery of work by helping to draft emails and white papers, transcribe and summarize meetings, and teach people how to make sense of data in Excel. Copilot was initially released to a small group of enterprise customers, and Microsoft is now rolling it out more broadly.

Given some of the headlines about AI and its potential uses, from virtual companions to fitness coaches, improvements to office software like Microsoft Word might not sound that exciting, but it may be one of the use cases with the most impact on many of our lives. Nadella says that improving the way we work will help us thrive as individuals and as a society.

TIME spoke with Nadella about Microsoft’s principles around AI, how the technology could transform work, and what safeguards should look like.

This interview has been condensed and edited for clarity.

You’ve said that AI is going to unleash a new wave of productivity and remove the drudgery from our daily jobs. What specifically would you say is going to change in the workplace with the adoption of AI?

AI itself is very much present in our lives. But if anything, it’s moving from being autopilot to being a copilot that helps us in our work. You put the human in the center, and then create this tool around them so that it empowers them.

It’s not just about productivity, it’s actually taking the drudgery away from work. If you think about all of us at work, how much time do we spend expressing ourselves and creating? This is what gives us real joy. How much time do we spend just coordinating? And so, if we can tilt that balance toward more creativity, I think we will all be better off.

There are some concerns that this would displace jobs. What responsibility does Microsoft have to address those concerns about job displacement? And what is it doing on that front?

One of the things that I’m most excited about is how [AI] democratizes access to new skills. To give you a concrete example, developers who are using GitHub Copilot are 50-odd percent more productive, staying more in the flow. We have around 100 million professional developers; we think the world probably can get to a billion professional developers. That will be a massive increase in total developers, because the barriers to being a software developer are going to come down. This doesn’t mean the great software developers won’t remain great software developers, but the ability for more people to enter the field will increase.

Is that Microsoft’s responsibility, to make sure that people who are displaced can develop these new skills?

That absolutely is. In some sense, it’s even good for our business. Our mission is to empower every person and every organization on the planet to achieve more. So to me that is a great way to create better-paying jobs, more empowering jobs—jobs that give people more meaning.

What’s your biggest concern about the adoption of AI going forward?

The one thing that I find very, very good about the way the dialogue is happening: it’s not just about tech optimism. It’s about thinking about technology and its opportunities, but also the responsibilities of the tech industry and the broader unintended consequences, and how we mitigate them long before they are out there in society. That, I think, is the right way in 2023. To have both those dialogues simultaneously shows a level of, I’ll call it, maturity in both our industry and our civic society.

That’s why, even when we think about AI, perhaps the biggest investment we make in thinking responsibly about AI isn’t just principles in the abstract, but the engineering process, even the design choices we’ve made, which put humans in the center. It’s a design choice.

There’s dialogue and then there’s regulation. If you were a government, what would you be doing to ensure that there’s enough regulation to protect your citizens from AI?

Already, there is. If you think about what Microsoft did, prior to generative AI and all these Copilots, take what Microsoft did with neural voice. There are no laws yet, but we ourselves have put a lot of governance on how and by whom neural voice can be used. I do think there is a place for dialogue, and there is also a place for us to take responsibility as purveyors of this technology before regulation, and then expect that there will be regulation. But at the end of the day, I think we will all be judged by one thing and one thing alone: do the benefits far outweigh the societal consequences?

TIME has reported that Microsoft is lobbying against proposals in Europe to regulate general purpose AI. Why is Microsoft getting involved in this argument in particular?

I’m not particularly familiar with that particular comment on what we may or may not be doing in Europe. But the fundamental thing, I think, is that, at the end of the day, the European Union and its regulators will do what is best for Europe, and we will participate in Europe within the frameworks of the law and regulation. What any regulator or any government or any society should really do is get the right balance between the benefits of this technology and the unintended consequences they want to mitigate. We will be happy to dialogue on that and make sure the first happens and the second doesn’t.

What is the unintended consequence that you would say regulators should be most focused on mitigating?

I mean, here and now, for example, take bias, right? One of the key things is to ensure that when you’re using these technologies, biased outputs are not causing real-world harm in some unintended way. We have to think about the provenance of data. What are we doing to de-bias these models? This is where Microsoft’s done a lot of work, whether it’s in the pre-training phase or even after you deploy a model.

Would you agree to any limits on use of AI for military applications?

I think we’ve always said that we want to make sure that the best technology that Microsoft has is available to the very institutions that protect our freedom.

I’m sure you saw the open letter that called on leading AI labs to pause training of new AI systems for six months, and in TIME, there’s an op-ed calling on labs to shut down AI completely. What’s your response to those calls that maybe we should slow down and put on the brakes a little?

I think there are two sets of things that are important for us to have robust discussions about. The first one is here and now: what are the real-world consequences of any AI being deployed?

And then there is a second part, which I think is also worthwhile talking about: how do we make sure that any intelligence system we create is in control and aligned with human values?

We have to come back with practical ways for us to realize the benefits of these solutions and mitigate the unintended consequences. But ultimately it’s for the regulators and the governments involved to make these decisions.

What about this idea that perhaps the developers behind AI don’t even quite understand the results that AI is generating? Do you agree with that idea that you don’t even know what’s going to happen?

I fall in the camp where I think we shouldn’t abdicate our own responsibility too soon. It’s a stochastic, complex system, but there are many stochastic, complex systems we deal with. We characterize these systems using lots of evaluation tests and make sure that they’re safely deployed. So it’s not the first time we are dealing with complexity in the real world.

What’s another example of one such system?

Biology, the environment. There are many things that we observe; we try to get empirical results. We try to deal with them in such a way that we really get the benefits of what we’re doing. I feel like we are quick to talk about this as the last frontier and the only technology.

I think it’s an absolutely amazing set of technologies that are coming. They do show scaling effects. We should be grounded in what is happening; we should be able to characterize these systems and deploy them safely. All I want us to do as Microsoft is do the hard work, the hard work of making sure that the technology and its benefits far outweigh any unintended consequence.

Well, if you think about biology, for example, that’s something that exists in the world that we’re exploring and trying to understand. Whereas AI is something that we ourselves created, and maybe that’s why there’s so much fear around it. Given that AI is something we are making, should we perhaps be a little more cautious than with other systems like biology?

I think about what was the real genesis of the entire computer industry, as Vannevar Bush wrote in As We May Think. The computer industry was about creating tools for the human mind, so that we can do more, understand more of the natural world, whether it’s the climate or biology. So I feel that creating technologies that allow us as humans to increase our knowledge, do science, and help the human condition is at the core of enlightenment. And so, trying to say, well, “now is the time to stop,” doesn’t seem the right approach.

There does seem to be this urgency to make sure we’re using AI to the best of our abilities. What is driving that urgency? Is it shareholders, the research community, is it executives at Microsoft? Who do you think’s deciding that it’s really urgent to try to use AI to the best of our abilities right now?

The world’s economic growth has, in my book, kind of stalled. In fact, the last time economic growth could be attributed to information technology, the last time it showed up in productivity stats, was when PCs became ubiquitous in the workplace.

So if we really have a goal that everybody in the world should have economic growth, that it should be climate positive, and that there should be trust in society around it, we need to build new technology that achieves those goals. So that’s why I think AI is exciting.

That doesn’t mean the unintended consequences of AI are not going to be there, whether it’s labor displacement or safe deployment and bias, so we’ve got to take care of those. But let’s not lose sight of the fact that we need new technology to help recreate the economic growth we enjoyed in the early parts of the 20th century. What if we can have that type of economic growth? This time around, though, it’s much more even, not just on the West Coast of the United States but everywhere in the world: small businesses, large businesses, public sector and private sector. That’s a beautiful world that I aspire toward.

If we’re deciding that we’re embracing economic growth, your argument is we should also decide that we’re embracing AI?

When I think about economic growth, it’s about being able to really go back to the ideals of enlightenment, which are about human wellbeing and thriving. Economic growth is what has helped the greatest number of people in the world enjoy better living standards. And so, to me, that’s the goal, and in that context economic growth plays a role, and technology plays a role.
