ReportWire

  • Forget the Turing Test. AI needs to pass the Summer Camp Test before it can take over the world

    As I type this, just one browser tab over is a menacing spreadsheet. Impossibly long, it’s crammed with numbers and notes. I’m dreading returning to it–and wondering if I have the resolve to untangle the logic and probability problems within.

    I’m a senior advisor for artificial intelligence (AI) at Mozilla and VP of AI and machine learning at Workday. But this spreadsheet has nothing to do with my day jobs, or even computer science. I’m doing something a bit more difficult: Signing my three kids under 10 up for summer camp.

    It’s an incredibly complicated, convoluted, and time-consuming process. Parents often need to begin six months in advance–when we’re just getting our first snow storms here in Boston. And even then, it’s challenging: Earlier today I was placed in a 47-minute digital queue just to access a registration website. So why don’t I simply outsource this to an AI assistant?

    I can’t. And that should tell you something about the hype you hear about AI–especially the consumer-facing variety.

    About a year ago, when ChatGPT launched, AI came close to passing the Turing Test, the famous thought experiment devised by English mathematician Alan Turing in 1950. If AI could converse in a manner indistinguishable from a human, Turing said, it would truly be “intelligent.”

    Not long after this milestone came the hype. Tech leaders sounded off not only on AI’s unlimited potential but also its existential danger. Now that we have intelligent machines among us, they argued, we are just a few lines of code from utopia–or dystopia.

    In reality, that’s not the case.

    Tools like ChatGPT and the large language models (LLMs) that power them are an impressive feat of computer science. They can be incredibly useful, too. But all-powerful? Just ask any harried parent trying to get a head start on summer camp registration. 

    As many parents know, figuring out a schedule for the eight weeks that school is out is an odyssey. You need to find the right programs, at the right times, in the right places, at the right price. And those are just the basic logistics. Then come the deeper questions: Where are the kids’ friends going? Is the camp’s vibe right? Is admission competitive? Can we carpool? How much sunblock is required?

    Just last week, Boston Globe correspondent Kara Baskin detailed this challenge perfectly in her column titled “Parents, prepare for battle: A memo from your favorite cutthroat Boston summer camps.”

    Right now, this odyssey can’t be outsourced to the AI assistants on the market. It still takes a human being to navigate the quantitative and qualitative complexities of summertime extracurriculars. Even Sissie Hsiao, Google VP and General Manager for Google Assistant and Bard, has lamented AI’s inability to solve the complications of summer camp registration.

    That’s lesson number one: AI isn’t about to take over the world; it can’t even solve summer camp. So take AI futurist doomsday hysteria with a grain of salt. Let’s worry when AI passes the Summer Camp Test, not the Turing Test.

    Often, AI hype claims the tech will level the playing field, eliminating disparities that have long plagued society. Yet AI assistants are being tailored for the people who need them least: professionals ensconced in the corporate realm.

    Growing up, my mom–who had limited English, limited tech literacy, and a job that paid less than minimum wage–could have really benefited from an AI assistant when navigating things like summer camp registration. She didn’t have 47 minutes to wait in a digital queue. But tools like ChatGPT still aren’t advanced enough to untangle the actual, hard problems for people with less means and access.

    The Summer Camp Test hints at what we need more of in AI: Systems built to solve real problems, from the mundane (like summer camp logistics) to the game-changing (like novel pharmaceutical research). What we don’t need? More hype about omnipotent AI.

    Kathy Pham is a computer scientist, senior advisor at Mozilla, VP of AI and Machine Learning at Workday, and a visiting lecturer at Harvard Business School. Opinions here are not representative of any employers, and only of her most critical role as a parent.

  • Five Big Tech companies with combined market value of over $10 trillion to report earnings this week

    Investors wondering where the S&P 500 is headed, at least for the next month or so, will want to pay attention to three key days this week.

    Between Tuesday and Thursday, five Big Tech companies with a combined market value of more than $10 trillion will report earnings: Microsoft Corp., Alphabet Inc., Meta Platforms Inc., Amazon.com Inc. and Apple Inc. Meanwhile, the Federal Reserve will issue its decision on interest rates, followed by Chair Jerome Powell’s press conference where he’s expected to discuss the outlook ahead.

The stakes couldn’t be much higher, with the S&P 500 Index pushing deeper into record territory on bets that central bankers are poised to begin easing monetary policy and tech behemoths like Microsoft getting more valuable by the day.

    “Tech disproportionately moved the market last year and big tech continues to have the biggest earnings power, so the results will be crucial for the markets,” said Chris Zaccarelli, chief investment officer at Independent Advisor Alliance.

    After a shaky start to the year, the S&P 500 is rising again and on pace for a third monthly advance that’s added more than 18% since late October, when the index hit a near-term low before Fed officials started signaling that rate hikes were over. 

    The rally is again being led by megacaps including Microsoft, Alphabet, Amazon.com, Nvidia and Meta Platforms, which were responsible for a majority of the index’s 24% gain last year as investors became captivated by the possibilities of artificial intelligence services. The so-called Magnificent Seven, which also includes Tesla Inc., just hit a record 29% of the S&P 500 despite a slump in shares of the electric-vehicle maker that’s erased more than $200 billion in market value just this month.

    AI Booming

Microsoft and Alphabet will kick off earnings on Tuesday after markets close. The two companies are among the best positioned to benefit from the AI boom after investing heavily in the field for years. Microsoft has been adding AI features to its suite of software products, and investors are betting that AI will soon start boosting profit and sales growth.

On Wednesday, the focus shifts to the end of the Fed’s January meeting, where it’s expected to hold interest rates steady for a fourth consecutive meeting. Traders will be primarily focused on what Powell and other policymakers have to say about the timing of easing. Recent data showing inflation continuing to recede and resilient US economic growth suggest central bankers won’t be in a hurry to cut interest rates.

    Apple is the biggest draw on Thursday, when Amazon and Facebook-owner Meta Platforms also report in the afternoon. The iPhone maker has been dogged by concerns about revenue growth and is expected to report its first sales expansion in four quarters.

    Read more: Apple veteran instrumental to iPhone development leaves for electric-vehicle maker Rivian: ‘Now is the time for me to move on’

With most of the megacaps in record territory, there are concerns that investors are overexposed to just a handful of stocks, which could open the door for some pain if quarterly results underwhelm.

    The Magnificent Seven stocks were again named the most crowded trade in a Bank of America survey of fund managers, according to a research note published by the bank last week.

    No Protection

    Still, traders aren’t rushing to scoop up hedges against declines, according to options market data.

    A gauge of projected price swings in Apple in the next three months is hovering near the lowest level in six years. Traders expect a 3.3% move in the stock in either direction a day after the results, which would be among the narrowest post-earnings swings in two years.

Projected three-month volatility in Meta Platforms, whose shares have more than quadrupled since their November 2022 nadir, is at the lowest in two years. The cost of protection against a 10% decline in Microsoft in the next month is hovering near the lowest level since August relative to the cost of options that profit from a similar rally.

    Tesla demonstrated the risks last week after missing fourth-quarter earnings estimates and warning that its sales growth would be “notably lower” in 2024. The stock tumbled 12% the following day, its biggest drop in a year.

    Microsoft recently overtook Apple as the world’s most valuable company with a market value above $3 trillion. The rally has made the stock even more expensive, at 33 times profits projected over the next 12 months compared with an average of 24 times over the past decade.

    To Jason Benowitz, senior portfolio manager at CI Roosevelt, there’s no doubt the megacap trade is crowded. But that doesn’t mean the stocks can’t continue to rally with economic growth slowing and easing financial conditions.

    “There’s a good reason for the crowded trade,” he said. “The environment is good for them.”

    Jeran Wittenstein, Elena Popina, Bloomberg

  • India speaks over 100 languages. Microsoft wants AI to bridge its linguistic gaps

    Depending on how you count, India has at least 120 languages, and another 1,300 “mother tongues,” an Indian term that refers to local dialects. The country’s government recognizes 22 languages but primarily operates in just two: Hindi, mostly spoken in India’s north, and English. That excludes tens of thousands of Indians who speak neither.

    Enter Microsoft’s AI for Good initiative—the tech giant’s umbrella program that tries to use AI to solve problems in health, environmental protection, and human development. The U.S. company has used India to pilot several novel uses of the new technology, such as an app that uses AI to tell farmers the best time to sow seeds or a model that uses satellite images to forecast how a natural disaster might hurt a vulnerable population. 

    But Microsoft and its AI researchers are particularly interested in navigating India’s linguistic challenges, hoping it might unlock breakthroughs elsewhere. “India’s complexity makes it a test bed for multilingual settings everywhere,” says Ahmed Mazhari, Asia president for Microsoft. “If you can solve and build for India, then you can solve and build for the world.”

    Small languages and large language models

The Jugalbandi chatbot, which Microsoft debuted in May 2023, is one of AI for Good’s flagship projects. The chatbot is aimed at rural farmers—specifically those who live in areas that don’t speak India’s more popular languages—who want to learn about or access public services, such as applying for a scholarship.

    Jugalbandi uses a large language model, developed with local research lab AI4Bharat, to parse a query, uncover the relevant information, then generate an easy-to-understand answer in the user’s local tongue. (Currently, Jugalbandi can translate 10 of India’s 22 official languages.)

(Fortune earlier featured Microsoft’s work with AI and Jugalbandi on its 2023 “Change the World” list.)

Another Microsoft initiative called VeLLM, or “Universal Empowerment with Large Language Models,” aims to improve how GPT, the OpenAI-developed model that underpins ChatGPT, works when using less-popular languages. Most of today’s large language models work best in a handful of major global languages—primarily English and Chinese—because so much of the available data is in those two languages. It’s harder to train AI on so-called low-resource languages, where data is scarce or nonexistent.

VeLLM is the foundation for other experiments with AI, like Shiksha, a generative AI bot that helps teachers create new curricula in non-English languages quickly, freeing up more time for teaching.

    ‘Participatory’ design

    Microsoft engineers like Kalika Bali, principal researcher for Microsoft Research India, are wary of cutesy technology solutions that don’t reflect how rural Indians live their lives.

    Technologists have long tried to use the South Asian country as a testing ground to prove that digital technologies—cheap laptops, affordable internet, and smartphone apps—can improve quality of life in rural India.

    Yet not every initiative was a success, Bali notes dryly. She remembers one project in which designers from a development organization tried to create a game to help women farmers in India access important information.

    “The women gave that person such a disdainful look,” she said. “They said ‘Do you think we have time for playing games?’”

    Instead, Bali says she and her team pursue a “participatory” design process. “We spend a lot of time with the communities that we are working for, trying to have them say what they want out of a technology, or how they want to solve a problem,” she says. 

    Not just social good

    Microsoft, of course, isn’t just interested in AI for its potential for social good. The U.S. tech giant is developing its own AI products, hosted on its Azure cloud computing system. It’s also a key backer of ChatGPT developer OpenAI. The hype around AI has helped lift Microsoft’s stock by 65% over the past year, pushing its market value to $3 trillion, making it the U.S.’s most valuable company. 

Mazhari sees a lot of opportunity for Microsoft in Asia, where there is “an incredible pace of change and transformation across industries and geographies.” He points to several examples where Asian companies have turned to Microsoft’s generative AI services: Lazada, the Southeast Asian e-commerce platform owned by Alibaba, used Microsoft tools to create the first e-commerce chatbot in Southeast Asia.

    Still, even if Microsoft’s experiments in India don’t do much for the company’s bottom line directly, they provide important lessons for the company going forward.

“Our partnerships under AI for Good and other pilot initiatives enable us to pick up early signals for advancing AI security and safety,” Mazhari says. Those lessons are then used to develop “policies for much-needed guardrails” on the new technology.

    Bali knows that you can’t separate her work from Microsoft’s overall business interest in AI. 

    “These are early forays in terms of how to make people who do not have access to technology get on the technology wagon,” she says. “Then they will become, hopefully, future technology users who would, amongst other things, also use Microsoft products.”

    Nicholas Gordon

  • Meet the AI-powered robots that could change the multibillion-dollar window-cleaning industry forever

    In May, I set foot on the highest residential terrace in the world, the outdoor space of the penthouse at the Central Park Tower in New York City. A part of the so-called Billionaires’ Row, Central Park Tower is among the tallest buildings in the world, and its exterior is primarily glass. As I sipped a glass of Dom Pérignon standing 1,416 feet above the ground, alfresco, I looked back at the building and wondered: How do you clean these windows?

    Someone has to wash the windows and polish the facade. For the super tall skyscrapers of the world, that someone is sometimes a robot.

Two companies, Verobotics and Skyline Robotics, are looking to make the estimated $40 billion window-washing industry safer and more efficient by using AI-driven robots to mimic the work of human window washers while also scanning the surface for maintenance information on other facade issues. By pairing AI with hardware, both companies are betting that the vertical living of our urban environments is going to need a lot of TLC.

    “The exterior of a building acts as its protective skin, a vital component that affects not only the well-being of its occupants but also the building’s energy efficiency,” said Ido Genosar, cofounder and chief executive officer of Verobotics. “Surprisingly, building exterior upkeep has seen little innovation in the past century, despite becoming more expensive. Our goal is to proactively address issues in how we maintain and safeguard the ‘skin’ of our cities’ rising skyscrapers.”

    Genosar’s robot, dubbed Ibex after the wild goat known for its impressive climbing skills, looks similar to iRobot’s automated home vacuum Roomba but functions entirely differently. To start, Ibex suctions itself to the sides of buildings using two legs, five cameras, and 15 sensors so it can autonomously navigate a facade. A special payload attached to the robot is deployed for cleaning, using custom-built brushes. It’s all controlled by one operator, who can monitor up to four robots on a single building simultaneously. 

    While the robot cleans, it uses the raw data from the multiple sensors to build a real-time, 3D map of the building, including the state of the infrastructure. Genosar explained that when used regularly over the course of a year, the model updates with exterior changes and can highlight potential problems while they are small and manageable versus causing a bigger problem down the road.

    “Using AI to gather methodological and consistent data on the building-facade health enables building owners to dramatically improve their proactive maintenance work and plan ahead for needed repairs based on severity,” Genosar said. “That reduces costs associated with maintaining a building’s facade.”

    Genosar explained that he and his cofounder, Itay Levitan, who serves as chief technology officer, developed the robotic system after Genosar saw the limitations of current building maintenance options through his family’s construction business. Old solutions were bulky, heavy, and expensive, and beyond that, weren’t prepared for the changing climate and the resulting shift in weather patterns for many urban areas. Nor were they prepared for the heights of the world’s newest builds: 60% of the tallest towers were completed in the past 13 years.

    The increased frequency of extreme weather events, such as typhoons and heavy rainstorms, has an effect on buildings, Genosar said. It’s more dangerous for window washers to clean amid these conditions—in terms of wind, heat, and sun—and it’s more likely to affect the overall structures. Much of his work, he said, is focused on Hong Kong, Singapore, Seoul, and Tokyo, well-known for their clusters of skyscrapers.

Ibex, named for the gravity-defying goat, is an AI-powered robot from Verobotics that can scale the sides of a building like Spider-Man. (Photo courtesy of Verobotics)

    In New York and London, Skyline Robotics is working to rethink facade maintenance for the U.S. and Europe. The company just launched in London in October, but you can already see its robots on Manhattan towers such as 10 Hudson Yards and 7 World Trade Center.

Skyline’s robot, called Ozmo, has mechanical arms structured to mimic human movements and can clean windows three times as fast as a human washer. The Ozmo also reduces on-site labor costs by 75%, according to Ross Blum, Skyline’s president and chief operating officer.

More important, said Blum, Skyline robots are helping to solve the labor shortage problem in the window-washing industry. It’s a risky job to hang hundreds of feet in the air and wipe down glass, so it’s not surprising that only 9% of window washers are between 20 and 30 years old. The vast majority are over 40 years old and likely to retire sooner rather than later.

“They have to hang 1,000-plus feet in the air, doing manual labor, and it can be 40 degrees Fahrenheit or 110 degrees outside, but no matter what, they have to get the job done,” Blum said. “The reality is that the work isn’t incredible, and the next generation isn’t showing up.”

    Blum explained that robotics offer younger workers a different incentive: to learn about new technologies and gain a transferable skill set. With Skyline, Blum sees window cleaners and robots working in tandem. “Our technology will never be used on 100% of buildings that exist; there is always room for a human workforce.”

    Ozmo features a six-axis robotic arm with the same joints as a human arm. It’s placed on a table that’s secured to scaffolds suspended by cranes or davit systems from the roof. From far away, you may even think there are two window washers overhead. Instead, the human controllers are on firm footing in the building, where they can monitor from above. 

    Ozmo, however, makes most of its own decisions. It calculates 250 times per second to decide on such variables as how much pressure for cleaning, where to apply the brush, and when to move the scaffolding to the next level—all this to determine the most efficient cleaning path. It also takes stock of the building’s facade conditions and reports back data on potential issues that need to be addressed. 

    Blum explained that the cost savings and added data collected by robots is critical to the health of the world’s tallest buildings. When a building’s owner can implement cleaning, polishing, and inspections more often, they can best map out construction projects and repairs. Today, most facade checks occur on a 10- to 15-year cycle, but Blum believes most real estate owners would like a shorter time frame. 

    “Roughly 10% to 20% of a building’s budget is allocated to the facade of the structure; for a $1 billion building, that’s $200 million,” Blum said. “That’s the look of your asset. You want that checked with greater regularity.”

    New York City, for instance, has more than 7,000 high-rise buildings, with more than 100 soaring higher than 650 feet and 16 buildings taller than 1,000 feet. From my window, I can crane my neck to see the top of a new residential tower that reaches almost 850 feet. It certainly will need cleaning from time to time.

    “If someone is paying for space on the 80th floor of a building, they’re paying for the views from that vantage point,” Blum said. “They deserve clean windows.”

    Stephanie Cain

  • Sam Altman's OpenAI to be second-most valuable U.S. startup behind Elon Musk's SpaceX based on early-talks funding round

    OpenAI is in early discussions to raise a fresh round of funding at a valuation at or above $100 billion, people with knowledge of the matter said, a deal that would cement the ChatGPT maker as one of the world’s most valuable startups.

    Investors potentially involved in the fundraising round have been included in preliminary discussions, according to the people, who asked not to be identified to discuss private matters. Details like the terms, valuation and timing of the funding round haven’t yet been finalized and could still change, the people said.

If the funding round happens as planned, it would make the artificial intelligence darling the second-most valuable startup in the US, behind only Elon Musk’s Space Exploration Technologies Corp., according to data from CB Insights.

    OpenAI declined to comment.

The company is set to complete a separate tender offer in early January, which would allow employees to sell their shares at a valuation of $86 billion, Bloomberg previously reported. That offer is being led by Thrive Capital and has drawn more demand from investors than there are shares available, people familiar with the matter have said.

    OpenAI’s rocketing valuation mirrors the AI frenzy it kicked off one year ago after releasing ChatGPT, a chatbot capable of composing eerily human sentences and even poetry in response to simple prompts. The company became Silicon Valley’s hottest startup, raising $13 billion to date from Microsoft Corp., and spurred a new appreciation for the promise of AI that changed the tech industry landscape within a few months.

Amazon.com Inc. and Alphabet Inc. have since poured billions into OpenAI rival Anthropic; Salesforce Inc. led an investment into Hugging Face that valued it at $4.5 billion; and Nvidia Corp., which makes many of the semiconductors that power AI tasks, said earlier this month it made more than two dozen investments in 2023.

    OpenAI has also held discussions to raise funding for a new chip venture with Abu Dhabi-based G42, according to people with knowledge of the matter.

    The startup has discussed raising between $8 billion and $10 billion from G42, said one of the people, all of whom requested anonymity to discuss confidential information. It’s unclear whether the chips venture and wider company funding efforts are related.

    OpenAI Chief Executive Officer Sam Altman had been seeking capital for the chipmaking project, code-named Tigris. The goal is to produce semiconductors that can compete with those from Nvidia, which currently dominates the AI chip market, Bloomberg News reported last month.

    In October, G42 announced a partnership with OpenAI “to deliver cutting-edge AI solutions to the UAE and regional markets.” No financial details were provided. The firm, founded in 2018, is led by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE’s national security adviser and chair of the Abu Dhabi Investment Authority.

    OpenAI’s future looked briefly uncertain after its board suddenly fired Altman earlier last month. At the time, some investors considered writing their stakes down to zero. But after five days of leadership tumult, Altman was brought back and a new board was named. The company has aimed to signal to customers that it’s refocusing on its products following the upheaval.

    — With assistance from Hannah Miller

    Gillian Tan, Edward Ludlow, Shirin Ghaffary, Bloomberg

  • Sam Altman hints at OpenAI boardroom drama in blog post stating need to fight bureaucracy ‘every time you see it’ and ‘get back up and keep going'

    OpenAI CEO Sam Altman has long dispensed advice to Silicon Valley entrepreneurs, having led the startup accelerator Y Combinator before his current role leading what is arguably the world’s most important artificial intelligence company. But with his latest round of advice, given in a post on his personal blog on Thursday, it’s hard not to think about the recent turmoil at OpenAI that saw Altman ousted then quickly reinstated as CEO last month, a boardroom drama that captivated Silicon Valley and much of the business and technology world.

OpenAI’s nonprofit board abruptly fired Altman last month, giving only vague reasons. The move blindsided, among others, Microsoft CEO Satya Nadella, whose company has invested billions into the ChatGPT maker. After five tumultuous days in which investors and employees rallied around Altman, it became clear that he would be reinstated as CEO and the board would be revamped.

    Earlier this month, Altman told Trevor Noah on the What Now? podcast about his short-lived but intense ordeal: “I’m still a little bit in shock and a little bit just trying to pick up the pieces. I’m sure as I have time to sit and process this I’ll have a lot more feelings about it.” 

    In the blog post yesterday, some of Altman’s pieces of advice were pretty standard (“Optimism, obsession, self-belief, raw horsepower, and personal connections are how things get started”) while others seemed like they might refer to last month’s chaos—and may be part of the processing that Altman mentioned to Noah.

Of course, this is only speculation, and Altman may not have been referring to the OpenAI chaos or even had it in mind when writing his post. Fortune has reached out to the company and will update this story with any response. But with knowledge of his ouster, it’s difficult to read some of the advice without thinking about recent events at the company, including, “Get back up and keep going.”

    In particular, the seventh item reads: “Fight bullshit and bureaucracy every time you see it, and get other people to fight it too. Do not let the org chart get in the way of people working productively together.”

Altman certainly fought against the decision to fire him as colleagues and investors rallied around him. Two days after his dismissal, Altman posted an image of himself wearing a guest badge in the OpenAI office, writing: “First and last time I ever wear one of these.” And just hours after the firing itself, fellow cofounder Greg Brockman had resigned in protest from his role as OpenAI president.

    Soon, OpenAI colleagues were repeating the line “OpenAI is nothing without its people” in X posts, with Altman responding with a heart emoji. That emoji became something of a rallying cry among employees in the days that followed.

    Another piece of advice from Altman: “Communicate clearly and concisely.” When they fired him, the board said Altman had not been “consistently candid in his communications,” so Altman might be referring here to his own communication to the board. Or reading it another way, perhaps he’s taking a dig at the board over their bureaucratic-sounding explanation of why they fired him in the first place.

    Steve Mollman

  • Design leaders are viewing their profession with a bit more humility: 'Not that many businesses are so fluid that they need constant reinvention'

    Are major companies taking the idea of design seriously? More companies are hiring top designers, with 36 of the top 100 Fortune 500 companies now having a chief design officer, compared to 18 in 2014.

    Yet recent history is littered with new products, redesigns, and other design-forward initiatives that failed to get any traction in the marketplace. And then there’s general ignorance: A recent survey from McKinsey found that only a third of CEOs and their direct reports could confidently state what their designers even do. As Fast Company’s Suzanne Labarre argued in October 2022, design is “no magical solution for transforming companies and conquering competitors.”

    The recognition that design may not offer an easy path to success pushed three design leaders last week at Fortune‘s Brainstorm Design conference in Macau to be more humble about what the practice can do.

    “This sort of disappointment in the design discipline has to do with…the notion that was sold for a solid 20-30 years that design was a process, as opposed to a product or an outcome or a thing you made at the end of the day,” said Cliff Kuang, author of User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play. Companies incorrectly hoped that having a design process, hiring consultants, and putting “all the people in the right room” would be enough to yield innovation.

    “Not that many businesses are so fluid that they need constant reinvention. Not every business is going to be one that actually needs to introduce new ideas to people on a constant basis,” Kuang said.

    Katrina Alcorn, the former general manager for design at IBM, dismissed the “magical thinking that you can just buy a bunch of designers, put them in a room and magic will happen.”

    “It doesn’t work that way. You have to create the conditions for design success, and that involves the entire company and it usually involves culture change and changing mindsets,” she said.

    Instead, a designer’s strength may be asking questions and connecting the dots, noted Ben Sheppard, partner at McKinsey Design in London.

    “Maybe our role is best supporting actor. Maybe our role is to be the glue working alongside our friends in data and product, in engineering and project management and finance, bringing it together,” he said.

    Yet AI will change the particular skills designers need to do their work. Kuang said the trove of data that these new technologies can generate means designers will have to change the way they approach a design challenge.

    “It’s just really hard, right? You just don’t know what the data is going to draw. You can’t know every single instance,” Kuang said. “That notion that you totally control the experience is one that designers are actually having to give up a little.”

    But Alcorn said she didn’t think AI will fundamentally change the role of the designer. “Designers have to be somewhat experts in people, and that’s not going to change. I think actually with AI, if anything, we’re going to have to understand ourselves better than ever,” she said.

    Subscribe to the new Fortune CEO Weekly Europe newsletter to get corner office insights on the biggest business stories in Europe. Sign up for free.

    Lionel Lim

  • AI emissions are fueling a new doomerism. This time it's climate change

    AI emissions are fueling a new doomerism. This time it's climate change

    There is a new doomer narrative about artificial intelligence emerging in the background at this year’s COP meeting. This one isn’t focused on a malignant superintelligence. Instead, it centers on sustainability and concerns about AI’s burgeoning energy demands.

    A recent study projects that by 2027, Nvidia’s new AI servers will be consuming over 85.4 terawatt-hours annually, exceeding the energy usage of countries such as Sweden and Argentina.

    Research from the University of Massachusetts Amherst suggesting that training a single AI model can emit over 284 tonnes of CO2, equivalent to the lifetime emissions of five average American cars, paints a concerning picture of AI’s environmental impact. Annually, AI’s carbon footprint is approaching 1% of global emissions.

    AI’s energy demands have indeed increased dramatically. A Stanford study flags a 300,000-fold rise in AI systems’ power requirements since the early 2010s. And some of this energy is derived from fossil fuels, with data centers globally consuming over 1% of global electricity, a third of which comes from coal and natural gas.

    However, what the doomers miss is the ingenuity of human research and industry. Analyzing IT’s electricity consumption back to the 2000s, Jonathan Koomey and colleagues found that the energy intensity of the global data center industry dropped by around 20% per year between 2010 and 2018. Efficiency gains in data centers, chips, and programming have outstripped the increase in energy use.

    This human factor suggests that while AI’s energy demands are growing, so too are the efficiencies in the systems that support it.

    AI software support

    Innovations in AI also contribute to this trend of increasing efficiency. Techniques like “gradient compression” in AI training, a method being driven forward at my own institution, are reducing the energy required for AI systems to share and process data as they learn, whilst simultaneously speeding up the process.
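    To make the mechanism concrete, here is a minimal sketch of top-k gradient sparsification, one common gradient-compression scheme (the piece does not specify the exact technique used at MBZUAI, and all names and numbers below are illustrative). Workers transmit only the largest-magnitude gradient entries instead of the full dense vector, cutting the data exchanged per training step:

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    The (indices, values) pair is far smaller than the dense gradient,
    so workers exchange less data per training step.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    """Rebuild a dense gradient: zeros everywhere except the kept entries."""
    dense = np.zeros(size)
    dense[idx] = vals
    return dense

# One worker's gradient, compressed before being shared with peers.
grad = np.array([0.1, -2.0, 0.03, 1.5, -0.2, 0.0])
idx, vals = topk_compress(grad, k=2)
restored = topk_decompress(idx, vals, grad.size)
```

    In practice, schemes like this typically also accumulate the dropped entries locally, so no gradient signal is permanently lost.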

    AI equipment management

    The impact of AI on energy efficiency extends beyond theoretical research. Google’s AI-driven approach to data center cooling has led to a reduction of about 40% in energy use, equivalent to taking 64,000 cars off the road annually.

    McKinsey’s analysis suggests that AI-enhanced manufacturing could reduce greenhouse gas emissions by 10-20%. Companies like Intel and GE Renewables are harnessing AI for significant CO2 savings.

    AI devices

    In the energy sector, the adoption of “grid edge” AI technologies–everything from smart thermostats to better-managed solar panels–could lead to substantial reductions in utility emissions by 2030.

    Furthermore, AI-powered carbon capture and storage technologies are projected to supercharge scalable and efficient solutions for carbon removal.

    The challenge lies in ensuring that the efficiency gains and emission reductions achieved through AI outpace its own resource consumption. This requires a concerted effort across technology, governance, and collaborative research.

    Industries must focus on developing smarter AI systems powered by renewable energy. Policymakers need to create frameworks that encourage innovation within environmentally responsible boundaries. Academic investments should target the exploration of AI in the realms of climate and clean energy.

    While AI presents significant sustainability challenges, it also offers groundbreaking solutions. With responsible leadership that balances the benefits of AI with its environmental externalities, AI can positively transform systems to accelerate global decarbonization.

    Striking the right balance is essential for AI to usher in an era of sustainable prosperity, moving beyond doomerism to a future where technology and environmental stewardship go hand in hand. The journey towards sustainable AI is not just about technological innovation but also about reimagining our relationship with technology in the context of our planet’s health. As we navigate this path, the decisions we make today will shape the sustainability of our digital future–and hopefully, that is something everyone at COP can agree on.

    Professor Eric Xing is the president of Mohamed bin Zayed University of Artificial Intelligence. Professor Adrian Monck is a senior advisor at MBZUAI.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    Eric Xing, Adrian Monck

  • Box CEO Aaron Levie’s top takeaway from OpenAI meltdown: ‘Don’t have weird corporate structures’

    Box CEO Aaron Levie’s top takeaway from OpenAI meltdown: ‘Don’t have weird corporate structures’

    When OpenAI’s board ousted CEO Sam Altman over a reported disagreement, claiming he was “not consistently candid,” it left many onlookers scratching their heads. How could such a thing happen at one of the buzziest startups in Silicon Valley? The surprise firing highlighted the bizarre corporate structure at the $86 billion startup, where the nonprofit controls the for-profit subsidiary. This structure has drawn criticism from plenty of tech figures, including Box CEO Aaron Levie. From tweets to the stage, Levie doubled down on his stance regarding OpenAI’s unorthodox structure at Fortune‘s Brainstorm AI conference in San Francisco on Monday.

    “If you just look at the ratio of the amount of drama to the amount of takeaways, the ratio is way off,” Levie said on stage. “The main takeaway is, don’t have weird corporate structures. It never ends well.”

    At the heart of the dispute at OpenAI was a reported clash of perspectives on the trajectory of artificial intelligence growth. On one side stood the effective altruist faction, to which former board member Helen Toner subscribed, which worries about a doomsday-like scenario where AI could destroy the world. On the opposing front are effective accelerationism (e/acc) enthusiasts, who believe in AI’s potential to positively transform our world and advocate for expedited development. It wasn’t that black and white internally, but that seems to be the layman’s gist of the dispute.

    Levie highlighted these two growing factions within Silicon Valley, and while he leans more toward acceleration, he said his biggest takeaway from the philosophies is that we need to “land the plane as an ecosystem on this topic ASAP.” There are “tens of thousands of products” that rely on OpenAI, giving rise to a community of companies whose own fortunes have become deeply entwined with the success of OpenAI.

    Take Khan Academy founder Salman Khan, who described earlier Monday at Brainstorm AI how his team had to reach out to “the highest levels of contacts” they had at Microsoft to make sure that they wouldn’t have an interruption of service as a result of the boardroom drama.

    Levie highlighted this dependence as a key reason why so much drama was kicked up, with so many figures rallying behind the success of OpenAI and Altman.

    “It was not your classic sort of leadership struggle or dynamic,” Levie said.

    Kylie Robison

  • Why tech will remain the economy's biggest growth engine

    Why tech will remain the economy's biggest growth engine

    Every year, Fortune publishes the Future 50, a ranking of the world’s largest public companies by their long-term growth prospects, co-developed with Boston Consulting Group (read more on the Future 50 and our methodology). In this series, we assess trends related to the future growth potential of businesses.

    A small fraction of all companies is responsible for the majority of wealth creation in the stock market over the long term: A recent study of 28,000 U.S. firms shows that almost all net shareholder value created between 1926 and 2022 was attributable to only 2% of the sample. 

    Leading the pack in terms of total value generated—over the entirety of the nearly 100 years studied—are digital technology players, specifically, the “MAMAA” companies (Meta, Amazon, Microsoft, Apple, and Alphabet), which now constitute more than a quarter of the value of the entire S&P 500. All five are currently among the 10 most valuable firms worldwide—with Nvidia and Tesla rounding out the stable of tech giants among the top 10. Across the Pacific in China, players like Tencent, Alibaba, and privately held ByteDance lead the valuation rankings.

    Stumbling blocks for the tech sector      

    Recently, however, the growth promise of the technology sector has seemed less certain. In China, the government launched a crackdown on its tech champions and their superstar CEOs, enhancing data privacy measures and increasing its antitrust vigilance. Now, the CCP is putting pressure on digital entertainment players by severely restricting internet usage for minors. In the U.S., increased public scrutiny over the impact of social media (which is alleged to cause depression and contribute to social polarization) is putting pressure on players like Meta, while Amazon finds itself facing a landmark monopoly case.

    Rising geopolitical tensions are also affecting tech players—from the Biden administration doubling down on export controls of advanced chip manufacturing equipment to China, to the much-discussed TikTok ban, or the recent calls to halt a partnership between Ford Motor Co. and Chinese battery manufacturer CATL. 

    Finally, there are the layoffs, now totaling over 400,000 workers through 2022 and 2023 (or roughly 4% to 5% of the total US tech sector workforce). While these are partially correcting for pandemic-era over-hiring, they also reflect a shift in investor focus from long-term promises to short-term payoffs, in reaction to increased interest rates that make riskier long-term investments less attractive. Higher rates have also contributed to the current venture capital “winter,” in which deal counts and values have fallen to 2020 levels and startup exits, as well as capital raised, are at multiyear lows.

    Given these significant headwinds, it is no wonder that Fortune’s ranking of the 100 Fastest-Growing Companies is no longer dominated by the tech industry. The top 10 are now firmly rooted in the physical realm, selling building materials or wires, refining steel, manufacturing cars, or drilling for oil. Only 17% of the included players are from the tech industry—roughly the same representation as, say, the energy sector—while the MAMAA companies are nowhere to be found.

    Does this indicate that tech is no longer the growth engine of the economy? Or, as Fortune CEO Alan Murray suggested, will the trend towards dematerialization and digital technologies continue?

    Tech evidence from the Future 50

    A look at the data suggests that technology will remain a key growth engine. The Future 50—an annual ranking, co-developed by Fortune and BCG, which assesses the long-term growth prospects of the world’s largest public companies—continues to be dominated by firms from the IT and communications sectors. Those sectors have consistently captured around half of the top 50 spots since the ranking’s inception in 2017. So far, the promise of growth potential of the Future 50 has consistently borne out, with all annual cohorts outperforming the S&P 500 as well as the S&P 500 Growth indices on revenue growth.

    How can we reconcile the headwinds the tech sector is currently experiencing with its high future potential?

    For one, there is a difference of time scales. In the short term, economic and geopolitical turbulence has created significant stumbling blocks. But in the long term, the invention and proliferation of new technologies will continue to drive improved standards of living, as it has throughout human history—and it will unlock growth and profits for the companies that provide these technological solutions. 

    Moreover, it is worth differentiating between technologies, which are not equal in terms of the growth potential they create—as a closer look at the 2023 Future 50 reveals. 

    On this year’s list, B2B-software providers, which are enabling the AI revolution, achieve particularly strong representation (e.g., cloud firms like No. 1 Snowflake or No. 6 Cloudflare, cybersecurity players like No. 2 Datadog or No. 3 Crowdstrike, and big data analysis firms like No. 18 Palantir). This is consistent with the valuation rally driven by generative AI that several tech giants experienced in 2023. Also well-represented among the Future 50 are cleantech players (e.g., EV manufacturers No. 5 Li Auto and No. 13 NIO, and solar panel as well as battery manufacturers such as No. 10 EVE Energy, No. 12 Sungrow, and No. 17 Suzhou Maxwell), as the global demand for sustainable technologies continues to rise.

    However, merely embracing the most-hyped technologies will not be sufficient for companies to achieve sustainable growth. So, how can companies turn technology into competitive advantage—and how can investors separate the wheat from the chaff? 

    Turning technology into advantage

    It is prudent to recall the AI boom of the 1980s, which was centered on “expert systems” that were meant to emulate human problem-solving in highly specialized domains, following rules defined by experts—for example, identifying compounds based on spectrometer readings. 

    In business, the most famous such system is XCON, deployed at computer manufacturer Digital Equipment Corporation to automatically select components based on customer requirements. It reportedly saved the firm around $25 million per year by reducing errors and enhancing the speed of the assembly process. As a result, corporations around the world began to develop their own expert systems, and a hardware industry sprang up around these investments. However, most companies were unable to identify use cases for their systems or found that the costs of upkeep were prohibitive. Consequently, more than 300 AI companies had shut down or been acquired by 1993, ending the first commercial wave of AI.

    To create value, new technologies need to be embedded into specific applications, and be accompanied by revolutions in operating and business models, which ultimately weave them into the wider social fabric. For example, a spark plug, in isolation, is not a revolutionary technology. Placed in an internal combustion engine that powers a car that is driven by a person, within a society that has roads, traffic laws, and a culture of automobile use, it reveals its revolutionary potential.

    Similarly, the MAMAA companies were able to turn the technology of the internet into mammoth valuations only by creating digital platforms and assembling an ecosystem of suppliers, contributors, as well as customers that used and benefited from them. This, in turn, required defining new ways of not just capturing, but sharing value among ecosystem participants, and developing new forms of leadership across the ecosystem that relied not on authority, but cooperation.

    Four prerequisites for advantage

    Our research has identified four prerequisites for how to apply technologies in a way that unlocks advantage—and the Future 50 companies demonstrate how to put them into practice.

    1: Identifying a specific application

    For one, an explicit thesis for how a new technological solution will create value for customers is required. For example, how can the technology be applied to help customers execute existing “jobs to be done” to a higher level of quality, or to make new valuable jobs feasible? 

    Many companies are now exploring the implications of generative AI for their business—with CEOs harboring a major fear of missing out—but most limit themselves to identifying potential efficiency improvements (e.g., enhancing the productivity of software developers). As the technology becomes more widely available, any advantages it enables in terms of operational efficiency will be erased.

    Recognizing this, Future 50 No. 2 Datadog is not content to create large language models (LLMs); rather, it is developing tools that allow its customers to monitor and optimize how their proprietary models perform.

    2: Defining a unique approach

    Moreover, companies need to deploy their technology in a way that is difficult to replicate. This is particularly crucial with technologies that are “born commoditized,” like LLMs, many of which are open-source.

    For example, No. 48 Spotify realized that the value proposition of a music streaming service would not only be the ability to instantly access songs listeners already know, but also the ability to discover new artists or albums they may enjoy. It developed “Discover Weekly,” a personalized playlist of recommended new music—predicting songs an individual may find appealing based on data collected from millions of users exploring Spotify’s catalogue. By facilitating customer exploration, Spotify has created a source of competitive advantage that depends on the size of its userbase and the power of its algorithms—which are more difficult for competitors to imitate than the breadth of its catalogue or the reliability and sound quality of its app.
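    The underlying principle can be illustrated with a toy co-occurrence recommender (this is not Spotify’s actual pipeline, which is proprietary and far more sophisticated; all listening histories and song names below are made up): songs that often appear together in other users’ histories are suggested to a listener who knows only some of them.

```python
from collections import Counter
from itertools import combinations

# Toy listening histories (hypothetical data).
histories = [
    {"song_a", "song_b", "song_c"},
    {"song_a", "song_b", "song_d"},
    {"song_b", "song_c"},
]

# Count how often each pair of songs appears in the same user's history.
cooccur = Counter()
for h in histories:
    for x, y in combinations(sorted(h), 2):
        cooccur[(x, y)] += 1
        cooccur[(y, x)] += 1

def recommend(user_history, n=1):
    """Score unheard songs by co-occurrence with songs the user knows."""
    scores = Counter()
    for known in user_history:
        for (a, b), count in cooccur.items():
            if a == known and b not in user_history:
                scores[b] += count
    return [song for song, _ in scores.most_common(n)]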

    3: Capturing and sharing the value

    Next, companies need a plan for monetizing their new offerings. For example, building a website in the late 1990s did not automatically translate into increased value generation (though not having a website could be disadvantageous). Recognizing this, companies like Alphabet’s Google are now racing to define how to monetize GenAI tools—as their current main revenue driver, advertising, seems to be less appropriate for use with chatbots than traditional web search.

    In a world dominated by business ecosystems, companies also need to ensure that their approach to value capture does not alienate other participants. For example, No. 9 DoorDash has defined a system in which value is provided to all players on its platform: Buyers gain convenience; merchants and payment providers unlock an additional revenue stream; and Dashers get access to a flexible work model.

    4: Renewing the advantage

    Finally, companies need to be able to renew their competitive advantage when others catch up. The MAMAA companies have all embraced this, evolving substantially over time by adopting new growth engines at critical junctures. Microsoft CEO Satya Nadella, for example, pivoted his firm’s software business from a product to a service model.

    The Future 50 also embody this virtue. No. 14 Snap has long been a pioneer in social media, with competitors like Meta copying several of its features over the years. In an ever more crowded space, Snap keeps exploring new avenues to monetize and expand its userbase: For example, it recently struck a partnership with Amazon Fashion, in which shoppers browsing eyewear products can use Snapchat’s augmented reality features to virtually try on glasses. 

    Similarly, No. 49 CATL, the largest global manufacturer of lithium-ion batteries, has started pivoting to sodium-ion batteries, which rely on more abundant materials and are cheaper to produce. The firm announced it would start mass production this year, with the new technology being included in production cars in China as of Q4.

    The importance of the operating model 

    Technology guru Andrew McAfee posits that underlying the remarkable performance of the Silicon Valley giants is not just that they are at the center of a technological revolution, but also that they are leading a revolution in how business is done—which he describes as the Geek Way.

    Our analysis confirms that the Future 50 tech players share several cultural and structural characteristics which heighten their growth potential and help them avoid a descent into bureaucracy. They invest heavily in R&D and, as a result, have larger and higher-quality patent portfolios; they have relatively youthful and stable leadership; they have leaner corporate structures; and they have a more pronounced long-term strategic orientation. For the Future 50, we assess that orientation with a natural language processing-based approach, weighing the frequency with which company leadership discusses short-term vs. long-term issues in official filings.
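    As a heavily simplified sketch of what such a frequency-weighing approach could look like (the actual lexicon and methodology behind the Future 50 measure are not public; the keyword lists below are purely illustrative):

```python
import re

# Hypothetical keyword lists; the real lexicon is not public.
SHORT_TERM = {"quarter", "quarterly", "near-term", "guidance"}
LONG_TERM = {"decade", "long-term", "vision", "transformation"}

def orientation_score(filing_text):
    """Ratio of long-term to total horizon-related words in a filing.

    Scores near 1.0 suggest a long-term strategic orientation,
    scores near 0.0 a short-term one; 0.5 means no signal either way.
    """
    words = re.findall(r"[a-z-]+", filing_text.lower())
    long_n = sum(w in LONG_TERM for w in words)
    short_n = sum(w in SHORT_TERM for w in words)
    total = long_n + short_n
    return long_n / total if total else 0.5
```

    A filing dominated by words like “vision” and “decade” would score near 1.0 under this toy measure, while one fixated on “quarterly guidance” would score near 0.0.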

    ***

    Despite significant short-term headwinds, technology is poised to remain the growth engine of the global economy. However, as AI and other technologies—like cleantech and synthetic biology—are set to change business and the world, investors should remain prudent: Embracing these technologies will not be sufficient for companies to gain an advantage—rather, unlocking sustainable growth will require identifying an application of these technologies that solves a valuable problem, a unique deployment towards this end, a way to capture and share the value that is created, and a capacity for continuous renewal.

    Martin Reeves, Adam Job

  • IBM, Meta and more than 50 others launch alliance to challenge dominant AI players

    IBM, Meta and more than 50 others launch alliance to challenge dominant AI players

    Discussions around the leading voices of the AI zeitgeist have not often included the old hands of computing like IBM, Intel, Sony Group, or Dell.

    But on Tuesday, the four corporations—along with the younger Meta, a host of top universities, as well as a collection of tech startups and foundations—announced an “AI Alliance” in an apparent attempt to challenge the perceived dominance of OpenAI, Microsoft, Google, and recently Amazon.

    “To some degree, but unfortunately, to a large degree, the last year of conversation and dialogue around AI has been focused on a very small number of institutions,” Darío Gil, a senior vice president at IBM and head of the corporation’s research lab, told Fortune. “The reality is that this field is much, much larger than that.”

    When asked who he was referring to when he said a “very small number of institutions,” Gil declined to specify: “You know who.”

    ‘Open’ AI

    The formation of the AI Alliance continues a longstanding debate among developers about the values of the “open” and “closed” development of artificial intelligence.

    Despite its name, OpenAI, the creator of ChatGPT, has kept its models, or mammoth AI algorithms, under lock and key. Developers can access them only with permission from OpenAI, which counts Microsoft as its biggest backer. Google, another AI frontrunner, as well as Amazon, which recently unveiled its answer to ChatGPT and invested in buzzy AI startup Anthropic, have also not open-sourced, or let researchers fully download, their models. All of these tech giants have cited competition and safety as reasons for locking up their technology.

    This tight-fistedness has led to consternation in the research community and among competing businesses. (In fact, competitors watched with glee as OpenAI’s corporate leadership fell into disarray in November.) “There’s been a lot of debate about: Should the future of AI be closed and proprietary? Or what is the role of open source, open science, and open innovation in the field?” Gil, the IBM executive, said.

    The AI Alliance falls into the latter camp. The group of over 50 has coalesced around a number of broad objectives, including the creation of common frameworks for evaluating the strength of AI algorithms, devotion of capital to AI research funds, and collaboration on open-source models.

    In addition to the corporate giants, other participants include the chip manufacturers AMD and Cerebras, AI startups like Hugging Face and Stability AI, and Ivy League universities like Yale, Cornell, and Dartmouth. 

    As an example, Gil pointed to IBM’s work with NASA on a recently open-sourced AI model trained on geospatial data, which he says can help track deforestation or predict crop yields. He also said IBM has committed approximately $100 million to universities to support AI research projects over the next five years, and that the computing titan has worked with Meta to build out an open-source toolkit for AI development.

    As for governance, Gil said that the alliance is still working out the details. The focus so far has been on building out a coalition and hashing out the organization’s objectives. Next steps include the formation of “technical working groups” for the more than 50 participants as well as the design of a governance structure that may lead to an external nonprofit.

    Ben Weiss

  • CoreWeave backed by Fidelity and Jane Street at $7 billion valuation as cloud provider bolsters status as one of AI's hottest startups

    CoreWeave backed by Fidelity and Jane Street at $7 billion valuation as cloud provider bolsters status as one of AI's hottest startups

    CoreWeave, a cloud computing provider that’s among the hottest startups in the artificial intelligence race, said it closed a minority stake sale to investors led by Fidelity Management & Research Co.

    Investment Management Corp. of Ontario, Jane Street, JPMorgan Asset Management, Nat Friedman, Daniel Gross, Goanna Capital and Zoom Ventures also participated in the deal, CoreWeave said, confirming an earlier Bloomberg News report. The transaction values the company at $7 billion, said people with knowledge of the matter, asking not to be identified discussing confidential information.

    “Our explosive growth trajectory has been recognized by top-tier institutional investors, and this transaction highlights the differentiation our market-leading performance, significant technology advantage, and strong customer adoption is receiving in the market,” Michael Intrator, co-founder and CEO of CoreWeave, said in an emailed statement.

    The AI industry is at an inflection point, he added, noting that the company is playing a central role by providing “the most differentiated” AI infrastructure to customers.

    The Roseland, New Jersey-based company earlier this year said it secured a $2.3 billion debt financing facility led by Magnetar Capital and Blackstone that also featured Coatue, DigitalBridge Credit, and affiliates of BlackRock, PIMCO, and Carlyle.

    CoreWeave, which counts Nvidia Corp. as an investor, was an early adopter of Nvidia’s graphics chips for data centers, getting ahead of a wave of demand for powerful processors to run artificial intelligence applications. It’s building out data centers based on Nvidia’s chips to offer AI-related computing.

    Morgan Stanley advised CoreWeave on its minority stake sale.

    Gillian Tan, Bloomberg

  • ‘We cannot let China get these chips’: Commerce Secretary Raimondo says more funding needed for AI export controls

    ‘We cannot let China get these chips’: Commerce Secretary Raimondo says more funding needed for AI export controls

    US Commerce Secretary Gina Raimondo said her department needs more money to stop China from catching up on cutting-edge semiconductors.

    “We cannot let China get these chips. Period,” she said at the Reagan National Defense Forum in Simi Valley, California, on Saturday. “We’re going to deny them our most cutting-edge technology.”

    To do that, Raimondo said the Commerce Department’s Bureau of Industry and Security, which manages export controls for the US, needs more funding from Congress.

    “I have a $200 million budget. That’s like the cost of a few fighter jets. Come on,” she said. “If we’re serious, let’s go fund this operation like it needs to be funded.”

    Raimondo said American companies will need to adapt to US national security priorities, including export controls that her department has placed on semiconductor exports.

    “I know there are CEOs of chip companies in this audience who were a little cranky with me when I did that because you’re losing revenue,” she said. “Such is life. Protecting our national security matters more than short-term revenue.”

    Raimondo called out Nvidia Corp., which designed chips specifically for the Chinese market after the US imposed its initial round of curbs in October 2022.

    “If you redesign a chip around a particular cut line that enables them to do AI, I’m going to control it the very next day,” Raimondo said.

    The Commerce Department updated the semiconductor curbs this fall to capture Nvidia’s made-for-China chips — and the company responded by designing three new AI components for the Asian country.

    Communication with China can help stabilize ties between the two countries, but “on matters of national security, we’ve got to be eyes wide open about the threat,” she said.

    “This is the biggest threat we’ve ever had and we need to meet the moment,” she said.

    — With assistance from Mackenzie Hawkins

      Peter Martin, Bloomberg

    1. AI-generated nude images of teen girls spur families to push for protections: ‘We’re fighting for our children’

      A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates were circulated at a high school in New Jersey.

Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students – also teen girls – who attend a high school in suburban Seattle, Washington.

      The disturbing cases have put a spotlight yet again on explicit AI-generated material that overwhelmingly harms women and children and is booming online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press, more than 143,000 new deepfake videos were posted online this year, which surpasses every other year combined.

      Desperate for solutions, affected families are pushing lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites that openly advertise their services. Advocates and some legal experts are also calling for federal regulation that can provide uniform protections across the country and send a strong message to current and would-be perpetrators.

      “We’re fighting for our children,” said Dorota Mani, whose daughter was one of the victims in Westfield, a New Jersey suburb outside of New York City. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”

      The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce it becomes more available and easier to use. Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters. In June, the FBI warned it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.

      A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — either jail time, a fine or both — on those who spread it.

      State Sen. Kristin Corrado, a Republican who introduced the legislation earlier this year, said she decided to get involved after reading an article about people trying to evade revenge porn laws by using their former partner’s image to generate deepfake porn.

      “We just had a feeling that an incident was going to happen,” Corrado said.

      The bill has languished for a few months, but there’s a good chance it might pass, she said, especially with the spotlight that’s been put on the issue because of Westfield.

      The Westfield event took place this summer and was brought to the attention of the high school on Oct. 20, Westfield High School spokesperson Mary Ann McGann said in a statement. McGann did not provide details on how the AI-generated images were spread, but Mani, the mother of one of the girls, said she received a call from the school informing her nude pictures were created using the faces of some female students and then circulated among a group of friends on the social media app Snapchat.

      The school hasn’t confirmed any disciplinary actions, citing confidentiality on matters involving students. Westfield police and the Union County Prosecutor’s office, who were both notified, did not reply to requests for comment.

      Details haven’t emerged about the incident in Washington state, which happened in October and is under investigation by police. Paula Schwan, the chief of the Issaquah Police Department, said they have obtained multiple search warrants and noted the information they have might be “subject to change” as the probe continues. When reached for comment, the Issaquah School District said it could not discuss the specifics because of the investigation, but said any form of bullying, harassment, or mistreatment among students is “entirely unacceptable.”

      If officials move to prosecute the incident in New Jersey, current state law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University who leads Cyber Civil Rights Initiative, an organization aiming to combat online abuses. But those protections don’t extend to adults who might find themselves in a similar scenario, she said.

      The best fix, Franks said, would come from a federal law that can provide consistent protections nationwide and penalize dubious organizations profiting from products and apps that easily allow anyone to make deepfakes. She said that might also send a strong signal to minors who might create images of other kids impulsively.

      President Joe Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance to label and watermark AI-generated content to help differentiate between authentic and material made by software.

      Citing the Westfield incident, U.S. Rep. Tom Kean, Jr., a Republican who represents the town, introduced a bill on Monday that would require developers to put disclosures on AI-generated content. Among other efforts, another federal bill introduced by U.S. Rep. Joe Morelle, a New York Democrat, would make it illegal to share deepfake porn images online. But it hasn’t advanced for months due to congressional gridlock.

      Some argue for caution — including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works for trade groups representing publishers, movie studios and others — saying that careful consideration is needed to avoid proposals that may run afoul of the First Amendment.

      “Some concerns about abusive deepfakes can be addressed under existing cyber harassment” laws, said Joe Johnson, an attorney for ACLU of New Jersey. “Whether federal or state, there must be substantial conversation and stakeholder input to ensure any bill is not overbroad and addresses the stated problem.”

      Mani said her daughter has created a website and set up a charity aiming to help AI victims. The two have also been in talks with state lawmakers pushing the New Jersey bill and are planning a trip to Washington to advocate for more protections.

      “Not every child, boy or girl, will have the support system to deal with this issue,” Mani said. “And they might not see the light at the end of the tunnel.”

      __

      AP reporters Geoff Mulvihill and Matt O’Brien contributed from Cherry Hill, New Jersey and Providence, Rhode Island.

      Haleluya Hadero, The Associated Press

    2. Elon Musk warns ‘something scared’ OpenAI chief scientist Ilya Sutskever as CEO Sam Altman’s return fails to answer key questions

      Elon Musk played a big role in persuading Ilya Sutskever to join OpenAI as chief scientist in 2015. Now the Tesla CEO wants to know what he saw there that scared him so much.

      Sutskever, whom Musk recently described as a “good human” with a “good heart”—and the “linchpin for OpenAI being successful”—served on the OpenAI board that fired CEO Sam Altman two Fridays ago; indeed, Sutskever informed Altman of his dismissal. Since then, however, the board has been revamped and Altman reinstated, with investors led by Microsoft pushing for the changes.

      Sutskever himself backtracked on Monday, writing on X, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI.” 

      But Musk and other tech elites—including ones who mocked the board for firing Altman—are still curious about what Sutskever saw. 

      Late on Thursday, venture capitalist Marc Andreessen, who has ridiculed “doomers” who fear AI’s threat to humanity, posted to X, “Seriously though — what did Ilya see?” Musk replied a few hours later, “Yeah! Something scared Ilya enough to want to fire Sam. What was it?”

That remains a mystery. The board gave only vague reasons for firing Altman. Not much has been revealed since.

      ‘Such drastic action’

      OpenAI’s mission is to develop artificial general intelligence (AGI) and ensure it “benefits all of humanity.” AGI refers to a system that can match humans when faced with an unfamiliar task. 

      OpenAI’s unusual corporate structure put a nonprofit board higher than the capped-profit company, allowing the board to fire the CEO if, for instance, it felt the commercialization of potentially dangerous AI capabilities was moving at an unsafe speed.

Early on Thursday, Reuters reported that several OpenAI researchers had warned the board in a letter of a new AI that could threaten humanity. OpenAI, after being contacted by Reuters, then wrote an internal email acknowledging a project called Q* (pronounced Q-Star), which some staffers felt might be a breakthrough in the company’s AGI quest. Q* reportedly can ace basic mathematical tests, suggesting an ability to reason, as opposed to ChatGPT’s more predictive behavior.

Musk has long warned of the potential dangers to humanity from artificial intelligence, though he also sees its upsides and now offers a ChatGPT rival called Grok through his startup xAI. He cofounded OpenAI in 2015 and helped lure key talent including Sutskever, but he left a few years later on a sour note. He later complained that the onetime nonprofit—which he had hoped would serve as a counterweight to Google’s AI dominance—had instead become a “closed source, maximum-profit company effectively controlled by Microsoft.”

      Last weekend, he weighed in on the OpenAI board’s decision to fire Altman, writing: “Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action.” 

      When an X user suggested there might be a “bombshell variable” unknown to the public, Musk replied, “Exactly.”

      Sutskever, after his backtracking on Monday, responded to the return of Altman by writing on Wednesday, “There exists no sentence in any language that conveys how happy I am.”  

      Steve Mollman

    3. OpenAI’s ‘unusual’ board can make unilateral decisions without asking permission from anyone—like deep-pocketed backer Microsoft and Satya Nadella

      OpenAI’s dramatic decision to fire its CEO Sam Altman on Friday, and the days-long power struggle that followed, was only possible thanks to the unusual power held by its directors. The ChatGPT developer’s extraordinarily powerful board doesn’t answer to shareholders or an ownership group, but instead to none other than all of mankind. “Our primary fiduciary duty is to humanity,” OpenAI’s charter reads. 

      Microsoft CEO Satya Nadella, who’s taken a central role in negotiating OpenAI and Sam Altman’s future, now wants governance changes at the pioneering AI startup. “Surprises are bad,” he told Bloomberg during an interview Monday evening. 

      The surprise, in this case, is OpenAI’s firing of Altman on Friday, without informing Nadella until a minute before it went public, reports Axios. OpenAI’s unique corporate structure doesn’t give deep-pocketed backers like Microsoft, which has invested $13 billion in the AI developer, seats on its board.

      OpenAI’s board “can essentially take decisions unilaterally” without conferring with investors, says Karen Brenner, executive director of law and business initiatives at NYU’s Stern School of Business.  

      Nadella, who has in the meantime committed to hiring Altman at Microsoft, says he plans to remain in business with OpenAI but will now push for changes to its board structure.

In normal for-profit entities, investors usually have some ability to influence strategy, whether through governance rights or board seats. Not at OpenAI. “It’s unusual that when you form an entity to pursue a strategy, which requires an unusual amount of capital, that the people who provide the capital wouldn’t have some degree of voice or control or oversight of the capital that they provide,” Brenner says. 

      Why is OpenAI’s board so powerful?

      OpenAI’s unique board structure comes from its founding as a nonprofit. In 2015, Altman, Greg Brockman, and current board member Ilya Sutskever, alongside other partners including Tesla CEO Elon Musk, started OpenAI as an AI research lab. By 2019, OpenAI’s leadership realized it would need to raise money—and likely huge sums of it—to fund its research. To make that possible, OpenAI created a capped for-profit subsidiary. 

      A capped for-profit entity is already unusual. Companies are rarely in the habit of preemptively limiting their profits. But as a division of a nonprofit, whose goal is to “ensure [artificial intelligence] is used for the benefit of all,” OpenAI decided it didn’t want investors to have an unfettered profit motive. 

      “Part of the objective was to limit the financial upside potential and also keep close control over the social implications of this technology,” Brenner says. 

But OpenAI’s massive success may be this strange structure’s undoing. The technologies the for-profit arm developed were so advanced that it eventually attracted the multibillion-dollar investments from Microsoft and the Silicon Valley VCs who poured money into OpenAI. As it became more successful, investors and executives alike wanted to capitalize on the commercial opportunity of their work, according to Vasant Dhar, a data science professor and AI researcher at NYU’s Stern School of Business. 

“OpenAI has just been a victim of its own success,” Dhar says. “I don’t know whether they really expected to be this far along so quickly—but they are.”

      OpenAI’s board wields such power within the company because it answers to no one and isn’t bound by a fiduciary duty to help shareholders get a return on their investment. Even other big name investors, including top venture capital firms like Sequoia Capital, a16z, and Tiger Global don’t have a say in the company’s decision making. 

These VCs, like Microsoft, aren’t used to being bystanders in their investments and may start to exert more influence through other channels. They could try to exert private or public pressure, as a16z founder Marc Andreessen did by tweeting cryptic messages. Investors could pull future funding commitments, although that would depend on the terms of each of their original deals. And Microsoft has an even bigger trump card: withholding access to the computing resources that power OpenAI’s tech. 

      “Usually the people with the money have a lot to say,” Brenner says. At OpenAI “they don’t technically have a lot to say in terms of the governance structure, but they have a lot to say because they provide the capital.” 

      Can OpenAI’s investors do anything?

      OpenAI’s board removed Altman after alleging that he was not “consistently candid” with his communications, without providing details. Board chair and OpenAI president Greg Brockman wasn’t aware the meeting to fire Altman was going to take place, according to a post on X. Even that is unusual in its own right, as board chairs usually dictate when and where board meetings will happen. In fact, Brockman was removed from the board by his fellow directors shortly after Altman was fired. He promptly quit upon hearing the news.

      Yet the outcry around the firing then led to days of tense negotiation, as OpenAI’s board tried to figure out how to bring Altman and Brockman back into the organization. Newly appointed interim CEO Mira Murati pushed to rehire the two in different roles, according to Bloomberg. Instead, the board made another surprising decision by hiring yet another interim CEO to replace Murati: Twitch founder Emmett Shear.

      The board now faces a full mutiny from its employees. More than 700 of OpenAI’s roughly 750 employees have signed a letter stating they will quit if the board does not resign and reinstate Altman and Brockman. 

      The New York Times reports that Sutskever was concerned that Altman was moving too quickly to bring tech to market, without considering the risks. He has since changed his mind, throwing his support behind Altman’s return. 

Because OpenAI’s investors don’t have a say in its governance, they have limited recourse to remove board members, which they would have been able to do in a more traditional structure. Normally, if a board takes decisions that shareholders deem ineffective, its members can get voted out of their roles. In OpenAI’s case this isn’t permitted, strengthening the board’s hand. 

      The board can even take an unpopular decision, like it did in firing Altman, that risks a wholesale defection from hundreds of employees. Ordinarily, a board with a fiduciary responsibility to shareholders wouldn’t make a decision that could risk such a brain drain. If “the talent pool walks out the door or is fired, then it calls the whole enterprise into question,” Brenner says. “That’s going to leave lots of questions going forward. Where does technology reside? And what can the executives who end up leaving the company do in another configuration?”

      OpenAI’s investors are unlikely to be happy with such a major talent exodus. The board “basically handed their IP to Microsoft on a platter,” Dhar says. 

      To Bloomberg, Nadella said Microsoft would welcome any former OpenAI employees. “Anyone else who is at OpenAI and wants to go somewhere else, we want them to come to Microsoft,” he said. 

      Paolo Confino

    4. Sam Altman returned to OpenAI HQ and could be reinstated as CEO soon. Elon Musk says ‘the public should be informed’ why he was fired in the first place

      It’s been a tumultuous weekend for OpenAI and anyone who follows the field of artificial intelligence. After the OpenAI board fired CEO Sam Altman on Friday, investors who’d been taken off guard by the move raced to reinstate him. 

      On Sunday afternoon, Altman was back in the OpenAI headquarters, Bloomberg reported, and the decision to reinstate him could be made shortly. Altman shared a photo of himself on X wearing a guest badge and making a face, writing, “first and last time i ever wear one of these.”

      But even if he is reinstated, questions remain about why the board fired him in the first place. The board gave only vague reasons on Friday.

      Among those wanting to know is Tesla CEO Elon Musk, who wrote on X: “Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action.”

      When an X user said it felt as if there were a “bombshell variable” the public was unaware of behind Altman’s firing, Musk replied, “Exactly.”

      And AI expert Gary Marcus worried that it did not bode well that the OpenAI board—presumably in control of the capped-profit company and with an eye on the nonprofit mission—was apparently overpowered as investors raced to get Altman back into his role.

      OpenAI Chief Operating Officer Brad Lightcap told Bloomberg, “we have had multiple conversations with the board to try to better understand the reasons and process behind their decision,” which took him and others at the company by surprise.

      Eric Newcomer, who hosts a technology podcast, wrote in his newsletter that Altman should not be given too much power.

      “The public should not want, nor should the OpenAI board give Altman unbridled power to run OpenAI as he pleases,” he wrote. “Altman has a history of fractious corporate breakups…These board members are not the first people to question Altman’s integrity. They’ve just done so in public.”  

He mentioned among others the power struggle with Musk, who was an OpenAI cofounder in 2015 and helped attract key talent, but left a few years later on a sour note. Musk later complained that the onetime nonprofit, which he had meant to serve as a counterweight to Google, had become a “closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.” 

      Musk was key to recruiting Ilya Sutskever, the chief scientist at the center of OpenAI’s leadership shakeup. Sutskever is on the current board and is the one who informed Altman of his dismissal, according to Greg Brockman, who quit as president in protest of Altman’s firing.

      The board agreed to step down, Bloomberg reported, and was vetting new candidates to serve as directors.

      How the board might be reshaped remains to be seen. According to sources Bloomberg spoke with, among the new members will be Bret Taylor, the former co-CEO of Salesforce.

      Another possibility is a board seat going to Microsoft, whose CEO Satya Nadella was reportedly blindsided by the decision to fire Altman.

      The software giant has committed at least $13 billion to OpenAI since 2019 but only delivered some of that. It’s questionable whether OpenAI could continue operating without the continual cash infusions and computing power provided by Microsoft, which means Microsoft wields considerable power with or without a presence on the board.

      Steve Mollman

    5. Race to reinstate Sam Altman as OpenAI CEO reaches impasse over new role and makeup of the board that ousted him

      A group of OpenAI executives and investors racing to get Sam Altman reinstated to his role as chief executive officer have reached an impasse over the makeup and role of the board, according to people familiar with the negotiations. The decision to restore Altman’s role as CEO could come quickly, though talks are fluid and still ongoing. 

      At midday Sunday, Altman and former President Greg Brockman were in the startup’s headquarters, according to people familiar with the matter.

      OpenAI leaders pushing for the board to resign and to reinstate Altman include Interim CEO Mira Murati, Chief Strategy Officer Jason Kwon and Chief Operating Officer Brad Lightcap, according to a person with knowledge of the discussions. 

Altman, who was fired Friday, is open to returning but wants to see governance changes — including the removal of existing board members, said the people, who asked not to be identified because the negotiations are private. After facing intense pressure over its decision to fire Altman on Friday, the board agreed in principle to step down but has so far refused to do so officially. The directors have been vetting candidates to replace them. 

      At the center of the high-stakes negotiations between the executives, investors and the board is Microsoft Corp. CEO Satya Nadella. Nadella has been leading the charge on talks between the different factions, some of the people said. Microsoft is OpenAI’s biggest investor, with $13 billion invested in the company. 

      Bret Taylor, the former co-CEO of Salesforce Inc., will be on the new board, several people said. Another possible addition is an executive from Redmond, Washington-based Microsoft — but it’s unclear whether the software giant would take a board seat despite its large investment, some of the people said. 

      The chaos began on Friday, when the directors led by OpenAI Chief Scientist Ilya Sutskever dismissed Altman, saying “he was not consistently candid in his communications with the board.” In a memo to staff Saturday, Lightcap said the decision to fire the CEO “was not made in response to malfeasance” or the company’s financial or safety practices.

Altman’s ousting “took us all by surprise,” Lightcap said in the memo, adding that “we have had multiple conversations with the board to try to better understand the reasons and process behind their decision.” 

      One longstanding issue that has divided the company was Altman’s drive to turn OpenAI, which got its start as a nonprofit organization, into a successful business — and how quickly he wanted the company to crank out products and sign up customers. That ran headlong into board member concerns over the safety of artificial intelligence tools capable of generating text, images and even computer code with minimal prompting.

      Altman is keeping his options open, according to people familiar with his thinking, and is interested in returning to OpenAI, starting a new company or both. 

      Emily Chang, Edward Ludlow, Rachel Metz, Dina Bass, Bloomberg

    6. What Sam Altman said about AI at a CEO summit the day before OpenAI ousted him as CEO

      Sam Altman is out as CEO of OpenAI after a “boardroom coup” on Friday that shook the tech industry. Some are likening his ouster to Steve Jobs being fired at Apple, a sign of how momentous the shakeup feels amid an AI boom that has rejuvenated Silicon Valley.

      Altman, of course, had much to do with that boom, caused by OpenAI’s release of ChatGPT to the public late last year. Since then, he’s crisscrossed the globe talking to world leaders about the promise and perils of artificial intelligence. Indeed, for many he’s become the face of AI. 

      Where exactly things go from here remains uncertain. In the latest twists, some reports suggest Altman could return to OpenAI and others suggest he’s already planning a new startup. 

But either way, his ouster feels momentous, and, given that, his last appearance as OpenAI’s CEO merits attention. It occurred on Thursday at the APEC CEO summit in San Francisco. The beleaguered city, where OpenAI is based, hosted the Asia-Pacific Economic Cooperation summit this week, having first cleared away embarrassing encampments of homeless people (though it still suffered embarrassment when robbers stole a Czech news crew’s equipment).

      Altman answered questions onstage from, somewhat ironically, moderator Laurene Powell Jobs, the billionaire widow of the late Apple cofounder. She asked Altman how policymakers can strike the right balance between regulating AI companies while also being open to evolving as the technology itself evolves.

      Altman started by noting that he’d had dinner this summer with historian and author Yuval Noah Harari, who has issued stark warnings about the dangers of artificial intelligence to democracies, even suggesting tech executives should face 20 years in jail for letting AI bots sneakily pass as humans. 

      The Sapiens author, Altman said, “was very concerned, and I understand it. I really do understand why if you have not been closely tracking the field, it feels like things just went vertical…I think a lot of the world has collectively gone through a lurch this year to catch up.”

      He noted that people can now talk to ChatGPT, saying it’s “like the Star Trek computer I was always promised.” The first time people use such products, he said, “it feels much more like a creature than a tool,” but eventually they get used to it and see its limitations (as some embarrassed lawyers have). 

He said that while AI holds the potential to do wonderful things like curing diseases on the one hand, on the other, “How do we make sure it is a tool that has proper safeguards as it gets really powerful?” 

      Today’s AI tools, he said, are “not that powerful,” but “people are smart and they see where it’s going. And even though we can’t quite intuit exponentials well as a species much, we can tell when something’s gonna keep going, and this is going to keep going.” 

      The questions, he said, are what limits on the technology will be put in place, who will decide those, and how they’ll be enforced internationally. 

      Grappling with those questions “has been a significant chunk of my time over the last year,” he noted, adding, “I really think the world is going to rise to the occasion and everybody wants to do the right thing.”

      Today’s technology, he said, doesn’t need heavy regulation. “But at some point—when the model can do like the equivalent output of a whole company and then a whole country and then the whole world—maybe we do want some collective global supervision of that and some collective decision-making.”

      For now, Altman said, it’s hard to “land that message” and not appear to be suggesting policymakers should ignore present harms. He also doesn’t want to suggest that regulators should go after AI startups or open-source models, or bless AI leaders like OpenAI with “regulatory capture.” 

      “We are saying, you know, ‘Trust us, this is going to get really powerful and really scary. You’ve got to regulate it later’—very difficult needle to thread through all of that.”

      Steve Mollman


    7. What Elon Musk has said about Ilya Sutskever, the chief scientist at the center of OpenAI’s leadership upheaval 


      OpenAI just underwent an abrupt, dramatic leadership shakeup. A key figure at the center of the turmoil is also a big reason that Tesla CEO Elon Musk is no longer friends with Google cofounder and former CEO Larry Page. 

      On Friday, OpenAI announced that cofounder and CEO Sam Altman had been fired by the board of directors, and that Mira Murati, the chief technology officer, would serve as interim CEO. The maker of the AI chatbot ChatGPT claimed that Altman was “not consistently candid” with the board, without providing details.

      It also said that another cofounder, chairman Greg Brockman, would be removed from that role while staying at the company. But Brockman then indicated that he would quit.

      That meant that only one member of the core founding group behind OpenAI remained: Ilya Sutskever, the company’s chief scientist. A Russian-born Israeli-Canadian, Sutskever is a leading expert in deep learning, a subset of machine learning. He’s also on OpenAI’s board.

      [Photo: Ilya Sutskever, Russian-born Israeli-Canadian computer scientist and cofounder and chief scientist of OpenAI, speaks at Tel Aviv University on June 5. Credit: JACK GUEZ/AFP via Getty Images]

      “Last night, Sam got a text from Ilya asking to talk at noon Friday,” Brockman wrote on X late Friday. “Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.”

      Central to the shakeup was the issue of AI safety, according to anonymous sources who spoke to Bloomberg, with Altman and Sutskever disagreeing on how quickly to commercialize generative AI products and the steps needed to reduce possible public harm.

      Musk’s tussle with Google over Sutskever

      Musk has a history with both OpenAI, which he played a key role in starting, and with Sutskever, whom he persuaded to join OpenAI as a cofounder and chief scientist in 2015, rather than stay at Google. On a Nov. 9 episode of the Lex Fridman Podcast, Musk described how pivotal Sutskever was to the success of OpenAI. 

      In 2015, Musk said, he worked hard to recruit Sutskever to OpenAI, which he then envisioned as an open-source nonprofit that would act as a counterweight to Google’s power in artificial intelligence. Musk has long warned of the potential dangers of AI. 

      Meanwhile Demis Hassabis, cofounder and CEO of DeepMind, which Google acquired in 2014, was trying to persuade Sutskever that Google was the best place for him.  

      “It was mostly Demis on one side and me on the other, both trying to recruit Ilya, and Ilya went back and forth,” said Musk. “Finally he did agree to join OpenAI. That was one of the toughest recruiting battles I’ve ever had, but that was really the linchpin for OpenAI being successful.”

      Musk described himself as the “prime mover behind OpenAI, in the sense that it was created because of discussions that I had with [Google cofounder] Larry Page back when he and I were friends.” 

      He described staying at Page’s house and talking to him about AI safety.

      “Larry did not care about AI safety, or at least at the time he didn’t,” Musk said. “At one point he called me a speciesist for being pro-human. And I’m like, ‘Well, what team are you on Larry?’”

      Musk said that what concerned him was that Google had acquired DeepMind and had “probably two-thirds of all the AI researchers in the world. They had basically infinite money and compute, and the guy in charge, Larry Page, did not care about safety.” 

      When Fridman suggested Musk and Page might become friends again, Musk replied, “I’d like to be friends with Larry again. Really the breaking of the friendship was over OpenAI, and specifically I think the key moment was recruiting Ilya Sutskever.” 

      Musk called Sutskever “a good human—smart, good heart.” 

      Disappointment with OpenAI

      Musk left OpenAI’s board in 2018 after a power struggle. In the years since, he’s expressed disgust with its direction under Altman, especially after OpenAI accepted billions in investments from Microsoft and moved away from its nonprofit status.  

      Altman has called Musk a “jerk” but also recently acknowledged his role in OpenAI’s founding. 

      “Elon was definitely a talent magnet and attention magnet, for sure, and also just like has some real superpowers that were super helpful to us in those early days,” he said on the In Good Company podcast in September.

      Musk, for his part, tweeted earlier this year, “OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.” 


      Steve Mollman
