ReportWire

Tag: generative ai

  • What does Sam Altman’s firing — and quick reinstatement — mean for the future of AI?

    NEW YORK — It’s been quite a week for ChatGPT-maker OpenAI — and co-founder Sam Altman.

    Altman, who helped start OpenAI as a nonprofit research lab back in 2015, was removed as CEO Friday in a sudden and mostly unexplained exit that stunned the industry. And while his chief executive title was swiftly reinstated just days later, a lot of questions are still up in the air.

    If you’re just catching up on the OpenAI saga and what’s at stake for the artificial intelligence space as a whole, you’ve come to the right place. Here’s a rundown of what you need to know.

    Altman is co-founder of OpenAI, the San Francisco-based company behind ChatGPT (yes, the chatbot that’s seemingly everywhere today — from schools to health care).

    The explosion of ChatGPT since its arrival one year ago propelled Altman into the spotlight of the rapid commercialization of generative AI — which can produce novel imagery, passages of text and other media. And as he became Silicon Valley’s most sought-after voice on the promise and potential dangers of this technology, Altman helped transform OpenAI into a world-renowned startup.

    But his position at OpenAI hit rocky turns in the whirlwind of the past week. Altman was fired as CEO Friday — and days later, he was back on the job with a new board of directors.

    Within that time, Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology, helped drive Altman’s return, quickly hiring him as well as another OpenAI co-founder and former president, Greg Brockman, who quit in protest after the CEO’s ousting. Meanwhile, hundreds of OpenAI employees threatened to resign.

    Both Altman and Brockman celebrated their returns to the company in posts on X, the platform formerly known as Twitter, early Wednesday.

    There’s a lot that remains unknown about Altman’s initial ousting. Friday’s announcement said he was “not consistently candid in his communications” with the then-board of directors, which refused to provide more specific details.

    Regardless, the news sent shockwaves throughout the AI world — and, because OpenAI and Altman are such leading players in this space, may raise trust concerns around a burgeoning technology that many people still have questions about.

    “The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute focusing on human oversight of artificial intelligence.

    The turmoil also accentuated the differences between Altman and members of the company’s previous board, who have expressed various views on the safety risks posed by AI as the technology advances.

    Multiple experts add that this drama highlights that governments, not big tech companies, should be calling the shots on AI regulation, particularly for fast-evolving technologies like generative AI.

    “The events of the last few days have not only jeopardized OpenAI’s attempt to introduce more ethical corporate governance in the management of their company, but it also shows that corporate governance alone, even when well-intended, can easily end up cannibalized by other corporate’s dynamics and interests,” said Enza Iannopollo, principal analyst at Forrester.

    The lesson, Iannopollo said, is that companies alone can’t deliver the level of safety and trust in AI that society needs. “Rules and guardrails, designed with companies and enforced by regulators with rigor, are crucial if we are to benefit from AI,” she added.

    Unlike traditional AI, which processes data and completes tasks using predetermined rules, generative AI (including chatbots like ChatGPT) can create something new.
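
    To make that distinction concrete, here is a toy Python sketch (purely illustrative; real systems like ChatGPT use large neural networks, not a lookup table or bigram counts): a rule-based program can only return answers it was explicitly given, while even a tiny "generative" model samples sequences it never stored verbatim.

```python
import random

# "Traditional" rule-based AI: output is limited to predetermined responses.
RULES = {"hello": "Hi there!", "bye": "Goodbye!"}

def rule_based_reply(text: str) -> str:
    return RULES.get(text.lower(), "Sorry, I don't understand.")

# Toy "generative" model: bigram statistics learned from a scrap of text,
# then sampled to produce word sequences the program never stored whole.
corpus = "the board fired the ceo and the ceo returned to the company".split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start: str, length: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        words.append(rng.choice(bigrams.get(words[-1], ["the"])))
    return " ".join(words)

print(rule_based_reply("hello"))  # always the same canned answer
print(generate("the"))            # a freshly sampled sequence
```

    The gap between this toy and ChatGPT is enormous, but the contrast holds: the first function can only echo its rules, while the second composes output from learned statistics.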

    Tech companies are still leading the show when it comes to governing AI and its risks, while governments around the world work to catch up.

    In the European Union, negotiators are putting the final touches on what’s expected to be the world’s first comprehensive AI regulations. But they’ve reportedly been bogged down over whether and how to include the most contentious and revolutionary AI products: the commercialized large language models that underpin generative AI systems including ChatGPT.

    Chatbots were barely mentioned when Brussels first laid out its initial draft legislation in 2021, which focused on AI with specific uses. But officials have been racing to figure out how to incorporate these systems, also known as foundation models, into the final version.

    Meanwhile, in the U.S., President Joe Biden signed an ambitious executive order last month seeking to balance the needs of cutting-edge technology companies with national security and consumer rights.

    The order — which will likely need to be augmented by congressional action — is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. It seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

  • OpenAI brings back Sam Altman as CEO just days after his firing unleashed chaos

    The ousted leader of ChatGPT maker OpenAI is returning to the company that fired him just days ago, culminating a short but chaotic power struggle that shocked the tech industry and underscored the conflicts around how to safely build artificial intelligence.

    And OpenAI co-founder Sam Altman will answer to a different board of directors than the one that fired him Friday. The San Francisco-based company said late Tuesday that it “reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”

    It will be led by former Salesforce co-CEO Bret Taylor, who chaired Twitter’s board before Elon Musk took over the platform last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.

    OpenAI’s previous board, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.

    The turmoil also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.

    “The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute focusing on human oversight of artificial intelligence.

    Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal.

    That emboldened a threat to resign by hundreds of OpenAI employees, who signed a letter calling for the board’s resignation and Altman’s return. The number of names added up to nearly all of the startup’s 770-plus workers. The AP could not independently confirm that all of the signatures were from OpenAI employees.

    One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.

    Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant.

    Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was open to the possibility of Altman returning to OpenAI as long as the startup’s governance problems were solved.

    “We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”

    In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership” with Microsoft.

    The leadership drama offers a glimpse into how big tech companies are taking the lead in governing AI and its risks, while governments scramble to catch up.

    In the absence of regulations, with the European Union now working to finalize what’s expected to be the world’s first comprehensive AI rules, “companies decide how a technology is rolled out,” Laux said.

    That might be OK if you believe the risks aren’t worth government involvement, but “do we believe that for AI?” he said.

    “Regulation and corporate governance sound very technocratic, but in the end, it’s humans making decisions,” Laux said.

    Their beliefs and preferences about how safe AI is “have a huge influence,” he said, and that’s why it matters so much who’s on a company’s board or has a seat at the table at regulatory bodies.

    Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business — but one still run by its nonprofit board of directors.

    It’s not clear yet if the board’s structure will change with its new members.

    “We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”

    Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, also will have a key role to play in ensuring the group “continues to thrive and build on its mission.”

    Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.

    “Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winking at recent turmoil.

    “It’s been a long night for the team, and we’re hungry. How many 16-inch pizzas should I order for 778 people?” the person handling the demonstration asks, using the number of employees at OpenAI.

    ChatGPT’s synthetic voice responded by recommending around 195 pizzas, ensuring everyone gets three slices.
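
    The chatbot’s pizza math checks out under a common assumption of 12 slices per 16-inch pizza (the slice count is an assumption; the demo only specifies the headcount and three slices per person):

```python
import math

people = 778            # OpenAI headcount cited in the demo
slices_per_person = 3
slices_per_pizza = 12   # assumed for a 16-inch pie; not stated in the demo

pizzas = math.ceil(people * slices_per_person / slices_per_pizza)
print(pizzas)  # -> 195
```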

    As for OpenAI’s short-lived interim CEO Emmett Shear, the second temporary leader in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”

    “Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”

    The AP and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

    ___

    AP Business Writer Kelvin Chan contributed from London.

  • OpenAI says ousted CEO Sam Altman to return to company behind ChatGPT

    The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.

    San Francisco-based OpenAI said in a statement late Tuesday: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”

    The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.

    OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.

    The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.

    Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal. That emboldened a threatened exodus of nearly all of the startup’s 770 employees who signed a letter calling for the board’s resignation and Altman’s return.

    One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.

    Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.

    “We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”

    In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”

    Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.

    “We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”

    Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”

    Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.

    “Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winking at recent turmoil.

    “It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people?” the person asks, using the number of people who work at OpenAI. ChatGPT’s synthetic voice responded by recommending around 195 pizzas, ensuring everyone gets three slices.

    As for OpenAI’s short-lived interim CEO Emmett Shear, the second temporary leader in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”

    “Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”

  • Nvidia’s revenue triples as AI chip boom continues

    Nvidia shares moved down 1% in extended trading on Tuesday after the chipmaker reported fiscal third-quarter results that surpassed Wall Street’s predictions. But the company warned of a negative impact in the next quarter from export restrictions affecting sales to organizations in China and other countries.

    “We expect that our sales to these destinations will decline significantly in the fourth quarter of fiscal 2024, though we believe the decline will be more than offset by strong growth in other regions,” Nvidia’s finance chief, Colette Kress, said in a letter to shareholders.

    On a conference call with analysts, Kress said Nvidia is working with some clients in the Middle East and China to obtain U.S. government licenses for sales of high-performance products. Nvidia is trying to develop new data center products that comply with government policies and don’t require licenses, but Kress said she didn’t think they would be meaningful in the fiscal fourth quarter.

    Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

    • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
    • Revenue: $18.12 billion, vs. $16.18 billion expected

    Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.

    The company’s data center revenue totaled $14.51 billion, up 279% and more than the StreetAccount consensus of $12.97 billion. Half of the data center revenue came from cloud infrastructure providers such as Amazon, and the other half from consumer internet entities and large companies, Nvidia said.

    Healthy uptake came from clouds that specialize in renting out GPUs to clients, Kress said on the call.

    The gaming segment contributed $2.86 billion, up 81% and higher than the $2.68 billion StreetAccount consensus.

    With respect to guidance, Nvidia called for $20 billion in revenue for the fiscal fourth quarter. That implies nearly 231% revenue growth.
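
    As a sanity check, the year-ago quarter implied by those two figures can be backed out directly (a quick sketch; the roughly $6 billion result is derived from the article’s own numbers, not stated in it):

```python
guidance = 20e9        # Nvidia's fiscal fourth-quarter revenue guidance
implied_growth = 2.31  # "nearly 231%" year-over-year growth

# Back out the implied year-ago quarterly revenue.
year_ago = guidance / (1 + implied_growth)
print(round(year_ago / 1e9, 2))  # -> 6.04 (billions of dollars)
```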

    During the quarter, Nvidia announced the GH200 GPU, which has more memory than the current H100 and an additional Arm processor onboard. The H100 is expensive and in demand. Nvidia said Australia-based Iris Energy, an owner of bitcoin mining data centers, was buying 248 H100s for $10 million, which works out to about $40,000 each.
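
    The per-GPU figure in that deal follows from simple division, a quick check of the article’s arithmetic:

```python
deal_total = 10_000_000  # reported price of Iris Energy's order
units = 248              # number of H100 GPUs

price_per_gpu = deal_total / units
print(round(price_per_gpu))  # -> 40323, i.e. about $40,000 each
```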

    Computing instances based on the GH GPUs are coming soon to Oracle’s cloud, Kress said on the call.

    As recently as two years ago, sales of GPUs for playing video games on PCs were the largest source of Nvidia’s revenue. Now the company gets most revenue from deployments inside server farms.

    The introduction of the ChatGPT chatbot from Microsoft-backed startup OpenAI in 2022 caused many companies to look for ways to add similar generative artificial intelligence capabilities to their software. Demand for Nvidia’s GPUs strengthened as a result.

    Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday’s report, some analysts were nevertheless optimistic.

    “GPU demand continues to outpace supply as Gen AI adoption broadens across industry verticals,” Raymond James’ Srini Pajjuri and Jacob Silverman wrote in a note Monday to clients, with a “strong buy” recommendation on Nvidia stock. “We are not overly concerned about competition and expect NVDA to maintain >85% share in Gen AI accelerators even in 2024.”

    Nvidia is still working on its plan to grow supply throughout next year, Kress said on the call.

    Excluding the after-hours move, Nvidia stock has gone up 241% so far this year, vastly outperforming the S&P 500 index, which is up 18% over the same period.

  • What you need to know about Emmett Shear, OpenAI’s new interim CEO

    OpenAI is bringing in the former head of Twitch as interim CEO just days after the company pushed out its well-known leader Sam Altman, sparking upheaval in the AI world.

    Emmett Shear announced his new role Monday morning in a post on X, formerly known as Twitter, while also acknowledging “the process and communications” around Altman’s firing on Friday was “handled very badly” and damaged trust in the artificial intelligence company.

    When it abruptly fired Altman, OpenAI said an internal review found the 38-year-old was “not consistently candid in his communications” with the board of directors. The company did not provide more details, leaving industry analysts and tech watchers reading tea leaves in an effort to figure out what happened.

    Meanwhile, Microsoft, which has invested billions in the AI company, said Monday it’s bringing in Altman and former OpenAI President Greg Brockman – who quit in protest following Altman’s ouster – to lead the tech giant’s new advanced AI research team.

At OpenAI, Shear has promised to shed some light on Altman's departure. In his X post, he pledged to hire an independent investigator to look into what led up to Altman's ouster and write a report within 30 days.

Shear, 40, is a co-founder of the Amazon-owned streaming platform Twitch, a social media site that's mostly known for gaming.

    Twitch was originally part of the streaming video site Justin.tv, which was founded by Shear and three other tech entrepreneurs in 2006. The focus shifted toward gaming in 2011, a move that turned the platform into a growing phenomenon and birthed a plethora of well-known streamers. Three years later, Amazon purchased the company for approximately $970 million in cash.

Twitch doesn't garner as much media attention as other social media companies, but it has drawn scrutiny on two occasions in the past few years when mass shootings in Buffalo, N.Y., and Germany were livestreamed on its platform.

Shear left the company in March, a departure he attributed to the birth of his now 9-month-old son.

After leaving Twitch, Shear became a visiting partner at Y Combinator, a startup incubator that launched Airbnb, DoorDash and Dropbox. Altman and Shear know each other from Y Combinator's original batch of startups; Altman also previously served as the incubator's president.

    In his LinkedIn profile, Shear says he’s been “starting, growing, and running companies since college” and doesn’t “plan to turn back any time soon.” He graduated from Yale University in 2005 with a bachelor’s degree in computer science.

OpenAI had initially named its chief technology officer, Mira Murati, as interim CEO on Friday. But she appeared to be among the signatories of a letter that began circulating early Monday — signed by hundreds of other OpenAI employees — calling for the board's resignation and Altman's return.

    The AP was not able to independently confirm that all of the signatures were from OpenAI employees. A spokesperson at OpenAI confirmed that the board has received the letter, which also said the board had replaced Murati against the best interest of the company.

In his post on X, Shear wrote that he received a call offering him a "once-in-a-lifetime opportunity" to become interim CEO at OpenAI. He said the company's board "shared the situation" with him and asked him to take the role. He quickly agreed.

    “I took this job because I believe that OpenAI is one of the most important companies currently in existence,” he wrote.

    Shear said he spent most of Sunday “drinking from the firehose as much as possible,” speaking to the board, employees and a small number of OpenAI’s partners.

Investors, for their part, are trying to stabilize the situation. Microsoft CEO Satya Nadella weighed in with a post on X early Monday morning, saying he was looking "forward to getting to know" the new management team at OpenAI and was "extremely excited" to bring on Altman and Brockman.

    In his post on X, Shear said he checked the reasoning behind the changes at OpenAI before he took the job.

"The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that," he wrote.

    “I’m not crazy enough to take this job without board support for commercializing our awesome models,” he said, referring to the company’s popular AI tools like ChatGPT and the image generator DALL-E.

    “I have nothing but respect for what Sam and the entire OpenAI team have built,” he said. “It’s not just an incredible research project and software product, but an incredible company. I’m here because I know that, and I want to do everything in my power to protect it and grow it further.”

    Shear said he wants to accomplish three things within the next 30 days.

    In addition to hiring an independent investigator who will “generate a full report” about what happened, Shear said he wants to continue talking to stakeholders and reform the company’s management and leadership teams in light of recent departures.

    After that, he said he “will drive changes in the organization — up to and including pushing strongly for significant governance changes if necessary.”

"OpenAI's stability and success are too important to allow turmoil to disrupt them like this," he said.

    On a podcast in June, Shear said he’s generally optimistic about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans. As an engineer, he said his approach would be to build AI systems at a small and gradual scale.

    “If there is a world where we survive … where we build an AI that’s smarter than humans and survive it, it’s going to be because we built smaller AIs than that, and we actually had as many smart people as we can working on that, and taking the problem seriously,” Shear said in June.

    Asked by an X user on Monday what his stance was on AI safety, Shear replied: “It’s important.”

    __

    AP reporter Matt O’Brien contributed to this report from Providence, Rhode Island.

  • ‘Damage control’: Tech industry reacts to a chaotic weekend for OpenAI and Microsoft

    ‘Damage control’: Tech industry reacts to a chaotic weekend for OpenAI and Microsoft

OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella.

    Hayden Field | CNBC

    The past few days have been chaotic for the AI industry, with technology experts weighing what this could mean for the nascent sector and some of its key players.

OpenAI, the company behind ChatGPT, which launched artificial intelligence into the mainstream late last year, said Friday that it was removing its CEO Sam Altman and making its technology chief Mira Murati interim chief executive in his place.

    But before the weekend was even over, OpenAI appeared to change course, announcing that former Twitch chief Emmett Shear would take over from Altman instead, at least on a temporary basis.

    Meanwhile, Altman himself has already found a fresh role leading a new advanced AI research team at Microsoft, where he will be joined by former OpenAI Board Chair Greg Brockman and several other employees.

    But Altman’s move could simply be a case of “damage control” for Microsoft, according to Richard Windsor, founder of digital research company Radio Free Mobile. This is linked to Microsoft’s immense investments in OpenAI, he said Monday on CNBC’s “Street Signs Europe.”

    Microsoft did not immediately respond to CNBC’s request for comment on the statement.

    Microsoft began investing in OpenAI as early as 2019, initially with around $1 billion. That figure has ballooned since to an amount reported to be closer to $13 billion. Microsoft has also integrated OpenAI’s technologies in products like search engine Bing and various other software.

    “A large amount of that value is tied up in the founders and in the engineers that are inside the company,” Windsor said.

    Rishi Jaluria, managing director for software equity research at RBC Capital Markets, told CNBC’s “Street Signs Asia” on Monday that Altman aligns with Microsoft’s AI vision.

"The vision that Sam Altman has is kind of the vision Microsoft wants," including commercializing and "having responsible AI but not handcuffing AI," he said.

    Meanwhile, other tech experts have been backing Microsoft CEO Satya Nadella‘s swift move to hire Altman in-house.

    The four-person board at OpenAI “was at the kids poker table and thought they won until Nadella and Microsoft took this all over in a World Series of Poker move for the ages with the Valley and Wall Street watching with white knuckles Sunday night/Monday early am,” Wedbush Securities tech analyst Dan Ives wrote in a note published Monday.

    “We view Microsoft now even in a STRONGER position from an AI perspective with Altman and Brockman at MSFT running AI,” he added.

    Aaron Levie, CEO of cloud sharing and management company Box, said via X, formerly known as Twitter, that it was “incredible execution by Satya in one of the most dynamic situations in tech history.”

    Aviral Bhatnagar, an investor at Venture Highway, had a similar view.

    “You now understand why Satya Nadella is one of the greatest tech CEOs of this generation,” he said in a post on X.

    “Kept Altman in the fold, kept the transition as neat as possible, managed the chaos and the wild board decision making, didn’t destroy OpenAI. What a boss move.”

    OpenAI’s future

Windsor suggested that more OpenAI employees may soon follow Altman to Microsoft, which he said could have detrimental consequences for OpenAI. That could even include OpenAI tech chief Murati, who has been crucial in developing the company's products, he noted.

    “If she goes off with Sam and the others to join Microsoft, what’s left of OpenAI? Arguably not much,” Windsor said.

Several OpenAI employees have also shared comments on X, often stressing that its people are crucial to the company.

    The relationship between OpenAI and Microsoft could also shift due to the developments, Jaluria said.

    “The OpenAI relationship is absolutely critical to Microsoft and I think a lot of us were surprised that even after all the investment, Microsoft did not have a board seat. And I wouldn’t be surprised if coming out of this, Microsoft wants to have more of a say in this and control more of the destiny because absolutely their fortunes in AI are tied to OpenAI,” he explained.

    “I do think that there are going to be some changes coming out of this, but ultimately Microsoft and OpenAI will be very important partners going forward,” he added.

    ‘Handled very badly’

    The chaotic developments have also been criticized by Shear himself, the new interim CEO of OpenAI.

    “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” he said in a post on X, in which he also confirmed he would step in as interim CEO.

    Shear suggested he would launch an investigation to examine the process that led to the recent events and produce a report on them within his first thirty days at OpenAI.

This has been echoed by experts, including Windsor, who said the situation could severely damage OpenAI's reputation and undermine public confidence in it.

Meanwhile, Wedbush Securities' Ives called the weekend's developments a "circus clown show" and described them as a "coup attempt" that elevated Shear to interim CEO "in a move that will forever be viewed as a tainted move by OpenAI that caused chaos internally and externally."

Elsewhere, Nathan Benaich, general partner of Air Street Capital, added that the events showed "that no one is immune from the laws of corporate physics" and that "one bad decision" can have immense consequences.

    “Considering Sam’s centrality to OpenAI’s vision and the personal loyalty he commands, this is the most baffling decision from an AI lab I’ve ever witnessed,” he said.


  • Microsoft hires Sam Altman as OpenAI’s new CEO vows to investigate his firing

    Microsoft hires Sam Altman as OpenAI’s new CEO vows to investigate his firing

    LONDON — The new head of ChatGPT maker OpenAI said Monday that he would launch an investigation into the firing of co-founder Sam Altman, a shakeup that shocked the artificial intelligence world and led to Microsoft snapping up the ousted CEO for a new AI venture.

    The developments come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.

    It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

    Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella tweeted that they are committed to their partnership.

    Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.

    In reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”

    OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

    In a post Monday on X, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

    “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

    He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.

    After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety.”

    OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    An OpenAI spokeswoman didn’t immediately reply to an email Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.

After Altman was pushed out Friday, he stirred speculation in a series of tweets that he might be coming back into the fold. He posted a photo of himself with an OpenAI guest pass on Sunday, saying it was the "first and last time i ever wear one of these."

    Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

    It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among the employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

    Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”

    “Ultimately I felt that I had a duty to help if I could,” he tweeted.

    Altman had helped catapult ChatGPT to global fame and in the past year has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.

    He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.

    Altman posted Friday on X that “i loved my time at openai” and later called what happened a “weird experience.”

    “If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.

    Microsoft is now in an even stronger position on AI, Ives said.

    Shares of Microsoft Corp. rose nearly 2% before the opening bell and were nearing an all-time high Monday.

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

    ___

    AP writer Brian P. D. Hannon contributed from Bangkok.

  • ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

    “Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

    Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

    The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

    “We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

    WHAT’S AT STAKE?

    Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

    But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

    That’s one question the Copyright Office has put to the public. A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

    More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by Dec. 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

    WHAT ARE ARTISTS SAYING?

    Addressing the “Ladies and Gentlemen of the US Copyright Office,” the “Family Ties” actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

    It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

    Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (“Poker Face”) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “coopted by greedy and craven companies who want to take human talent out of entertainment.”

    The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s penned tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

    While most commenters were individuals, their concerns were echoed by big music publishers (Universal Music Group called the way AI is trained “ravenous and poorly controlled”) as well as author groups and news organizations including the New York Times and The Associated Press.

    IS IT FAIR USE?

    What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

    “The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though he allowed some of the case to proceed.

    Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

    But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

    Perlmutter said this is what the Copyright Office is trying to help sort out.

    “Certainly this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

  • ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

    ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

ChatGPT-maker OpenAI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was "not consistently candid in his communications" with the board of directors.

    “The board no longer has confidence in his ability to continue leading OpenAI,” the artificial intelligence company said in a statement.

In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley's most sought-after voice on the promise and potential dangers of artificial intelligence, and his sudden and mostly unexplained exit brought uncertainty to the industry's future.

    Mira Murati, OpenAI’s chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

    The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board’s chairman, would step down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman posted a message he sent to OpenAI employees in which he wrote, “based on today’s news, i quit.”

    In another X post on Friday night, Brockman said Altman was asked to join a video meeting at noon Friday with the company’s board members, minus Brockman, during which OpenAI co-founder and Chief Scientist Ilya Sutskever informed Altman he was being fired.

    “Sam and I are shocked and saddened by what the board did today,” Brockman wrote, adding that he was informed of his removal from the board in a separate call with Sutskever a short time later.

    OpenAI declined to answer questions on what Altman’s alleged lack of candor was about. The statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    Altman posted Friday on X: “i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later.”

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

    Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT’s explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

    He’s sat with multiple heads of state to discuss AI’s potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

    He predicted AI will prove to be “the greatest leap forward of any of the big technological revolutions we’ve had so far.” He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

    Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

    The company said its board consists of OpenAI’s chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

    OpenAI’s key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won’t affect its relationship.

    “We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers,” said an emailed Microsoft statement.

    While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to lead the startup incubator Y Combinator.

    “Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself,” read Y Combinator co-founder Paul Graham’s 2014 announcement that Altman would become its president. Graham said at the time that Altman was “one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent.”

    OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a “potential future conflict for Elon” due to Tesla’s work on building self-driving systems.

    While OpenAI’s board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

    At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

    Altman’s exit “is indeed shocking as he has been the face of” generative AI technology, said Gartner analyst Arun Chandrasekaran.

    He said OpenAI still has a “deep bench of technical leaders” but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

    Forrester analyst Rowan Curran speculated that Altman’s departure, “while sudden,” did not likely reflect deeper business problems.

    “This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business,” Curran said.

    Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

    Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that’s been scanning people’s eyeballs with the goal of creating a vast digital identity and financial network.

    ___

    Associated Press business writers Haleluya Hadero in New York, Kelvin Chan in London and Michael Liedtke and David Hamilton in San Francisco contributed to this report.


  • OpenAI’s Sam Altman exits as CEO because ‘board no longer has confidence’ in his ability to lead


    Sam Altman, Chief Executive Officer of OpenAI, and Mira Murati, Chief Technology Officer of OpenAI, speak during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023. 

    Patrick T. Fallon | Afp | Getty Images

    OpenAI’s board of directors said Friday that Sam Altman will step down as CEO and will be replaced on an interim basis by technology chief Mira Murati.

    The company said it conducted “a deliberative review process” and “concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

    “The board no longer has confidence in his ability to continue leading OpenAI,” the statement said.

    OpenAI’s board includes chief scientist Ilya Sutskever and independent directors such as Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology. OpenAI says the board of its 501(c)(3) is the “overall governing body for all OpenAI activities.”

    The board also said that Greg Brockman, OpenAI’s president, “will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.”

    Sam Altman acknowledged that he was leaving OpenAI in a post on X on Friday, but did not mention any accusations by the firm’s board that he failed to be candid during unspecified reviews. He said he loved working at the company and that he would talk more about “what’s next later.”

    Regarding the appointment of Murati, OpenAI said, “As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

    OpenAI, which has raised billions of dollars from Microsoft and ranked first on CNBC’s Disruptor 50 list this year, jumped into the mainstream in late 2022 after releasing its AI chatbot ChatGPT to the public. The service went viral by allowing users to convert simple text into creative conversation and has pushed big tech companies such as Alphabet and Meta to step up their investments in generative AI.

    Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event on November 06, 2023 in San Francisco, California. Altman delivered the keynote address at the first ever Open AI DevDay conference. 

    Justin Sullivan | Getty Images

    Microsoft shares slipped after the announcement, closing the day down 1.7% at $369.84.

    A Microsoft spokesperson said in a statement that the company has “a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers.”

    In a post on X, Microsoft CEO Satya Nadella commented about his company’s “long-term agreement with OpenAI,” explaining that it would “remain committed to our partnership, and to Mira and the team.” Nadella did not address Altman’s departure.

    Brockman also shared a post on X that included the message he sent to his former OpenAI colleagues, informing them that he “quit” after he learned about “today’s news.”

    Later in the evening, Brockman said in an X post that both he and Altman were “shocked and saddened by what the board did today.”

    Sutskever convened a virtual meeting with Altman that the rest of the OpenAI board attended, except Brockman, the now-former OpenAI president claimed in the X post. It was during this meeting that Sutskever allegedly fired Altman, telling him “that the news was going out very soon,” Brockman wrote.

    Less than half an hour later, Brockman claimed to have received a text message from Sutskever summoning him to another virtual meeting. At this meeting, Brockman said, he learned of Altman’s firing and that he was being removed from OpenAI’s board, but was assured he was “vital to the company and would retain his role.”

    “As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior,” Brockman said in the post, which was quickly followed by a separate message from Altman.

    Altman said in an X post that “today was a weird experience in many ways. but one unexpected one is that it has been sorta like reading your own eulogy while you’re still alive.”

    “if i start going off, the openai board should go after me for the full value of my shares,” Altman said in another X post.

    OpenAI debuted in 2015 as a nonprofit and employed Sutskever as research director and Brockman as chief technology officer. The firm’s original investors included Silicon Valley luminaries such as Altman, LinkedIn co-founder Reid Hoffman and Tesla CEO Elon Musk, who reportedly committed $1 billion to the project.

    Before taking over as CEO, Altman, 38, was president of startup accelerator Y Combinator and gained prominence in Silicon Valley as an early-stage investor. Earlier in his career, he started the social networking company Loopt.

    As OpenAI’s popularity grew this year alongside ChatGPT, so too did Altman’s profile. He became an ambassador of sorts, representing the ballooning AI industry across the globe.

    Altman’s big year as OpenAI’s CEO

    In September, Indonesia awarded Altman its so-called “Golden Visa,” providing him with 10 years’ worth of various travel accommodations and perks intended to help the country attract more foreign investors.

    Altman visited several Asia-Pacific countries over the summer including Singapore, India, China, South Korea and Japan, meeting with government leaders and officials and giving public speeches on the rise of AI and the need for regulations.

    The technologist testified before the U.S. Senate in May, calling on lawmakers to regulate AI and citing the technology’s potential to negatively affect the job market, the information ecosystem and other parts of society and the economy.

    “I think if this technology goes wrong, it can go quite wrong,” Altman said at the time. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

    In a prelude to his Senate testimony, Altman also spoke at a dinner with roughly 60 lawmakers, who were reportedly wowed by his speech and demonstrations.

    OpenAI CEO Sam Altman testifies at an oversight hearing by the Senate Judiciary’s Subcommittee on Privacy, Technology, and the Law to examine A.I., focusing on rules for artificial intelligence, in Washington, DC on May 16, 2023. 

    Nathan Posner | Anadolu Agency | Getty Images

    “It’s not easy to keep members of Congress rapt for close to two hours,” said Rep. Ted Lieu, D-Calif., vice chair of the House Democratic Caucus, who co-hosted the dinner with GOP Conference Vice Chair Mike Johnson, R-La., now House speaker. “So Sam Altman was very informative and provided a lot of information.”

    More recently, Altman spoke this week at the Asia-Pacific Economic Cooperation conference in San Francisco, along with various technology executives and world leaders including U.S. President Joe Biden and Chinese President Xi Jinping.

    OpenAI held its first developer conference in early November, underscoring the startup’s rising popularity in the technology industry. Microsoft CEO Satya Nadella made a surprise guest appearance during the event, joining Altman on stage to discuss the startup’s AI technologies and its partnership with Microsoft.

    Altman didn’t immediately respond to a request for more information.

    — CNBC’s Lora Kolodny contributed to this report.



  • ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company


    ChatGPT-maker OpenAI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was “not consistently candid in his communications” with the board of directors.

    “The board no longer has confidence in his ability to continue leading OpenAI,” the artificial intelligence company said in a statement.

    In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence, and his sudden and mostly unexplained exit brought uncertainty to the industry’s future.

    Mira Murati, OpenAI’s chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

    The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board’s chairman, would be stepping down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman wrote, “based on today’s news, i quit.”

    OpenAI declined to answer questions on what Altman’s alleged lack of candor was about. The statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    Altman posted Friday on X: “i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later.”

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

    Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT’s explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

    He’s sat with multiple heads of state to discuss AI’s potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

    He predicted AI will prove to be “the greatest leap forward of any of the big technological revolutions we’ve had so far.” He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

    Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

    The company said its board consists of OpenAI’s chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

    OpenAI’s key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won’t affect its relationship.

    “We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers,” said an emailed Microsoft statement.

    While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to lead the startup incubator Y Combinator.

    “Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself,” read Y Combinator co-founder Paul Graham’s 2014 announcement that Altman would become its president. Graham said at the time that Altman was “one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent.”

    OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a “potential future conflict for Elon” due to Tesla’s work on building self-driving systems.

    While OpenAI’s board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

    At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

    Altman’s exit “is indeed shocking as he has been the face of” generative AI technology, said Gartner analyst Arun Chandrasekaran.

    He said OpenAI still has a “deep bench of technical leaders” but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

    Forrester analyst Rowan Curran speculated that Altman’s departure, “while sudden,” did not likely reflect deeper business problems.

    “This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business,” Curran said.

    Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

    Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that’s been scanning people’s eyeballs with the goal of creating a vast digital identity and financial network.

    ___

    Associated Press business writers Haleluya Hadero in New York, Kelvin Chan in London, and Michael Liedtke and David Hamilton in San Francisco contributed to this report.


  • OpenAI CEO Sam Altman steps down as board loses confidence in his leadership


    OpenAI said Friday that Sam Altman is no longer its chief executive, with the ChatGPT parent adding that Altman had not been “consistently candid in his communications with the board.”

    “The board no longer has confidence in his ability to continue leading OpenAI,” the company said in a blog post.

    In a tweet Friday, Altman said he “will…


  • Robotics Q&A: CMU’s Matthew Johnson-Roberson | TechCrunch


    Johnson-Roberson is one of those double threats who offers insight from two different — and important — perspectives. In addition to his long academic career, which most recently found him working as a professor at the University of Michigan College of Engineering, he also has a solid startup CV.

    Johnson-Roberson also co-founded robotic last-mile delivery startup Refraction AI, where he serves as CTO.

    What role(s) will generative AI play in the future of robotics?

    Generative AI, through its ability to generate novel data and solutions, will significantly bolster the capabilities of robots. It could enable them to better generalize across a wide range of tasks, enhance their adaptability to new environments, and improve their ability to autonomously learn and evolve.

    What are your thoughts on the humanoid form factor?

    The humanoid form factor is a really complex engineering and design challenge. The desire to mimic human movement and interaction creates a high bar for actuators and control systems. It also presents unique challenges in terms of balance and coordination. Despite these challenges, the humanoid form has the potential to be extremely versatile and intuitively usable in a variety of social and practical contexts, mirroring the natural human interface and interaction. But we probably will see other platforms succeed before these.

    Following manufacturing and warehouses, what is the next major category for robotics?

    Beyond manufacturing and warehousing, the agricultural sector presents a huge opportunity for robotics to tackle challenges of labor shortage, efficiency, and sustainability. Transportation and last-mile delivery are other arenas where robotics can drive efficiency, reduce costs, and improve service levels. These domains will likely see accelerated adoption of robotic solutions as the technologies mature and as regulatory frameworks evolve to support wider deployment.

    How far out are true general-purpose robots?

    The advent of true general-purpose robots, capable of performing a wide range of tasks across different environments, may still be a distant reality. It requires breakthroughs in multiple fields including AI, machine learning, materials science, and control systems. The journey toward achieving such versatility is a step-by-step process where robots will gradually evolve from being task-specific to being more multi-functional and eventually general purpose.

    Will home robots (beyond vacuums) take off in the next decade?

    The next decade might witness the emergence of home robots in specific niches, such as eldercare or home security. However, the vision of having a general-purpose domestic robot that can autonomously perform a variety of household tasks is likely further off. The challenges are not just technological but also include aspects like affordability, user acceptance, and ethical considerations.

    What important robotics story/trend isn’t getting enough coverage?

    Despite significant advancements in certain niche areas and successful robotic implementations in specific industries, these stories often get overshadowed by the allure of more futuristic or general-purpose robotic narratives. The incremental but impactful successes in sectors like agriculture, healthcare, or specialized industrial applications deserve more spotlight as they represent the real, tangible progress in the field of robotics.


    Brian Heater


  • ChatGPT-maker OpenAI hosts its first big tech showcase as the AI startup faces growing competition


    SAN FRANCISCO — Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday as it launched a new line of chatbot products that can be customized to a variety of tasks.

    “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI’s inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

    At the event, held in a cavernous former Honda dealership in OpenAI’s hometown of San Francisco, the company unveiled GPT-4 Turbo, a new version of its flagship model that is “more capable” and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions, which couldn’t answer questions about anything that happened after 2021.

    It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what’s in images to people who are blind or have low vision.

    Altman said ChatGPT has more than 100 million weekly active users and 2 million developers, spread “entirely by word of mouth.”

    Altman also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

    The path to OpenAI’s debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that’s sparked excitement, fear and a push for international safeguards to guide AI’s rapid advancement.

    The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

    Using the Defense Production Act, the order requires AI developers, likely including OpenAI, its financial backer Microsoft and competitors such as Google and Meta, to share information with the government about AI systems being built with such “high levels of performance” that they could pose serious safety risks.

    The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

    A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

    Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said, amid cheers from the audience, “we love you guys.”

    In his comments, Nadella emphasized Microsoft’s role as a business partner, using its data centers to give OpenAI the computing power it needs to build more advanced models.

    “I think we have the best partnership in tech. I’m excited for us to build AGI together,” Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

    While some commercial chatbots, including Microsoft’s Bing, are now built atop OpenAI’s technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

    ChatGPT’s newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

    Grok is only available to a limited set of early users but promises to answer “spicy questions” that other chatbots decline due to safeguards meant to prevent offensive responses.

    Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% to the global gross domestic product — the economy’s total output of goods and services.

    Altman described a future of AI agents that could help people with various tasks at work or home.

    “We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf,” he said.

    ——

    O’Brien reported from Providence, Rhode Island.

    ——-

    The Associated Press and OpenAI have a licensing agreement that allows for part of AP’s text archives to be used to train the tech company’s large language model. AP receives an undisclosed fee for use of its content.


  • ChatGPT-maker OpenAI hosts first big tech showcase as it faces growing competition


    SAN FRANCISCO — Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday as it launched a new line of chatbot products that can be customized to a variety of tasks.

    “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI’s inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

    At the event, held in a cavernous former Honda dealership in OpenAI’s hometown of San Francisco, the company unveiled GPT-4 Turbo, a new version of its flagship model that is “more capable” and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions, which couldn’t answer questions about anything that happened after 2021.

    It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what’s in images to people who are blind or have low vision.

    Altman said ChatGPT has more than 100 million weekly active users and 2 million developers, spread “entirely by word of mouth.”

    Altman also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

    Alyssa Hwang, a computer science researcher at the University of Pennsylvania who got an early glimpse at the GPT vision tool, said it was “so good at describing a whole lot of different kinds of images, no matter how complicated they were,” but also needed some improvements.

    For instance, in trying to test its limits, Hwang appended an image of steak with a caption about chicken noodle soup, confusing the chatbot into describing the image as having something to do with chicken noodle soup.

    “That could lead to some adversarial attacks,” Hwang said. “Imagine if you put some offensive text or something like that in an image, you’ll end up getting something you don’t want.”

    That’s partly why OpenAI has given researchers such as Hwang early access to help discover flaws in its newest tools before their wide release. Altman on Monday described the company’s approach as “gradual iterative deployment” that leaves time to address safety risks.

    The path to OpenAI’s debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that’s sparked excitement, fear and a push for international safeguards to guide AI’s rapid advancement.

    The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

    Using the Defense Production Act, the order requires AI developers — likely to include OpenAI, its financial backer Microsoft and competitors such as Google and Meta — to share information with the government about AI systems being built with such “high levels of performance” that they could pose serious safety risks.

    The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

    A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

    Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said, amid cheers from the audience, “We love you guys.”

    In his comments, Nadella emphasized Microsoft’s role as a business partner using its data centers to give OpenAI the computing power it needs to build more advanced models.

    “I think we have the best partnership in tech. I’m excited for us to build AGI together,” Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

    While some commercial chatbots, including Microsoft’s Bing, are now built atop OpenAI’s technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

    ChatGPT’s newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

    Grok is only available to a limited set of early users but promises to answer “spicy questions” that other chatbots decline due to safeguards meant to prevent offensive responses.

    Asked by a reporter to comment on the timing of Grok’s release, Altman said “Elon’s gonna Elon.”

    Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% in global gross domestic product — the economy’s total output of goods and services.

    Altman described a future of AI agents that could help people with various tasks at work or home.

    “We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf,” he said.

    ——

    O’Brien reported from Providence, Rhode Island.

    ——

    The Associated Press and OpenAI have a licensing agreement that allows for part of AP’s text archives to be used to train the tech company’s large language model. AP receives an undisclosed fee for use of its content.


  • Musk says X subscribers will get early access to xAI’s chatbot, Grok | TechCrunch


    Elon Musk’s AI startup, xAI, is creating its own version of ChatGPT.

    That appears to be the case, at least, from Musk’s tweets on X late Friday evening teasing the AI model xAI has been quietly developing. Called Grok — a name xAI trademarked recently — the model answers questions conversationally, possibly drawing on a knowledge base similar to that used to train ChatGPT and other comparable text-generating models (e.g. Meta’s Llama 2).

    Grok leverages “real-time access” to info on X, Musk said. And, like ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information about specific topics.

    Well, most topics.

    Musk implied Grok will refuse to answer certain queries of a more sensitive nature, like “Tell me how to make cocaine, step by step.” Judging by a screenshot, the model answers that particular question a bit more wryly than ChatGPT; it’s not clear if it’s a canned answer or if the system is, in fact — as Musk asserts in a tweet — “designed to have a little more humor in its responses.”

    Early Friday, Musk said that xAI would release its first AI model — presumably Grok — to a “select group” on Saturday, November 4. But in a follow-up tweet Friday night, Musk said all subscribers to X’s recently launched Premium Plus plan, which costs $16 per month for ad-free access to X, will get access to Grok “once it’s out of early beta.”

    Little is known about Grok so far — or xAI’s broader research projects, for that matter.

    In September, Oracle co-founder Larry Ellison, a self-described close friend of Musk, said that xAI had signed a contract to train its AI models on Oracle’s cloud. But xAI itself hasn’t revealed anything about those AI models’ inner workings — or, indeed, what sorts of tasks they can accomplish.

    Musk announced the launch of xAI in July with the ambitious goal of building AI to “understand the true nature of the universe.” The company, led by Musk and veterans of DeepMind, OpenAI, Google Research, Microsoft Research, Tesla and the University of Toronto, is advised by Dan Hendrycks, the director of the Center for AI Safety, an AI research nonprofit, and collaborates with X and other companies in Musk’s portfolio, including Tesla.

    In an interview with Tucker Carlson in April, Musk said that he wanted to build what he referred to as a “maximum-truth-seeking AI.” Is Grok this AI? Perhaps — or it’s a step toward something even bigger.

    “In some important respects, it (xAI’s new model) is the best that currently exists,” Musk said in a tweet Friday afternoon.

    Musk’s AI ambitions have grown since the billionaire’s split with ChatGPT developer OpenAI co-founders Sam Altman and Ilya Sutskever several years ago. As OpenAI’s focus shifted from open source research to primarily commercial projects, Musk grew disillusioned — and competitive — with the company on whose board he sat. Musk resigned from the OpenAI board in 2018, more recently cutting off the company’s access to X data after arguing that OpenAI wasn’t paying enough for the privilege.


    Kyle Wiggers


  • Harris, Sunak to discuss cutting-edge AI risks at UK summit


    BLETCHLEY PARK, England — British Prime Minister Rishi Sunak said Thursday that achievements at the first international AI Safety Summit would “tip the balance in favor of humanity” in the race to contain the risks from rapid advances in cutting-edge artificial intelligence.

    Speaking after two days of talks at Bletchley Park, a former codebreaking spy base near London, Sunak said agreements struck at the meeting of politicians, researchers and business leaders “show that we have both the political will and the capability to control this technology, and secure its benefits for the long term.”

    Sunak organized the summit as a forum for officials, experts and the tech industry to better understand cutting-edge, “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.

    He hailed the gathering’s achievements, including a “Bletchley Declaration” committing nations to tackle the biggest threats from artificial intelligence, a deal to vet tech firms’ AI models before their release, and an agreement to call together a global expert panel on AI, inspired by the United Nations’ climate change panel.

    Some argue that governments must go further and faster on oversight. Britain has no plans for specific legislation to regulate AI, unlike the U.S. and the European Union.

    Vice President Kamala Harris attended the summit, stressing steps the Biden administration has taken to hold tech firms to account. She said Thursday that the United States’ “bold action” should be “inspiring and instructive to other nations.”

    United Nations Secretary General Antonio Guterres urged a coordinated global effort, comparing risks from AI to the Nazi threat that Britain’s wartime codebreakers worked to combat.

    “Bletchley Park played a vital part in the computing breakthroughs that helped to defeat Nazism,” he said. “The threat posed by AI is more insidious, but could be just as dangerous.”

    The U.N. chief, like many others, warned about the need to act swiftly to keep pace with AI’s breathtaking advances. General purpose AI chatbots like ChatGPT released over the past year stirred both amazement and fear with their ability to generate text, audio and images that closely resembled human work.

    “The speed and reach of today’s AI technology are unprecedented,” Guterres said. “The paradox is that in the future, it will never move as slowly as today. The gap between AI and its governance is wide and growing.”

    Sunak hailed the summit as a success, despite its arguably modest achievements. He managed to get 28 nations — including the U.S. and China — to sign up to working toward “shared agreement and responsibility” about AI risks, and to hold further meetings in South Korea and France over the next year.

    China did not attend the second day, which focused on meetings among what the U.K. termed a small group of countries “with shared values.” Sunak held a roundtable with politicians from the EU, the U.N., Italy, Germany, France and Australia.

    Announcing the expert panel on Thursday, Sunak said pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers” of AI, had agreed to chair production of its first report on the state of AI science.

    Sunak said likeminded governments and AI companies also had reached a “landmark agreement” to work together on testing the safety of AI models before they’re released to the public. Leading AI companies at the meeting including OpenAI, Google’s DeepMind, Anthropic and Inflection AI have agreed to “deepen access” to their frontier AI models, he said.

    Binding regulation for AI was not among the summit’s goals. Sunak said the U.K.’s approach should not be to rush into regulation but to fully understand AI first.

    Harris emphasized the U.S. administration’s more hands-on approach in a speech at the U.S. embassy on Wednesday, saying the world needs to act right away to address “the full spectrum” of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

    She announced a new U.S. AI safety institute to draw up standards for testing AI models for public use. She said it would collaborate with a similar U.K. institute announced by Sunak days earlier.

    One of the Biden administration’s main concerns is that advances in AI are widening inequality within societies and between countries. As a step towards addressing that, Britain’s Foreign Secretary James Cleverly announced a $100 million fund, supported by the U.K., the U.S. and others, to help ensure African countries get a share of AI’s benefits – and that 46 African languages are fed into its models.

    Cleverly told reporters that it’s crucial there is a “diversity of voice” informing AI.

    “If it was just Euro-Atlantic and China, we would miss stuff, potentially huge amounts of stuff,” he said.

    Sunak capped the summit with a cozy onstage chat with Tesla CEO Elon Musk at a business reception in London’s grand Lancaster House. Musk is among tech executives who have warned that AI could pose a risk to humanity’s future.

    “Here we are for the first time, really in human history, with something that is going to be far more intelligent than us,” Musk said at the summit. “It’s not clear to me if we can control such a thing.”

    The conversation with Sunak — streamed after it happened on the Musk-owned social network X — ranged over topics from whether AI would remove the need for work to the need to have an off-switch for humanoid robots that could turn on their makers.

    Musk likened AI to “a magic genie” that could grant all wishes, but noted that those fairytales rarely end well.

    “One of the future challenges is how do you find meaning in life?” he said.

    The pair did not take questions from journalists.

    Sunak said earlier that it was important not to be “alarmist” about the technology, which could bring huge benefits.

    “But there is a case to believe that it may pose a risk on a scale like pandemics and nuclear war, and that’s why, as leaders, we have a responsibility to act to take the steps to protect people, and that’s exactly what we’re doing,” he said.
