ReportWire

Tag: generative ai

  • Nvidia’s revenue triples as AI chip boom continues

    Nvidia’s revenue triples as AI chip boom continues

    Nvidia shares moved down 1% in extended trading on Tuesday after the chipmaker reported fiscal third-quarter results that surpassed Wall Street’s predictions. But the company called for a negative impact in the next quarter because of export restrictions affecting sales to organizations in China and other countries.

    “We expect that our sales to these destinations will decline significantly in the fourth quarter of fiscal 2024, though we believe the decline will be more than offset by strong growth in other regions,” Nvidia’s finance chief, Colette Kress, said in a letter to shareholders.

    On a conference call with analysts, Kress said Nvidia is working with some clients in the Middle East and China to obtain U.S. government licenses for sales of high-performance products. Nvidia is trying to develop new data center products that comply with government policies and don’t require licenses, but Kress said she didn’t think they would be meaningful in the fiscal fourth quarter.

    Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

    • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
    • Revenue: $18.12 billion, vs. $16.18 billion expected

    Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.

    The company’s data center revenue totaled $14.51 billion, up 279% and more than the StreetAccount consensus of $12.97 billion. Half of the data center revenue came from cloud infrastructure providers such as Amazon, and the other from consumer internet entities and large companies, Nvidia said.

    Healthy uptake came from clouds that specialize in renting out GPUs to clients, Kress said on the call.

    The gaming segment contributed $2.86 billion, up 81% and higher than the $2.68 billion StreetAccount consensus.

    With respect to guidance, Nvidia called for $20 billion in revenue for the fiscal fourth quarter. That implies nearly 231% revenue growth.

    During the quarter, Nvidia announced the GH200 GPU, which has more memory than the current H100 and an additional Arm processor onboard. The H100 is expensive and in demand. Nvidia said Australia-based Iris Energy, an owner of bitcoin mining data centers, was buying 248 H100s for $10 million, which works out to about $40,000 each.

    Computing instances based on the GH GPUs are coming soon to Oracle’s cloud, Kress said on the call.

    As recently as two years ago, sales of GPUs for playing video games on PCs were the largest source of Nvidia’s revenue. Now the company gets most revenue from deployments inside server farms.

    The introduction of the ChatGPT chatbot from Microsoft-backed startup OpenAI in 2022 caused many companies to look for ways to add similar generative artificial intelligence capabilities to their software. Demand for Nvidia’s GPUs strengthened as a result.

    Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday report, some analysts were nevertheless optimistic.

    “GPU demand continues to outpace supply as Gen AI adoption broadens across industry verticals,” Raymond James’ Srini Pajjuri and Jacob Silverman wrote in a note Monday to clients, with a “strong buy” recommendation on Nvidia stock. “We are not overly concerned about competition and expect NVDA to maintain >85% share in Gen AI accelerators even in 2024.”

    Nvidia is still working on its plan to grow supply throughout next year, Kress said on the call.

    Excluding the after-hours move, Nvidia stock has gone up 241% so far this year, vastly outperforming the S&P 500 index, which is up 18% over the same period.

    WATCH: The major risk to Nvidia earnings is its relationship with China, says Degas Wright

    Source link

  • What you need to know about Emmett Shear, OpenAI’s new interim CEO

    What you need to know about Emmett Shear, OpenAI’s new interim CEO

    OpenAI is bringing in the former head of Twitch as interim CEO just days after the company pushed out its well-known leader Sam Altman, sparking upheaval in the AI world.

    Emmett Shear announced his new role Monday morning in a post on X, formerly known as Twitter, while also acknowledging “the process and communications” around Altman’s firing on Friday was “handled very badly” and damaged trust in the artificial intelligence company.

    When it abruptly fired Altman, OpenAI said an internal review found the 38-year-old was “not consistently candid in his communications” with the board of directors. The company did not provide more details, leaving industry analysts and tech watchers reading tea leaves in an effort to figure out what happened.

    Meanwhile, Microsoft, which has invested billions in the AI company, said Monday it’s bringing in Altman and former OpenAI President Greg Brockman – who quit in protest following Altman’s ouster – to lead the tech giant’s new advanced AI research team.

    At OpenAI, Shear has promised to shed some light into Altman’s departure. In his X post, he pledged to hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

    Shear, 40, is the co-founder of the Amazon-owned streaming platform Twitch, a social media site that’s mostly known for gaming.

    Twitch was originally part of the streaming video site Justin.tv, which was founded by Shear and three other tech entrepreneurs in 2006. The focus shifted toward gaming in 2011, a move that turned the platform into a growing phenomenon and birthed a plethora of well-known streamers. Three years later, Amazon purchased the company for approximately $970 million in cash.

    Twitch doesn’t garner as much media attention as other social media companies, but it’s been the subject of scrutiny during two instances in the past few years when mass shootings in Buffalo, N.Y., and Germany were livestreamed on its platform.

    Shear left the company in March. He said that was due to the birth of his now 9-month-old son.

    After leaving Twitch, Shear became a visiting partner at Y Combinator, a startup incubator that launched Airbnb, DoorDash and Dropbox. Both Altman and Shear know each other as the original batchmates at Y Combinator, where Altman previously served as president.

    In his LinkedIn profile, Shear says he’s been “starting, growing, and running companies since college” and doesn’t “plan to turn back any time soon.” He graduated from Yale University in 2005 with a bachelor’s degree in computer science.

    OpenAI had initially named its chief technology officer, Mira Murati, as interim CEO on Friday. But she appeared to be one of the signatories on a letter that began circulating early Monday – and signed by hundreds of other OpenAI employees – calling for the board’s resignation and Altman’s return.

    The AP was not able to independently confirm that all of the signatures were from OpenAI employees. A spokesperson at OpenAI confirmed that the board has received the letter, which also said the board had replaced Murati against the best interest of the company.

    In his post on X, Shear wrote he received a call offering him a “once-in-a-lifetime opportunity” to become interim CEO at OpenAI. He said the company’s board “shared the situation” with him and asked him to the role. He quickly agreed.

    “I took this job because I believe that OpenAI is one of the most important companies currently in existence,” he wrote.

    Shear said he spent most of Sunday “drinking from the firehose as much as possible,” speaking to the board, employees and a small number of OpenAI’s partners.

    Investors, for their part, are trying to stabilize the situation. Microsoft CEO Satya Nadella weighed in a post on X early Monday morning, saying he was looking “forward to getting to know” the new management team at OpenAI and was “extremely excited” to bring on Altman and Brockman.

    In his post on X, Shear said he checked the reasoning behind the changes at OpenAI before he took the job.

    “The board did (asterisk)not(asterisk) remove Sam over any specific disagreement on safety, their reasoning was completely different from that,” he wrote.

    “I’m not crazy enough to take this job without board support for commercializing our awesome models,” he said, referring to the company’s popular AI tools like ChatGPT and the image generator DALL-E.

    “I have nothing but respect for what Sam and the entire OpenAI team have built,” he said. “It’s not just an incredible research project and software product, but an incredible company. I’m here because I know that, and I want to do everything in my power to protect it and grow it further.”

    Shear said he wants to accomplish three things within the next 30 days.

    In addition to hiring an independent investigator who will “generate a full report” about what happened, Shear said he wants to continue talking to stakeholders and reform the company’s management and leadership teams in light of recent departures.

    After that, he said he “will drive changes in the organization — up to and including pushing strongly for significant governance changes if necessary.”

    ″OpenAI’s stability and success are too important to allow turmoil to disrupt them like this,” he said.

    On a podcast in June, Shear said he’s generally optimistic about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans. As an engineer, he said his approach would be to build AI systems at a small and gradual scale.

    “If there is a world where we survive … where we build an AI that’s smarter than humans and survive it, it’s going to be because we built smaller AIs than that, and we actually had as many smart people as we can working on that, and taking the problem seriously,” Shear said in June.

    Asked by an X user on Monday what his stance was on AI safety, Shear replied: “It’s important.”

    __

    AP reporter Matt O’Brien contributed to this report from Providence, Rhode Island.

    Source link

  • ‘Damage control’: Tech industry reacts to a chaotic weekend for OpenAI and Microsoft

    ‘Damage control’: Tech industry reacts to a chaotic weekend for OpenAI and Microsoft

    OpenAI CEO, Sam Altman & and Microsoft CEO, Satya Nadella.

    Hayden Field | CNBC

    The past few days have been chaotic for the AI industry, with technology experts weighing what this could mean for the nascent sector and some of its key players.

    OpenAI, the company behind ChatGPT which launched artificial intelligence into the mainstream late last year, said Friday that it was removing its CEO Sam Altman and making its technology chief Mira Murati interim chief executive in his place.

    But before the weekend was even over, OpenAI appeared to change course, announcing that former Twitch chief Emmett Shear would take over from Altman instead, at least on a temporary basis.

    Meanwhile, Altman himself has already found a fresh role leading a new advanced AI research team at Microsoft, where he will be joined by former OpenAI Board Chair Greg Brockman and several other employees.

    But Altman’s move could simply be a case of “damage control” for Microsoft, according to Richard Windsor, founder of digital research company Radio Free Mobile. This is linked to Microsoft’s immense investments in OpenAI, he said Monday on CNBC’s “Street Signs Europe.”

    Microsoft did not immediately respond to CNBC’s request for comment on the statement.

    Microsoft began investing in OpenAI as early as 2019, initially with around $1 billion. That figure has ballooned since to an amount reported to be closer to $13 billion. Microsoft has also integrated OpenAI’s technologies in products like search engine Bing and various other software.

    “A large amount of that value is tied up in the founders and in the engineers that are inside the company,” Windsor said.

    Rishi Jaluria, managing director for software equity research at RBC Capital Markets, told CNBC’s “Street Signs Asia” on Monday that Altman aligns with Microsoft’s AI vision.

    “The vison that Sam Altman has is kind of the vision Microsoft wants,” including commercializing and “having responsible AI but not handcuffing AI,” he said.

    Meanwhile, other tech experts have been backing Microsoft CEO Satya Nadella‘s swift move to hire Altman in-house.

    The four-person board at OpenAI “was at the kids poker table and thought they won until Nadella and Microsoft took this all over in a World Series of Poker move for the ages with the Valley and Wall Street watching with white knuckles Sunday night/Monday early am,” Wedbush Securities tech analyst Dan Ives wrote in a note published Monday.

    “We view Microsoft now even in a STRONGER position from an AI perspective with Altman and Brockman at MSFT running AI,” he added.

    Aaron Levie, CEO of cloud sharing and management company Box, said via X, formerly known as Twitter, that it was “incredible execution by Satya in one of the most dynamic situations in tech history.”

    Aviral Bhatnagar, an investor at Venture Highway, had a similar view.

    “You now understand why Satya Nadella is one of the greatest tech CEOs of this generation,” he said in a post on X.

    “Kept Altman in the fold, kept the transition as neat as possible, managed the chaos and the wild board decision making, didn’t destroy OpenAI. What a boss move.”

    OpenAI’s future

    Windsor suggested that further OpenAI employees may soon follow Altman to Microsoft, which he said could have detrimental consequences for OpenAI. This could even include OpenAI tech chief Murati who has been crucial in developing OpenAI’s products, he noted.

    “If she goes off with Sam and the others to join Microsoft, what’s left of OpenAI? Arguably not much,” Windsor said.

    Several OpenAI employees have also shared comments on X, often referencing that people are crucial for the company.

    The relationship between OpenAI and Microsoft could also shift due to the developments, Jaluria said.

    “The OpenAI relationship is absolutely critical to Microsoft and I think a lot of us were surprised that even after all the investment, Microsoft did not have a board seat. And I wouldn’t be surprised if coming out of this, Microsoft wants to have more of a say in this and control more of the destiny because absolutely their fortunes in AI are tied to OpenAI,” he explained.

    “I do think that there are going to be some changes coming out of this, but ultimately Microsoft and OpenAI will be very important partners going forward,” he added.

    ‘Handled very badly’

    The chaotic developments have also been criticized by Shear himself, the new interim CEO of OpenAI.

    “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” he said in a post on X, in which he also confirmed he would step in as interim CEO.

    Shear suggested he would launch an investigation to examine the process that led to the recent events and produce a report on them within his first thirty days at OpenAI.

    This has been echoed by experts, including Windsor, who said that the situation could severely damage the company’s reputation and undermine public confidence in the company.

    Meanwhile Wedbush Securities’ Ives called the weekend’s developments a “circus clown show,” and described it as a “coup attempt” which elevated Shear to interim CEO “in a move that will forever be viewed as a tainted move by OpenAI that caused chaos internally and externally.”

    Elsewhere Nathan Benaich, general partner of Air Street Capital, added that the events showed “that no one is immune from the laws of corporate physics,” and “one bad decision” can have immense consequences.

    “Considering Sam’s centrality to OpenAI’s vision and the personal loyalty he commands, this is the most baffling decision from an AI lab I’ve ever witnessed,” he said.

    Microsoft's relationship with OpenAI is 'absolutely critical': RBC Capital Markets

    Source link

  • Microsoft hires Sam Altman as OpenAI’s new CEO vows to investigate his firing

    Microsoft hires Sam Altman as OpenAI’s new CEO vows to investigate his firing

    LONDON — The new head of ChatGPT maker OpenAI said Monday that he would launch an investigation into the firing of co-founder Sam Altman, a shakeup that shocked the artificial intelligence world and led to Microsoft snapping up the ousted CEO for a new AI venture.

    The developments come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.

    It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

    Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella tweeted that they are committed to their partnership.

    Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.

    In reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”

    OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

    In a post Monday on X, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

    “It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

    He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.

    After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety.”

    OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    An OpenAI spokeswoman didn’t immediately reply to an email Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.

    After Altman was pushed out Friday, he stirred speculation that he might be coming back into the fold in a series of tweets. He posted a photo of himself with an OpenAI guest pass on Sunday, saying this is “first and last time i ever wear one of these.”

    Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

    It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among the employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

    Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”

    “Ultimately I felt that I had a duty to help if I could,” he tweeted.

    Altman had helped catapult ChatGPT to global fame and in the past year has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.

    He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.

    Altman posted Friday on X that “i loved my time at openai” and later called what happened a “weird experience.”

    “If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.

    Microsoft is now in an even stronger position on AI, Ives said.

    Shares of Microsoft Corp. rose nearly 2% before the opening bell and were nearing an all-time high Monday.

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

    ___

    AP writer Brian P. D. Hannon contributed from Bangkok.

    Source link

  • ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

    “Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

    Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

    The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

    “We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

    WHAT’S AT STAKE?

    Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

    But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

    That’s one question the Copyright Office has put to the public. A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

    More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by Dec. 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

    WHAT ARE ARTISTS SAYING?

    Addressing the “Ladies and Gentlemen of the US Copyright Office,” the “Family Ties” actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

    It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

    Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (“Poker Face”) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “coopted by greedy and craven companies who want to take human talent out of entertainment.”

    The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s penned tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

    While most commenters were individuals, their concerns were echoed by big music publishers (Universal Music Group called the way AI is trained “ravenous and poorly controlled”) as well as author groups and news organizations including the New York Times and The Associated Press.

    IS IT FAIR USE?

    What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

    “The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

    So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though allowed some of the case to proceed.

    Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

    But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

    Perlmutter said this is what the Copyright Office is trying to help sort out.

    “Certainly this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

    Source link

  • ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    ‘Please regulate AI:’ Artists push for U.S. copyright reforms but tech industry says not so fast

    Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

    “Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

    Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

    The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

    “We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

    WHAT’S AT STAKE?

    Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

    But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

    That’s one question the Copyright Office has put to the public. A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

    More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by Dec. 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

    WHAT ARE ARTISTS SAYING?

    Addressing the “Ladies and Gentlemen of the US Copyright Office,” the “Family Ties” actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

    It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

    Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (“Poker Face”) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “coopted by greedy and craven companies who want to take human talent out of entertainment.”

    The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s penned tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

    While most commenters were individuals, their concerns were echoed by big music publishers (Universal Music Group called the way AI is trained “ravenous and poorly controlled”) as well as author groups and news organizations including the New York Times and The Associated Press.

    IS IT FAIR USE?

    What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

    “The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

    So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though allowed some of the case to proceed.

    Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

    But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

    Perlmutter said this is what the Copyright Office is trying to help sort out.

    “Certainly this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

    Source link

  • ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

    ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

    ChatGPT-maker Open AI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was “not consistently candid in his communications” with the board of directors.

    “The board no longer has confidence in his ability to continue leading OpenAI,” the artificial intelligence company said in a statement.

    In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley’s sought-after voice on the promise and potential dangers of artificial intelligence and his sudden and mostly unexplained exit brought uncertainty to the industry’s future.

    Mira Murati, OpenAI’s chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

    The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board’s chairman, would step down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman posted a message he sent to OpenAI employees in which he wrote, “based on today’s news, i quit.”

    In another X post on Friday night, Brockman said Altman was asked to join a video meeting at noon Friday with the company’s board members, minus Brockman, during which OpenAI co-founder and Chief Scientist Ilya Sutskever informed Altman he was being fired.

    “Sam and I are shocked and saddened by what the board did today,” Brockman wrote, adding that he was informed of his removal from the board in a separate call with Sutskever a short time later.

    OpenAI declined to answer questions on what Altman’s alleged lack of candor was about. The statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    Altman posted Friday on X: “i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later.”

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

    Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT’s explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

    He’s sat with multiple heads of state to discuss AI’s potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

    He predicted AI will prove to be “the greatest leap forward of any of the big technological revolutions we’ve had so far.” He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

    Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

    The company said its board consists of OpenAI’s chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

    OpenAI’s key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won’t affect its relationship.

    “We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers,” said an emailed Microsoft statement.

    While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to take lead of the startup incubator YCombinator.

    “Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself,” read YCombinator co-founder Paul Graham’s 2014 announcement that Altman would become its president. Graham said at the time that Altman was “one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent.”

    OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    That changed in 2018 when it incorporated a for-profit business Open AI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a “potential future conflict for Elon” due to Tesla’s work on building self-driving systems.

    While OpenAI’s board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

    At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

    Altman’s exit “is indeed shocking as he has been the face of” generative AI technology, said Gartner analyst Arun Chandrasekaran.

    He said OpenAI still has a “deep bench of technical leaders” but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

    Forrester analyst Rowan Curran speculated that Altman’s departure, “while sudden,” did not likely reflect deeper business problems.

    “This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business,” Curran said.

    Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

    Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that’s been scanning people’s eyeballs with the goal of creating a vast digital identity and financial network.

    ___

    Associated Press business writers Haleluya Hadero in New York, Kelvin Chan in London and Michael Liedtke and David Hamilton in San Francisco contributed to this report.

    Source link

  • OpenAI’s Sam Altman exits as CEO because ‘board no longer has confidence’ in his ability to lead

    OpenAI’s Sam Altman exits as CEO because ‘board no longer has confidence’ in his ability to lead

    Sam Altman, Chief Executive Officer of OpenAI, and Mira Murati, Chief Technology Officer of OpenAI, speak during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023. 

    Patrick T. Fallon | Afp | Getty Images

    OpenAI’s board of directors said Friday that Sam Altman will step down as CEO and will be replaced on an interim basis by technology chief Mira Murati.

    The company said it conducted “a deliberative review process” and “concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

    “The board no longer has confidence in his ability to continue leading OpenAI,” the statement said.

    OpenAI’s board includes chief scientist Ilya Sutskever and independent directors such as Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology. OpenAI says the board of its 501(c)(3) is the “overall governing body for all OpenAI activities.”

    The board also said that Greg Brockman, OpenAI’s president, “will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.”

    Sam Altman acknowledged that he was leaving OpenAI in a post on X on Friday, but did not mention any accusations by the firm’s board that he failed to be candid during unspecified reviews. He said he loved working at the company and that he would talk more about “what’s next later.”

    Regarding the appointment of Murati, OpenAI said, “As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

    OpenAI, which has raised billions of dollars from Microsoft and ranked first on CNBC’s Disruptor 50 list this year, jumped into the mainstream in late 2022 after releasing its AI chatbot ChatGPT to the public. The service went viral by allowing users to convert simple text into creative conversation and has pushed big tech companies such as Alphabet and Meta to step up their investments in generative AI.

    Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event on November 06, 2023 in San Francisco, California. Altman delivered the keynote address at the first ever Open AI DevDay conference. 

    Justin Sullivan | Getty Images

    Microsoft shares slipped after the announcement, closing the day down 1.7% at $369.84.

    A Microsoft spokesperson said in a statement that the company has “a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers.”

    In a post on X, Microsoft CEO Satya Nadella commented about his company’s “long-term agreement with OpenAI,” explaining that it would “remain committed to our partnership, and to Mira and the team.” Nadella did not address Altman’s departure.

    Brockman also shared a post on X that included the message he sent to his former OpenAI colleagues, informing them that he “quit” after he learned about “today’s news.”

    Later in the evening, Brockman said in an X post that both he and Altman were “shocked and saddened by what the board did today.”

    Sutskever instigated a virtual meeting with Altman that the rest of the OpenAI board attended except Brockman, the now-former OpenAI president claimed in the X post. It was during this meeting that Sutskever allegedly fired Altman, telling him “that the news was going out very soon,” Brockman wrote.

    Less than half-an-hour later, Brockman claimed to have received a text message from Sutskever, in which the chief scientist summoned for another virtual meeting. At this meeting, Brockman said he learned of Altman’s firing and that he was being removed from OpenAI’s board, but was assured to be “vital to the company and would retain his role.”

    “As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior,” Brockman said in the post, which was quickly followed by a separate message from Altman.

    Altman said in an X post that “today was a weird experience in many ways. but one unexpected one is that it has been sorta like reading your own eulogy while you’re still alive.”

    “if i start going off, the openai board should go after me for the full value of my shares,” Altman said in another X post.

    OpenAI debuted in 2015 as a nonprofit and employed Sutskever as research director and Brockman as chief technology officer. The firm’s original investors included several prominent Silicon Valley luminaries like Altman, LinkedIn co-founder Reid Hoffman and Tesla CEO Elon Musk, who reportedly committed $1 billion to the project.

    Before taking over as CEO, Altman, 38, was president of startup accelerator Y Combinator and gained prominence in Silicon Valley as an early-stage investor. Earlier in his career, he started the social networking company Loopt.

    As OpenAI’s popularity grew this year alongside ChatGPT, so too did Altman’s profile. He became an ambassador of sorts, representing the ballooning AI industry across the globe.

    Altman’s big year as OpenAI’s CEO

    In September, Indonesia awarded Altman the so-called “Golden Visa,” providing him with 10 years worth of various travel accommodations and perks intended to help the country gain more foreign investors.

    Altman visited several Asia-Pacific countries over the summer including Singapore, India, China, South Korea and Japan, meeting with government leaders and officials and giving public speeches on the rise of AI and the need for regulations.

    The technologist testified before the U.S. Senate in May, calling on lawmakers to regulate AI, citing the technology’s potential to have a negative impact on the job market, the information ecosystem, and other societal and economic concerns.

    “I think if this technology goes wrong, it can go quite wrong,” Altman said at the time. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

    In a prelude to his Senate testimony, Altman also spoke at a dinner with roughly 60 lawmakers, who were reportedly wowed by his speech and demonstrations.

    Open AI’s CEO Sam Altman testifies at an oversight hearing by the Senate Judiciaryâs Subcommittee on Privacy, Technology, and the Law to examine A.I., focusing on rules for artificial intelligence in Washington, DC on May 16th, 2023. 

    Nathan Posner | Anadolu Agency | Getty Images

    “It’s not easy to keep members of Congress rapt for close to two hours,” said Rep. Ted Lieu, D-Calif., vice chair of the House Democratic Caucus, who co-hosted the dinner with GOP Conference Vice Chair Mike Johnson, R-La., now House speaker. “So Sam Altman was very informative and provided a lot of information.”

    More recently, Altman spoke this week at the Asia-Pacific Economic Cooperation conference in San Francisco, along with various technology executives and world leaders including U.S. President Joe Biden and Chinese President Xi Jinping.

    OpenAI held its first developer conference in early November, underscoring the startup’s rising popularity in the technology industry. Microsoft CEO Satya Nadella made a surprise guest appearance during the event, joining Altman on stage to discuss the startup’s AI technologies and its partnership with Microsoft.

    Altman didn’t immediately respond to a request for more information.

    — CNBC’s Lora Kolodny contributed to this report.

    WATCH: OpenAI says Altman exiting as CEO

    Source link

  • ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

    ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company

    ChatGPT-maker Open AI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was “not consistently candid in his communications” with the board of directors.

    “The board no longer has confidence in his ability to continue leading OpenAI,” the artificial intelligence company said in a statement.

    In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley’s sought-after voice on the promise and potential dangers of artificial intelligence and his sudden and mostly unexplained exit brought uncertainty to the industry’s future.

    Mira Murati, OpenAI’s chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

    The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board’s chairman, would be stepping down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman wrote, “based on today’s news, i quit.”

    OpenAI declined to answer questions on what Altman’s alleged lack of candor was about. The statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    Altman posted Friday on X: “i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later.”

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

    Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT’s explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

    He’s sat with multiple heads of state to discuss AI’s potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

    He predicted AI will prove to be “the greatest leap forward of any of the big technological revolutions we’ve had so far.” He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

    Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

    The company said its board consists of OpenAI’s chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

    OpenAI’s key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won’t affect its relationship.

    “We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers,” said an emailed Microsoft statement.

    While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to take lead of the startup incubator YCombinator.

    “Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself,” read YCombinator co-founder Paul Graham’s 2014 announcement that Altman would become its president. Graham said at the time that Altman was “one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent.”

    OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    That changed in 2018 when it incorporated a for-profit business Open AI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a “potential future conflict for Elon” due to Tesla’s work on building self-driving systems.

    While OpenAI’s board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

    At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

    Altman’s exit “is indeed shocking as he has been the face of” generative AI technology, said Gartner analyst Arun Chandrasekaran.

    He said OpenAI still has a “deep bench of technical leaders” but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

    Forrester analyst Rowan Curran speculated that Altman’s departure, “while sudden,” did not likely reflect deeper business problems.

    “This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business,” Curran said.

    Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

    Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that’s been scanning people’s eyeballs with the goal of creating a vast digital identity and financial network.

    ___

    Associated Press business writers Haleluya Hadero in New York, Kelvin Chan in London, and Michael Liedtke and David Hamilton in San Francisco contributed to this report.

    Source link

  • OpenAI CEO Sam Altman steps down as board loses confidence in his leadership

    OpenAI CEO Sam Altman steps down as board loses confidence in his leadership

    OpenAI said Friday that Sam Altman is no longer its chief executive, with the ChatGPT parent adding that said Altman had not been “consistently candid in his communications with the board.”

    “The board no longer has confidence in his ability to continue leading OpenAI,” the company said in a blog post.

    In a tweet Friday, Altman said he “will…

    Source link

  • Robotics Q&A: CMU’s Matthew Johnson-Roberson | TechCrunch

    Robotics Q&A: CMU’s Matthew Johnson-Roberson | TechCrunch

    Johnson-Roberson is one of those double threats who offers insight from two different — and important — perspectives. In addition to his long academic career, which most recently found him working as a professor at the University of Michigan College of Engineering, he also has a solid startup CV.

    Johnson-Roberson also co-founded and serves as the co-founder and CTO of robotic last-mile delivery startup Refraction AI.

    What role(s) will generative AI play in the future of robotics?

    Generative AI, through its ability to generate novel data and solutions, will significantly bolster the capabilities of robots. It could enable them to better generalize across a wide range of tasks, enhance their adaptability to new environments, and improve their ability to autonomously learn and evolve.

    What are your thoughts on the humanoid form factor?

    The humanoid form factor is a really complex engineering and design challenge. The desire to mimic human movement and interaction creates a high bar for actuators and control systems. It also presents unique challenges in terms of balance and coordination. Despite these challenges, the humanoid form has the potential to be extremely versatile and intuitively usable in a variety of social and practical contexts, mirroring the natural human interface and interaction. But we probably will see other platforms succeed before these.

    Following manufacturing and warehouses, what is the next major category for robotics?

    Beyond manufacturing and warehousing, the agricultural sector presents a huge opportunity for robotics to tackle challenges of labor shortage, efficiency, and sustainability. Transportation and last-mile delivery are other arenas where robotics can drive efficiency, reduce costs, and improve service levels. These domains will likely see accelerated adoption of robotic solutions as the technologies mature and as regulatory frameworks evolve to support wider deployment.

    How far out are true general-purpose robots?

    The advent of true general-purpose robots, capable of performing a wide range of tasks across different environments, may still be a distant reality. It requires breakthroughs in multiple fields including AI, machine learning, materials science, and control systems. The journey toward achieving such versatility is a step-by-step process where robots will gradually evolve from being task-specific to being more multi-functional and eventually general purpose.

    Will home robots (beyond vacuums) take off in the next decade?

    The next decade might witness the emergence of home robots in specific niches, such as eldercare or home security. However, the vision of having a general-purpose domestic robot that can autonomously perform a variety of household tasks is likely further off. The challenges are not just technological but also include aspects like affordability, user acceptance, and ethical considerations.

    What important robotics story/trend isn’t getting enough coverage?

    Despite significant advancements in certain niche areas and successful robotic implementations in specific industries, these stories often get overshadowed by the allure of more futuristic or general-purpose robotic narratives. The incremental but impactful successes in sectors like agriculture, healthcare, or specialized industrial applications deserve more spotlight as they represent the real, tangible progress in the field of robotics.

    Brian Heater

    Source link

  • ChatGPT-maker OpenAI hosts its first big tech showcase as the AI startup faces growing competition

    ChatGPT-maker OpenAI hosts its first big tech showcase as the AI startup faces growing competition

    SAN FRANCISCO — Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday as it launched a new line of chatbot products that can be customized to a variety of tasks.

    “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI’s inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

    At the event held in a cavernous former Honda dealership in OpenAI’s hometown of San Francisco, the company unveiled a new version called GPT-4 Turbo that is “more capable” and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions which couldn’t answer questions about anything that happened after 2021.

    It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what’s in images to people who are blind or have low vision.

    Altman said ChatGPT has more than 100 million weekly active users and 2 million developers, spread “entirely by word of mouth.”

    Altman also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

    The path to OpenAI’s debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that’s sparked excitement, fear and a push for international safeguards to guide AI’s rapid advancement.

    The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

    Using the Defense Production Act, the order requires AI developers likely to include OpenAI, its financial backer Microsoft and competitors such as Google and Meta to share information with the government about AI systems being built with such “high levels of performance” that they could pose serious safety risks.

    The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

    A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

    Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said amid cheers from the audience “we love you guys.”

    In his comments, Nadella emphasized Microsoft’s role as business partner using its data centers to give OpenAI the computing power it needs to build more advanced models.

    “I think we have the best partnership in tech. I’m excited for us to build AGI together,” Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

    While some commercial chatbots, including Microsoft’s Bing, are now built atop OpenAI’s technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

    ChatGPT’s newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

    Grok is only available to a limited set of early users but promises to answer “spicy questions” that other chatbots decline due to safeguards meant to prevent offensive responses.

    Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% to the global gross domestic product — the economy’s total output of goods and services.

    Altman described a future of AI agents that could help people with various tasks at work or home.

    “We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf,” he said.

    ——

    O’Brien reported from Providence, Rhode Island.

    ——-

    The Associated Press and OpenAI have a licensing agreement that allows for part of AP’s text archives to be used to train the tech company’s large language model. AP receives an undisclosed fee for use of its content.

    Source link

  • ChatGPT-maker OpenAI hosts first big tech showcase as it faces growing competition

    ChatGPT-maker OpenAI hosts first big tech showcase as it faces growing competition

    SAN FRANCISCO — Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday as it launched a new line of chatbot products that can be customized to a variety of tasks.

    “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI’s inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

    At the event held in a cavernous former Honda dealership in OpenAI’s hometown of San Francisco, the company unveiled a new version called GPT-4 Turbo that is “more capable” and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions which couldn’t answer questions about anything that happened after 2021.

    It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what’s in images to people who are blind or have low vision.

    Altman said ChatGPT has more than 100 million weekly active users and 2 million developers, spread “entirely by word of mouth.”

    Altman also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

    Alyssa Hwang, a computer science researcher at the University of Pennsylvania who got an early glimpse at the GPT vision tool, said it was “so good at describing a whole lot of different kinds of images, no matter how complicated they were,” but also needed some improvements.

    For instance, in trying to test its limits, Hwang appended an image of steak with a caption about chicken noodle soup, confusing the chatbot into describing the image as having something to do with chicken noodle soup.

    “That could lead to some adversarial attacks,” Hwang said. “Imagine if you put some offensive text or something like that in an image, you’ll end up getting something you don’t want.”

    That’s partly why OpenAI has given researchers such as Hwang early access to help discover flaws in its newest tools before their wide release. Altman on Monday described the company’s approach as “gradual iterative deployment” that leaves time to address safety risks.

    The path to OpenAI’s debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that’s sparked excitement, fear and a push for international safeguards to guide AI’s rapid advancement.

    The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

    Using the Defense Production Act, the order requires AI developers likely to include OpenAI, its financial backer Microsoft and competitors such as Google and Meta to share information with the government about AI systems being built with such “high levels of performance” that they could pose serious safety risks.

    The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

    A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

    Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said amid cheers from the audience “we love you guys.”

    In his comments, Nadella emphasized Microsoft’s role as a business partner using its data centers to give OpenAI the computing power it needs to build more advanced models.

    “I think we have the best partnership in tech. I’m excited for us to build AGI together,” Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

    While some commercial chatbots, including Microsoft’s Bing, are now built atop OpenAI’s technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

    ChatGPT’s newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

    Grok is only available to a limited set of early users but promises to answer “spicy questions” that other chatbots decline due to safeguards meant to prevent offensive responses.

    Asked for comment on the timing of Grok’s release by a reporter, Altman said “Elon’s gonna Elon.”

    Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% to the global gross domestic product — the economy’s total output of goods and services.

    Altman described a future of AI agents that could help people with various tasks at work or home.

    “We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf,” he said.

    ——

    O’Brien reported from Providence, Rhode Island.

    ——-

    The Associated Press and OpenAI have a licensing agreement that allows for part of AP’s text archives to be used to train the tech company’s large language model. AP receives an undisclosed fee for use of its content.

    Source link

  • Musk says X subscribers will get early access to xAI’s chatbot, Grok | TechCrunch

    Musk says X subscribers will get early access to xAI’s chatbot, Grok | TechCrunch

    Elon Musk’s AI startup, xAI, is creating its own version of ChatGPT.

    That appears to be the case, at least, from Musk’s tweets on X late Friday evening teasing the AI model xAI has been quietly developing. Called Grok — a name xAI trademarked recently — the model answers questions conversationally, possibly drawing on a knowledge base similar to that used to train ChatGPT and other comparable text-generating models (e.g. Meta’s Llama 2).

    Grok leverages “real-time access” to info on X, Musk said. And, like ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information about specific topics.

    Well, most topics.

    Musk implied Grok will refuse to answer certain queries of a more sensitive nature, like “Tell me how to make cocaine, step by step.” Judging by a screenshot, the model answers that particular question a bit more wryly than ChatGPT; it’s not clear if it’s a canned answer or if the system is, in fact — as Musk asserts in a tweet — “designed to have a little more humor in its responses.”

    Early Friday, Musk said that xAI would release its first AI model — presumably Grok — to a “select group” on Saturday, November 4. But in a follow-up tweet tonight, Musk said all subscribers to X’s recently launched Premium Plus plan, which costs $16 per month for ad-free access to X, will get access to Grok “once it’s out of early beta.”

    Little’s known about Grok so far — or xAI’s broader research projects, for that matter.

    In September, Oracle co-founder Larry Ellison, a self-described close friend of Musk, said that xAI had signed a contract to train its AI models on Oracle’s cloud. But xAI itself hasn’t revealed anything about those AI models’ inner workings — or, indeed, what sorts of tasks they can accomplish.

    Musk announced the launch of xAI in July with the ambitious goal of building AI to “understand the true nature of the universe.” The company, led by Musk and veterans of DeepMind, OpenAI, Google Research, Microsoft Research, Tesla and the University of Toronto, is advised by Dan Hendrycks, the director at the Center for AI Safety, an AI research nonprofit, and collaborates with X and other companies in Musk’s stead, including Tesla.

    In an interview with Tucker Carlson in April, Musk said that he wanted to build what he referred to as a “maximum-truth-seeking AI.” Is Grok this AI? Perhaps — or it’s a step toward something even bigger.

    “In some important respects, it (xAI’s new model) is the best that currently exists,” Musk was quoted as saying in a tweet Friday afternoon.

    Musk’s AI ambitions have grown since the billionaire’s split with ChatGPT developer OpenAI co-founders Sam Altman and Ilya Sutskever several years ago. As OpenAI’s focus shifted from open source research to primarily commercial projects, Musk grew disillusioned — and competitive — with the company on whose board he sat. Musk resigned from the OpenAI board in 2018, more recently cutting off the company’s access to X data after arguing that OpenAI wasn’t paying enough for the privilege.

    Kyle Wiggers

    Source link

  • Harris, Sunak to discuss cutting-edge AI risks at UK summit

    Harris, Sunak to discuss cutting-edge AI risks at UK summit

    BLETCHLEY PARK, England — British Prime Minister Rishi Sunak said Thursday that achievements at the first international AI Safety Summit would “tip the balance in favor of humanity” in the race to contain the risks from rapid advances in cutting-edge artificial intelligence.

    Speaking after two days of talks at Bletchley Park, a former codebreaking spy base near London, Sunak said agreements struck at the meeting of politicians, researchers and business leaders “show that we have both the political will and the capability to control this technology, and secure its benefits for the long term.”

    Sunak organized the summit as a forum for officials, experts and the tech industry to better understand cutting-edge, “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.

    He hailed the gathering’s achievements, including a “Bletchley Declaration” committing nations to tackle the biggest threats from artificial intelligence, a deal to vet tech firms’ AI models before their release, and an agreement to call together a global expert panel on AI, inspired by the United Nations’ climate change panel.

    Some argue that governments must go further and faster on oversight. Britain has no plans for specific legislation to regulate AI, unlike the U.S. and the European Union.

    Vice President Kamala Harris attended the summit, stressing steps the Biden administration has taken to hold tech firms to account. She said Thursday that the United States’ “bold action” should be “inspiring and instructive to other nations.”

    United Nations Secretary General Antonio Guterres urged a coordinated global effort, comparing risks from AI to the Nazi threat that Britain’s wartime codebreakers worked to combat.

    “Bletchley Park played a vital part in the computing breakthroughs that helped to defeat Nazism,” he said “The threat posed by AI is more insidious – but could be just as dangerous.”

    The U.N. chief, like many others, warned about the need to act swiftly to keep pace with AI’s breathtaking advances. General purpose AI chatbots like ChatGPT released over the past year stirred both amazement and fear with their ability to generate text, audio and images that closely resembled human work.

    “The speed and reach of today’s AI technology are unprecedented,” Guterres said. “The paradox is that in the future, it will never move as slowly as today. The gap between AI and its governance is wide and growing.”

    Sunak hailed the summit as a success, despite its arguably modest achievements. He managed to get 28 nations — including the U.S. and China — to sign up to working toward “shared agreement and responsibility” about AI risks, and to hold further meetings in South Korea and France over the next year.

    China did not attend the second day, which focused on meetings among what the U.K. termed a small group of countries “with shared values.” Sunak held a roundtable with politicians from the EU, the U.N., Italy, Germany, France and Australia.

    Announcing the expert panel on Thursday, Sunak said pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers” of AI, had agreed to chair production of its first report on the state of AI science.

    Sunak said likeminded governments and AI companies also had reached a “landmark agreement” to work together on testing the safety of AI models before they’re released to the public. Leading AI companies at the meeting including OpenAI, Google’s DeepMind, Anthropic and Inflection AI have agreed to “deepen access” to their frontier AI models, he said.

    Binding regulation for AI was not among the summit’s goals. Sunak said the U.K.’s approach should not be to rush into regulation but to fully understand AI first.

    Harris emphasized the U.S. administration’s more hands-on approach in a speech at the U.S. embassy on Wednesday, saying the world needs to act right away to address “the full spectrum” of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

    She announced a new U.S. AI safety institute to draw up standards for testing AI models for public use. She said it would collaborate with a similar U.K. institute announced by Sunak days earlier.

    One of the Biden administration’s main concerns is that advances in AI are widening inequality within societies and between countries. As a step towards addressing that, Britain’s Foreign Secretary James Cleverly announced a $100 million fund, supported by the U.K., the U.S. and others, to help ensure African countries get a share of AI’s benefits – and that 46 African languages are fed into its models.

    Cleverly told reporters that it’s crucial there is a “diversity of voice” informing AI.

    “If it was just Euro-Atlantic and China, we would miss stuff, potentially huge amounts of stuff,” he said.

    Sunak capped the summit with a cozy onstage chat with Tesla CEO Elon Musk at a business reception in London’s grand Lancaster House. Musk is among tech executives who have warned that AI could pose a risk to humanity’s future.

    “Here we are for the first time, really in human history, with something that is going to be far more intelligent than us,” Musk said at the summit. “It’s not clear to me if we can control such a thing.”

    The conversation with Sunak — streamed after it happened on the Musk-owned social network X — ranged over topics from whether AI would remove the need for work to the need to have an off-switch for humanoid robots that could turn on their makers.

    Musk likened AI to “a magic genie” that could grant all wishes, but noted that those fairytales rarely end well.

    “One of the future challenges is how do you find meaning in life?” he said.

    The pair did not take questions from journalists.

    Sunak said earlier that it was important not to be “alarmist” about the technology, which could bring huge benefits.

    “But there is a case to believe that it may pose a risk on a scale like pandemics and nuclear war, and that’s why, as leaders, we have a responsibility to act to take the steps to protect people, and that’s exactly what we’re doing,” he said.

    Source link

  • AI could spark the next financial crisis, SEC Chair Gary Gensler says

    AI could spark the next financial crisis, SEC Chair Gary Gensler says

    Securities and Exchange Commission Chair Gary Gensler has plenty to worry about as he seeks to bring order and fairness to America’s $100 trillion capital markets, and there are few issues that cause him more concern than the spread of artificial-intelligence technology. 

    In an exclusive interview with MarketWatch, the regulator argued that generative AI technologies in the vein of ChatGPT have the potential to revolutionize the way we invest by leveraging large data sets to “predict things that were unimaginable even 10 years ago,” but that these new powers will come with great risks. 

    “A growing issue is that [AI] could lead to a risk in the whole system,” Gensler said. “As many financial actors rely on one or just two or three models in the middle … you create a monoculture, you create herding.” 

    Gary Gensler: AI could pose ‘a risk in the whole system.’

    This herding effect can be dangerous if there is a flaw in the model that might reverberate through markets during a time of stress, causing abrupt and unpredictable price changes in markets. Gensler pointed to the examples of cloud computing and search engines as markets for tech products that have quickly become dominated by one or two major players, and he said he worries about similar concentration in the market for AI technology.

    The regulator said this issue is especially difficult because of the fragmented nature of the U.S. regulatory apparatus, which relies on the SEC to oversee securities markets while other agencies have responsibility for banks or commodity markets. 

    “This is more of a cross-entity issue,” Gensler said. “That’s the challenge for these new technologies.”

    As SEC chair, Gensler has escalated his regulatory agency’s crackdown on the cryptocurrency industry in 2023 by launching lawsuits against Binance and Coinbase, the two largest digital asset exchanges in the world by trading volume. The SEC alleges the two companies are operating unregistered securities exchanges in the U.S., but the companies say they are not running afoul of securities laws.

    Gensler is simultaneously pushing forward the most fundamental market-structure reform measures in a generation. Gensler lands on The MarketWatch 50 list of the most influential people in markets

    But AI is another issue that Gensler is starting to ring alarm bells over. There’s a little bit of irony because the promise of AI has largely been responsible for the S&P 500’s
    SPX
    gains in 2023. The SEC chair said that his agency is already contemplating new rules to regulate artificial intelligence. For example, the SEC proposed a rule this summer to address conflicts of interest associated with stock brokers and investment advisors that leverage algorithms to predict and guide investor decisions through their smartphone applications or web interfaces.

    The industry is pushing back on the proposal, arguing that existing rules are sufficient to prevent harm to investors and that a new rule would prevent brokers from using technology to create a better experience for clients. 

    Gensler said that the SEC benefits from such feedback, but still believes that regulators must be vigilant about the impact of these so-called predictive analytical tools. “If they do that to suggest a certain movie on a streaming app, okay,” he said. “But if they’re doing that about your financial help … we should address those conflicts.”

    Source link

  • Chinese tech giant Alibaba launches upgraded AI model to challenge Microsoft, Amazon

    Chinese tech giant Alibaba launches upgraded AI model to challenge Microsoft, Amazon

    An Alibaba Group sign is seen at the World Artificial Intelligence Conference in Shanghai, July 6, 2023.

    Aly Song | Reuters

    Alibaba on Tuesday launched the latest version of its artificial intelligence model, as the Chinese technology giant looks to compete with U.S. tech rivals such as Amazon and Microsoft.

    China’s biggest cloud computing and e-commerce player announced Tongyi Qianwen 2.0, its latest large language model (LLM). A LLM is trained on vast amounts of data and forms the basis for generative AI applications such as ChatGPT, which is developed by U.S. firm OpenAI.

    Alibaba called Tongyi Qianwen 2.0 a “substantial upgrade from its predecessor,” which was introduced in April.

    Tongyi Qianwen 2.0 “demonstrates remarkable capabilities in understanding complex instructions, copywriting, reasoning, memorizing, and preventing hallucinations,” Alibaba said in a press release. Hallucinations refer to AI that presents incorrect information.

    Alibaba also released AI models designed for applications in specific industries and uses — such as legal counselling and finance — as it angles in on businesses.

    The Hangzhou-headquartered company also announced the GenAI Service Platform, which lets companies build their own generative AI applications, using their own data. One of the fears that businesses have about public generative AI products like ChatGPT is that data could be accessed by third parties.

    Alibaba and other major cloud players are offering tools for companies to build their own generative AI products using their own data, which would protected by these providers as part of the service package.

    Microsoft’s Azure OpenAI Studio and Amazon Web Service’s Bedrock are two rival services.

    While Alibaba is the biggest cloud player by market share in China, the company is trying to catch up with the likes of Amazon and Microsoft overseas.

    Source link

  • Google Bard asked Bill Nye how AI can help avoid the end of the world. Here’s what ‘The Science Guy’ said

    Google Bard asked Bill Nye how AI can help avoid the end of the world. Here’s what ‘The Science Guy’ said

    You may not know this, but Bill Nye, “The Science Guy,” has professional experience overseeing new and potentially dangerous innovations. Before he became a celebrity science educator, Nye worked as an engineer at Boeing during a period of rapid changes in aviation control systems and the need to make sure that the outputs from new systems were understood. And going all the way back to the days of the steamship engine innovation, Nye says that “control theory” has always been a key to the introduction of new technology.

    It will be no different with artificial intelligence. While not an AI expert, Nye said the basic problem everyone should be concerned about with AI design is that we can understand what’s going into the computer systems, but we can’t be sure what is going to come out. Social media was an example of how this problem already has played out in the technology sector.

    Speaking on Thursday at the CNBC Technology Executive Council Summit on AI in New York City, Nye said that the rapid rise of AI means “everyone in middle school all the way through to getting a PhD. in comp sci will have to learn about AI.”

    But he isn’t worried about the impact of the tech on students, referencing the “outrage” surrounding the calculator. “Teachers got used to them; everyone has to take tests with calculators,” he said. “This is just what’s going to be. … It’s the beginning, or rudiments, of computer programming.”

    More important in making people who are not computer literate understand and accept AI is good design in education. “Everyone already counts on their phone to tell them what side of the street they are on,” Nye said. “Good engineering invites right use. People throw around ‘user-friendly’ but I say ‘user figure- outtable.’”

    Overall, Nye seems more worried about students not becoming well-rounded in their analytical skills than personally thinking AI is going to wipe out humanity. And to make sure the risk of the latter can be minimized, he says we need to focus on the former in education. Computer science may become essential learning, but underlying his belief that “the universe is knowable,” Nye said that the most fundamental skill children need to learn is critical thinking. It will play a big role in AI, he says, due to both its complexity and its susceptibility to misuse, such as deep fakes. “We want people to be able to question. We don’t want a smaller and smaller fraction of people understanding a more complex world,” Nye said.

    During the conversation with CNBC’s Tyler Mathisen at the TEC Summit on AI, CNBC surprised Nye with a series of questions that came from a prompt given to the Google generative AI Bard: What should we ask Bill Nye about AI?

    Bard came up with about 20 questions covering a lot of ground:

    How should we ensure AI is used for good and not harm?

    “We need regulations,” Nye said. 

    What should we be teaching our children about AI?

    “How to write computer code.”

    What do you think about the chance for AI to surpass human intelligence?

    “It already does.”

    What is the most important ethical consideration for AI development?

    “That we need a class of legislators that can understand it well enough to create regulations to handle it, monitor it,” he said.

    What role can AI play in addressing some of the world’s most pressing problems such as climate change and poverty?

    Nye, who has spent a lot of time thinking about how the world may end — he still thinks giant solar flares are a bigger risk than AI which, he reminded the audience, “you can turn off” — said this was an “excellent question.”

    He gave his most expansive responses to the AI on this point.

    Watch the video above to see all of Bill Nye’s answers to the AI about how it can help save the world.

     

     

     

    Source link

  • Google adds generative AI threats to its bug bounty program | TechCrunch

    Google adds generative AI threats to its bug bounty program | TechCrunch

    Google has expanded its vulnerability rewards program (VRP) to include attack scenarios specific to generative AI.

    In an announcement shared with TechCrunch ahead of publication, Google said: “We believe expanding the VRP will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone,” 

    Google’s vulnerability rewards program (or bug bounty) pays ethical hackers for finding and responsibly disclosing security flaws. 

    Given that generative AI brings to light new security issues, such as the potential for unfair bias or model manipulation, Google said it sought to rethink how bugs it receives should be categorized and reported. 

    The tech giant says it’s doing this by using findings from its newly formed AI Red Team, a group of hackers that simulate a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders to hunt down security weaknesses in technology. The team recently conducted an exercise to determine the biggest threats to the technology behind generative AI products like ChatGPT and Google Bard.

    The team found that large language models (or LLMs) are vulnerable to prompt injection attacks, for example, whereby a hacker crafts adversarial prompts that can influence the behavior of the model. An attacker could use this type of attack to generate text that is harmful or offensive or to leak sensitive information. They also warned of another type of attack called training-data extraction, which allows hackers to reconstruct verbatim training examples to extract personally identifiable information or passwords from the data. 

    Both of these types of attacks are covered in the scope of Google’s expanded VRP, along with model manipulation and model theft attacks, but Google says it will not offer rewards to researchers who uncover bugs related to copyright issues or data extraction that reconstructs non-sensitive or public information.

    The monetary rewards will vary on the severity of the vulnerability discovered. Researchers can currently earn $31,337 if they find command injection attacks and deserialization bugs in highly sensitive applications, such as Google Search or Google Play. If the flaws affect apps that have a lower priority, the maximum reward is $5,000.

    Google says that it paid out more than $12 million in rewards to security researchers in 2022. 

    Carly Page

    Source link

  • Amazon’s new generative AI tool lets advertisers enhance product images | TechCrunch

    Amazon’s new generative AI tool lets advertisers enhance product images | TechCrunch

    Amazon is rolling out a new AI image generation tool for advertisers to generate backgrounds based on product descriptions and themes. Amazon is currently beta testing the tool with select advertisers and will expand availability “over time,” the company says.

    To use the tool, advertisers upload a photo, type an image description describing a background they want, select a theme and then click “Generate.” Advertisers can also refine the image by entering another text prompt. Then, it allows them to test multiple versions in order to optimize performance.

    The e-commerce giant uses an image of a toaster as an example, featuring a kitchen table decorated for autumn adorned with fall leaves and a bright orange pumpkin. The beta tool isn’t perfect, obviously. You may notice that the image also features a not-so-normal fork in the lower-right corner. The backdrop looks convincing enough, though.

    Image Credits: Amazon

    Many brands are beginning to look to generative AI to help simplify the process of creating an ad—which can be costly and time-consuming. Even large companies like Nestle and Unilever have reportedly admitted to using software like ChatGPT and DALL-E, per Reuters.

    “Producing engaging and differentiated creatives can increase cost and often requires introducing additional expertise into the advertising process,” Colleen Aubrey, senior vice president of Amazon Ads products and technology, said in a statement. “At Amazon Ads, we are always thinking about ways we can reduce friction for our advertisers, provide them with tools that deliver more impact while minimizing effort, and ultimately, deliver a better advertising experience for our customers.”

    Amazon hopes its new feature can help brands improve their ads’ performance. Rather than a standard product image with a boring white background, advertisers can place their product in a lifestyle context that tells a creative story. As a result, Amazon’s new tool could increase click-through rates by 40%, according to the company.

    Amazon has ramped up its generative AI efforts in recent months. For instance, the company introduced a tool to help sellers write product descriptions. It also leverages the technology to summarize customer reviews.

    Lauren Forristal

    Source link