ReportWire

Tag: generative ai

  • Meta CEO Mark Zuckerberg touts to employees ‘incredible breakthroughs’ the company has seen in A.I.

    Mark Zuckerberg, CEO, Meta Platforms, in July 2021.

    Kevin Dietsch | Getty Images News | Getty Images

    Meta CEO Mark Zuckerberg wants his workforce to know the company is in the middle of the artificial intelligence race.

    During a meeting with employees Thursday in the Hacker Square pavilion at Meta’s Menlo Park headquarters, Zuckerberg discussed Meta’s AI efforts, a spokesperson confirmed. It was the first event held there since before the Covid-19 pandemic.

    Zuckerberg addressed Meta’s recent layoffs at the beginning of the gathering but focused mostly on the company’s projects in the burgeoning field of generative AI, which uses written prompts to create conversational text and compelling visuals.

    “In the last year, we’ve seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI and that gives us the opportunity to now go take that technology, push it forward, and build it into every single one of our products,” Zuckerberg said, according to a statement shared with CNBC. “We’re going to play an important and unique role in the industry in bringing these capabilities to billions of people in new ways that other people aren’t going to do.”

    Axios first reported on the meeting and the AI projects Meta is pursuing.

    While Meta has long touted its investments in AI, the company hasn’t been at the center of the conversation regarding the latest consumer applications, which have come from Microsoft-backed OpenAI, Google and Microsoft itself.

    At the meeting Thursday, Zuckerberg and other Meta executives detailed some of the company’s work incorporating generative AI models into the metaverse, the nascent virtual world Meta is sinking billions of dollars into every quarter to try and make a reality. In particular, they talked about how AI can help create the 3D visuals for the metaverse.

    Meta said it’s giving employees access to several internal generative AI tools to help develop prototypes, and the company is hosting a hackathon for workers to show off their AI projects.

    The company also plans to debut a service for Instagram users that will let them modify photos via text prompts and share them in the app’s Stories feature.

    Additionally, Meta plans for its Messenger and WhatsApp services to eventually include the ability for users to engage with more sophisticated AI-powered chatbots as a form of entertainment.

    Meta executives told employees the company is still committed to releasing AI research to the open-source community. However, they didn’t address a recent letter from Sens. Richard Blumenthal, D-CT, and Josh Hawley, R-MO, expressing concern over a public leak of the company’s LLaMA language model and “the potential for its misuse in spam, fraud, malware, privacy violations, harassment and other wrongdoing and harms.”

    Last week, Meta told employees they will need to work at the company’s offices three days a week, starting in September. Amazon and Google have also altered their previous work-from-home policies in recent months.

    WATCH: Meta has a lot of work to do before its VR headset becomes mainstream, says Jefferies’ Brent Thill

  • A.I. doomers are a ‘cult’ — here’s the real threat, according to Marc Andreessen

    Andreessen Horowitz partner Marc Andreessen

    Justin Sullivan | Getty Images

    Venture capitalist Marc Andreessen is known for saying that “software is eating the world.” When it comes to artificial intelligence, he claims people should stop worrying and build, build, build.

    On Tuesday, Andreessen published a nearly 7,000-word missive laying out his views on AI, the risks it poses and the regulation he believes it requires. In trying to counteract all the recent talk of “AI doomerism,” he presents what could be seen as an overly idealistic take on the technology’s implications.

    ‘Doesn’t want to kill you’

    Andreessen starts off with an accurate take on AI, or machine learning, calling it “the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.”

    AI isn’t sentient, he says, despite the fact that its ability to mimic human language can understandably fool some into believing otherwise. It’s trained on human language and finds high-level patterns in that data. 

    “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote. “And AI is a machine – is not going to come alive any more than your toaster will.”

    Andreessen writes that there’s a “wall of fear-mongering and doomerism” in the AI world right now. Without naming names, he’s likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity. Last week, Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis and others signed a statement from the Center for AI Safety warning of “the risk of extinction from AI.”

    Tech CEOs are motivated to promote such doomsday views because they “stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition,” Andreessen wrote.  

    Many AI researchers and ethicists have also criticized the doomsday narrative. One argument is that too much focus on AI’s growing power and its future threats distracts from real-life harms that some algorithms cause to marginalized communities right now, rather than in an unspecified future.

    But that’s where most of the similarities between Andreessen and the researchers end. Andreessen writes that people in roles like AI safety expert, AI ethicist and AI risk researcher “are paid to be doomers, and their statements should be processed appropriately.” In actuality, many leaders in the AI research, ethics, and trust and safety community have voiced clear opposition to the doomer narrative and instead focus on mitigating today’s documented risks of the technology.

    Instead of acknowledging any documented real-life risks of AI – its biases can infect facial recognition systems, bail decisions, criminal justice proceedings, mortgage approval algorithms and more – Andreessen claims AI could be “a way to make everything we care about better.” 

    He argues that AI has huge potential for productivity, scientific breakthroughs, creative arts and reducing wartime death rates.

    “Anything that people do with their natural intelligence today can be done much better with AI,” he wrote. “And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.” 

    From doomerism to idealism

    Though AI has made significant strides in many areas, such as vaccine development and chatbot services, the technology’s documented harms have led many experts to conclude that, for certain applications, it should never be used.

    Andreessen describes these fears as irrational “moral panic.” He also promotes reverting to the tech industry’s “move fast and break things” approach of yesteryear, writing that both big AI companies and startups “should be allowed to build AI as fast and aggressively as they can” and that the tech “will accelerate very quickly from here – if we let it.” 

    Andreessen, who gained prominence in the 1990s for developing the first popular internet browser, started his venture firm with Ben Horowitz in 2009. Two years later, he wrote an oft-cited blog post titled “Why software is eating the world,” which said that health care and education were due for “fundamental software-based transformation” just as so many industries before them.

    Eating the world is exactly what many people fear when it comes to AI. Beyond just trying to tamp down those concerns, Andreessen says there’s work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms.

    “Governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities,” he said.  

    In Andreessen’s own idealist future, “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” He expresses similar visions for AI’s role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander. 

    Is China the real threat?

    Near the end of his post, Andreessen points out what he calls “the actual risk of not pursuing AI with maximum force and speed.”

    That risk, he says, is China, which is developing AI quickly and with highly concerning authoritarian applications. According to years of documented cases, the Chinese government leans on surveillance AI, such as using facial recognition and phone GPS data to track and identify protesters.

    To head off the spread of China’s AI influence, Andreessen writes, “We should drive AI into our economy and society as fast and hard as we possibly can.”

    He then offers a plan for aggressive AI development by big tech companies and startups alike, one that would harness the “full power of our private sector, our scientific establishment, and our governments.”

    Andreessen writes with a level of certainty about where the world is headed, but he’s not always great at predicting what’s coming.

    His firm launched a $2.2 billion crypto fund in mid-2021, shortly before the industry began to crater. And one of its big bets during the pandemic was on social audio startup Clubhouse, which soared to a $4 billion valuation while people were stuck at home looking for alternative forms of entertainment. In April, Clubhouse said it was laying off half its staff in order to “reset” the company.

    Throughout Andreessen’s essay, he calls out the ulterior motives that others have when it comes to publicly expressing their views on AI. But he has his own. He wants to make money on the AI revolution, and is investing in startups with that goal in mind.

    “I do not believe they are reckless or villains,” he concluded in his post. “They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.”

    WATCH: CNBC’s full interview with Altimeter Capital founder Brad Gerstner on A.I. risks

  • A video game developer with nearly 40% upside because of A.I. opportunity, according to Bernstein

  • Is it real or made by AI? Europe wants a label for that as it fights disinformation

    LONDON — The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

    EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

    Jourova said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc’s voluntary agreement on combating disinformation to dedicate efforts to tackling the AI problem.

    Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

    Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.
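
    What “clearly label this to users” could mean in practice is a machine-readable disclosure attached to each piece of generated content. The sketch below is purely illustrative: the field names and the label_ai_content helper are invented for this example and do not reflect any platform’s actual schema or any specific provenance standard.

    ```python
    # Minimal sketch of a machine-readable "AI-generated" label.
    # All field names here are hypothetical, invented for illustration;
    # real platforms and provenance standards define their own schemas.
    import json
    from datetime import datetime, timezone

    def label_ai_content(content_id: str, generator: str) -> str:
        """Attach a disclosure record to a piece of generated media."""
        record = {
            "content_id": content_id,
            "ai_generated": True,
            "generator": generator,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
            "user_notice": "This content was created with generative AI.",
        }
        return json.dumps(record)

    print(label_ai_content("img-0001", "example-image-model"))
    ```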

    Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

    The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won’t take effect for several years.

    Officials in the EU, which is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative artificial intelligence.

    The voluntary commitments in the disinformation code will soon become legal obligations under the EU’s Digital Services Act, which will force the biggest tech companies by the end of August to better police their platforms to protect users from hate speech, disinformation and other harmful material.

    Jourova said, however, that those companies should start labeling AI-generated content immediately.

    Most of those digital giants are already signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.

    Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

    The exit drew a stern rebuke, with Jourova calling it a mistake.

    “Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

  • OpenAI boss ‘heartened’ by talks with world leaders over will to contain AI risks

    The CEO of OpenAI says he is encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing

    By TIA GOLDENBERG, Associated Press

    FILE – OpenAI’s CEO Sam Altman gestures while speaking at University College London as part of his world tour of speaking engagements in London, on May 24, 2023. Altman said Monday, June 5, 2023 he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing. (AP Photo/Alastair Grant, File)

    The Associated Press

    TEL AVIV, Israel — OpenAI CEO Sam Altman said Monday he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing.

    Altman visited Tel Aviv, a tech powerhouse, as part of a world tour that has so far taken him to several European capitals. Altman’s tour is meant to promote his company, the maker of ChatGPT — the popular AI chatbot — which has unleashed a frenzy around the globe.

    “I am very heartened as I’ve been doing this trip around the world, getting to meet world leaders,” Altman said during a visit with Israel’s ceremonial President Isaac Herzog. Altman said his discussions showed “the thoughtfulness” and “urgency” among world leaders over how to figure out how to “mitigate these very huge risks.”

    The world tour comes after hundreds of scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a warning about the perils that artificial intelligence poses to humankind. Altman was also a signatory.

    Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots. Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.

    “With the great opportunities of this incredible technology, there are also many risks to humanity and to the independence of human beings in the future,” Herzog told Altman. “We have to make sure that this development is used for the wellness of humanity.”

    Israel has emerged in recent years as a tech leader, with the industry producing some noteworthy technology used across the globe.

    Among its more controversial exports has been Pegasus, a powerful and sophisticated spyware product by the Israeli company NSO, which critics say has been used by authoritarian countries to spy on activists and dissidents. The Israeli military also has begun using artificial intelligence for certain tasks, including crowd control procedures.

    Altman has met with world leaders including British Prime Minister Rishi Sunak, French President Emmanuel Macron, Spanish Prime Minister Pedro Sanchez and German Chancellor Olaf Scholz.

    Altman tweeted that he heads to Jordan, Qatar, the United Arab Emirates, India, and South Korea this week.

  • ‘Not just a fad’: Firm launches fund designed to capitalize on A.I. boom

    A major ETF provider is betting the artificial intelligence boom is just starting.

    Roundhill Investments launched the Generative AI & Technology ETF (CHAT) less than 20 days ago. It’s the first-ever exchange-traded fund designed to track companies involved in generative AI and other related technologies.

    “These companies, we believe, are not just a fad. They’re powering something that could be as ubiquitous as the internet itself,” the firm’s chief strategy officer, Dave Mazza, told “ETF Edge” this week. “We’re not talking about hopes and dreams [or] some theme or fad that could happen 30 years in the future which may change the world.”

    Mazza notes the fund includes not just pure-play AI companies like C3.ai but also large-cap tech companies such as Microsoft and AI chipmaker Nvidia.

    Nvidia is the fund’s top holding at 8%, according to the company website. Its shares are up almost 42% over the past two months. Since the beginning of the year, Nvidia stock has soared 169%.

    “This [AI] is an area that’s going to get a lot of attention,” said Mazza.

    His bullish forecast comes amid concerns that AI stocks are in a bubble that will pop and take down the Big Tech rally.

    In a recent interview on CNBC’s “Fast Money,” Richard Bernstein Advisors’ Dan Suzuki — a Big Tech bear since June 2021 — compared the AI rally to the dot-com bubble in the late 1990s.

    “People jump from narrative to narrative,” the firm’s deputy chief investment officer said on Wednesday. “I love the technology. I think the applications will be huge. That doesn’t mean it’s a good investment.”

    The CHAT ETF is up more than 8% since it started trading on May 18.

  • AI chips are hot. Here’s what they are, what they’re for and why investors see gold

    SAN FRANCISCO (AP) —

    The hottest thing in technology is an unprepossessing sliver of silicon closely related to the chips that power video game graphics. It’s an artificial intelligence chip, designed specifically to make building AI systems such as ChatGPT faster and cheaper.

    Such chips have suddenly taken center stage in what some experts consider an AI revolution that could reshape the technology sector — and possibly the world along with it. Shares of Nvidia, the leading designer of AI chips, rocketed up almost 25% last Thursday after the company forecast a huge jump in revenue that analysts said indicated soaring sales of its products. The company was briefly worth more than $1 trillion on Tuesday.

    SO WHAT ARE AI CHIPS, ANYWAY?

    That isn’t an easy question to answer. “There really isn’t a completely agreed upon definition of AI chips,” said Hannah Dohmen, a research analyst with the Center for Security and Emerging Technology.

    In general, though, the term encompasses computing hardware that’s specialized to handle AI workloads — for instance, by “training” AI systems to tackle difficult problems that can choke conventional computers.

    VIDEO GAME ORIGINS

    Three entrepreneurs founded Nvidia in 1993 to push the boundaries of computational graphics. Within a few years, the company had developed a new chip called a graphics processing unit, or GPU, which dramatically sped up both development and play of video games by performing multiple complex graphics calculations at once.

    That technique, known formally as parallel processing, would prove key to the development of both games and AI. Two graduate students at the University of Toronto used a GPU-based neural network to win the prestigious 2012 ImageNet AI competition by identifying images at much lower error rates than competitors.
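
    To make “performing multiple complex calculations at once” concrete, here is a minimal Python sketch, with NumPy’s array operations standing in for a GPU: the same arithmetic is applied across a whole array in one step instead of element by element. The workload size is arbitrary and the timings are illustrative only.

    ```python
    # Illustrative only: NumPy's vectorized math stands in for GPU-style
    # parallel processing. One operation is applied to every element at once.
    import time
    import numpy as np

    pixels = np.random.rand(2_000_000)  # arbitrary workload size

    # Sequential: one multiply-add at a time, like a plain CPU loop.
    start = time.perf_counter()
    out_seq = [0.5 * p + 0.1 for p in pixels]
    t_loop = time.perf_counter() - start

    # Parallel-style: the same multiply-add over the whole array in one call.
    start = time.perf_counter()
    out_vec = 0.5 * pixels + 0.1
    t_vec = time.perf_counter() - start

    assert np.allclose(out_seq, out_vec)  # identical results, different speed
    print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.4f}s")
    ```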

    The win kick-started interest in AI-related parallel processing, opening a new business opportunity for Nvidia and its rivals while providing researchers powerful tools for exploring the frontiers of AI development.

    MODERN AI CHIPS

    Eleven years later, Nvidia is the dominant supplier of chips for building and updating AI systems. One of its recent products, the H100 GPU, packs in 80 billion transistors — about 13 billion more than Apple’s latest high-end processor for its MacBook Pro laptop. Unsurprisingly, this technology isn’t cheap; at one online retailer, the H100 lists for $30,000.

    Nvidia doesn’t fabricate these complex GPU chips itself, a task that would require enormous investments in new factories. Instead it relies on Asian chip foundries such as Taiwan Semiconductor Manufacturing Co. and South Korea’s Samsung Electronics.

    Some of the biggest customers for AI chips are cloud-computing services such as those run by Amazon and Microsoft. By renting out their AI computing power, those services make it possible for smaller companies and groups that couldn’t afford to build their own AI systems from scratch to use cloud-based tools to help with tasks that can range from drug discovery to customer management.

    OTHER USES AND COMPETITION

    Parallel processing has many uses outside of AI. A few years ago, for instance, Nvidia graphics cards were in short supply because cryptocurrency miners, who set up banks of computers to solve thorny mathematical problems for bitcoin rewards, had snapped up most of them. That problem faded as the cryptocurrency market collapsed in early 2022.

    Analysts say Nvidia will inevitably face tougher competition. One potential rival is Advanced Micro Devices, which already faces off with Nvidia in the market for computer graphics chips. AMD has recently taken steps to bolster its own lineup of AI chips.

    Nvidia is based in Santa Clara, California. Co-founder Jensen Huang remains the company’s president and chief executive.

  • Bank of America ranks the biggest A.I. winners in software stocks. Here are its top picks

  • China warns of artificial intelligence risks, calls for beefed-up national security measures

    BEIJING — China’s ruling Communist Party has warned of the risks posed by advances in artificial intelligence while calling for heightened national security measures.

    A meeting headed by party leader and President Xi Jinping on Tuesday urged “dedicated efforts to safeguard political security and improve the security governance of internet data and artificial intelligence,” the official Xinhua News Agency said.

    Xi, who is China’s head of state, commander of the military and chair of the party’s National Security Commission, called at the meeting for “staying keenly aware of the complicated and challenging circumstances facing national security.”

    China needs a “new pattern of development with a new security architecture,” Xinhua reported Xi as saying.

    The statements from Beijing followed a warning Tuesday by scientists and tech industry leaders in the U.S., including high-level executives at Microsoft and Google, about the perils that artificial intelligence poses to humankind.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

    China already dedicates vast resources to suppressing any perceived political threats to the party’s dominance, with spending on the police and security personnel exceeding that devoted to the military.

    While it relentlessly censors in-person protests and online criticism, citizens have continued to express dissatisfaction with policies, most recently the draconian lockdown measures enacted to combat the spread of COVID-19.

    China has been cracking down on its tech sector in an effort to reassert party control, but like other countries it is scrambling to find ways to regulate the developing technology.

    The most recent party meeting reinforced the need to “assess the potential risks, take precautions, safeguard the people’s interests and national security, and ensure the safety, reliability and ability to control AI,” the official newspaper Beijing Youth Daily reported Tuesday.

    Worries about artificial intelligence systems outsmarting humans and slipping out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.

    Sam Altman, CEO of ChatGPT-maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement on Tuesday that was posted on the Center for AI Safety’s website.

    More than 1,000 researchers and technologists, including Elon Musk, who is currently on a visit to China, had signed a much longer letter earlier this year calling for a six-month pause on AI development.

    The missive said AI poses “profound risks to society and humanity,” and some involved in the topic have proposed a United Nations treaty to regulate the technology.

    China warned as far back as 2018 of the need to regulate AI, but has nonetheless funded a vast expansion in the field as part of efforts to seize the high ground on cutting-edge technologies.

    A lack of privacy protections and strict party control over the legal system have also resulted in near-blanket usage of facial, voice and even walking-gait recognition technology to identify and detain those seen as threatening, such as political dissenters and religious minorities, especially Muslims.

    Members of the Uyghur and other mainly Muslim ethnic groups have been singled out for mass electronic monitoring and more than 1 million people have been detained in prison-like political re-education camps that China calls deradicalization and job training centers.

    AI’s risks are seen mainly in its potential to control robotic, self-governing weaponry, financial tools and the computers governing power grids, health centers, transportation networks and other key infrastructure.

    China’s unbridled enthusiasm for new technology, its willingness to tinker with imported or stolen research, and its stifling of inquiries into major events such as the COVID-19 outbreak all heighten concerns over its use of AI.

    “China’s blithe attitude toward technological risk, the government’s reckless ambition, and Beijing’s crisis mismanagement are all on a collision course with the escalating dangers of AI,” technology and national security scholars Bill Drexel and Hannah Kelley wrote in an article published this week in the journal Foreign Affairs.

  • OpenAI boss downplays fears ChatGPT maker could leave Europe over AI rules

    LONDON (AP) — OpenAI CEO Sam Altman on Friday downplayed worries that the ChatGPT maker could exit the European Union if it can’t comply with the bloc’s strict new artificial intelligence rules, coming after a top official rebuked him for comments raising such a possibility.

    Altman is traveling through Europe as part of a world tour to meet with officials and promote his AI company, which has unleashed a frenzy around the globe.

    At a stop this week in London, he said OpenAI might leave if the artificial intelligence rules that the EU is drawing up are too tough. That triggered a pointed reply on social media from European Commissioner Thierry Breton, accusing the company of blackmail.

    Breton, who’s in charge of digital policy, linked to a Financial Times article quoting Altman saying that OpenAI “will try to comply, but if we can’t comply we will cease operating.”

    Altman sought to calm the waters a day later, tweeting: “very productive week of conversations in europe about how to best regulate AI! we are excited to continue to operate here and of course have no plans to leave.”

    The European Union is at the forefront of global efforts to draw up guardrails for artificial intelligence, with its AI Act in the final stages after years of work. The rapid rise of general purpose AI chatbots like ChatGPT caught EU officials off guard, and they scrambled to add provisions covering so-called generative AI systems, which can produce convincingly human-like conversational answers, essays, images and more in response to questions from users.

    “There is no point in attempting blackmail — claiming that by crafting a clear framework, Europe is holding up the rollout of generative #AI,” Breton said in his tweet. He added that the EU aims to “assist companies in their preparation” for the AI Act.

    Altman tweeted that his European tour includes Warsaw, Poland; Munich, Germany; Paris; Madrid; Lisbon, Portugal; and London. Brussels, headquarters of the EU, has not been mentioned.

    He has met with world leaders including British Prime Minister Rishi Sunak, French President Emmanuel Macron, Spanish Prime Minister Pedro Sanchez and German Chancellor Olaf Scholz.

    Google CEO Sundar Pichai also has been crisscrossing Europe this week to discuss AI with officials like Scholz, European commissioners including Breton, Swedish Prime Minister Ulf Kristersson, and two EU lawmakers who spearheaded the Parliament’s work on the AI rules.

    “Good to discuss the need for responsible regulation and transatlantic convergence on AI,” Pichai tweeted.

    Google has released its own conversational chatbot, Bard, to compete with ChatGPT.

    Other tech company bosses have been wading into the debate this week over whether and how to regulate artificial intelligence, including Microsoft President Brad Smith, who unveiled a blueprint for public governance of AI on Thursday.

    Microsoft has invested billions in OpenAI and integrated ChatGPT-like technology into its products, including a chatbot for its Bing search engine.

    Altman told congressional lawmakers this month that AI should be regulated by a U.S. or global agency because increasingly powerful systems will need government intervention to reduce their risks.

    Altman was mobbed by students when he appeared in a “fireside chat” at University College London on Wednesday. He told the audience that the “right answer” to regulating AI is “probably something between the traditional European, U.K. approach and the traditional U.S. approach.”

    “I think you really don’t want to overregulate this before you know what shape the technology is going to be,” Altman said.

    There’s still potential to come up with “some sort of global set of norms and enforcement,” he said, adding that AI regulation has been a “recurring topic” on his world tour, which has also included stops in Toronto, Rio de Janeiro and Lagos, Nigeria.

  • Regulators take aim at AI to protect consumers and workers

    NEW YORK (AP) — As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI.

    Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.

    Ben Winters, senior counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.

    “There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching.’”

    In the past year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions, and lost benefit payments, after the institutions relied on new technology and faulty algorithms.

    There will be no “AI exemptions” to consumer protection, regulators say, pointing to these enforcement actions as examples.

    Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges” and that the agency is continuing to identify potentially illegal activity.

    Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the CFPB, all say they’re directing resources and staff to take aim at new tech and identify negative ways it could affect consumers’ lives.

    “One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”

    Under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for example, financial providers have a legal obligation to explain any adverse credit decision. Those regulations likewise apply to decisions made about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used.
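
    For illustration, the sketch below shows the kind of transparency regulators describe: a scoring rule whose every decision decomposes into named factors, so a denial can always cite its main reasons. The feature names, weights and threshold are hypothetical, invented for this example, and are not any lender’s actual model.

    ```python
    # Hypothetical, transparent credit-scoring sketch: every score is a sum of
    # named, weighted factors, so an adverse decision can cite its top reasons.
    # Feature names, weights and the approval threshold are invented for
    # illustration -- not any real lender's model.
    WEIGHTS = {
        "payment_history": 0.35,
        "credit_utilization": -0.30,
        "account_age_years": 0.02,
    }
    THRESHOLD = 0.5

    def decide(applicant: dict) -> tuple[bool, list[str]]:
        contributions = {
            name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
        }
        score = sum(contributions.values())
        approved = score >= THRESHOLD
        # Adverse-action reasons: the factors that pulled the score down most.
        reasons = sorted(contributions, key=contributions.get)[:2]
        return approved, [] if approved else reasons

    approved, reasons = decide(
        {"payment_history": 0.9, "credit_utilization": 2.5, "account_age_years": 1}
    )
    print(approved, reasons)  # False ['credit_utilization', 'account_age_years']
    ```

    A model that cannot produce this kind of factor-by-factor breakdown is, in effect, what the agencies mean by a decision too opaque to explain.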

    “I think there was a sense that, ’Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”

    EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally surveils workers.

    Burrows also described ways that algorithms might dictate how and when employees can work in ways that would violate existing law.

    “If you need a break because you have a disability or perhaps you’re pregnant, you need a break,” she said. “The algorithm doesn’t necessarily take into account that accommodation. Those are things that we are looking closely at … I want to be clear that while we recognize that the technology is evolving, the underlying message here is the laws still apply and we do have tools to enforce.”

    OpenAI’s top lawyer, at a conference this month, suggested an industry-led approach to regulation.

    “I think it first starts with trying to get to some kind of standards,” Jason Kwon, OpenAI’s general counsel, told a tech summit in Washington, DC, hosted by software industry group BSA. “Those could start with industry standards and some sort of coalescing around that. And decisions about whether or not to make those compulsory, and also then what’s the process for updating them, those things are probably fertile ground for more conversation.”

    Sam Altman, the head of OpenAI, which makes ChatGPT, said government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, suggesting the formation of a U.S. or global agency to license and regulate the technology.

    While there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House this month to answer hard questions about the implications of these tools.

    Winters, of the Electronic Privacy Information Center, said the agencies could do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used — the way regulators have done in the past with new consumer finance products and technologies.

    “The CFPB did a pretty good job on this with the ‘Buy Now, Pay Later’ companies,” he said. “There are so many parts of the AI ecosystem that are still so unknown. Publishing that information would go a long way.”

    ___

    Technology reporter Matt O’Brien contributed to this report.

    ___

    The Associated Press receives support from Charles Schwab Foundation for educational and explanatory reporting to improve financial literacy. The independent foundation is separate from Charles Schwab and Co. Inc. The AP is solely responsible for its journalism.

  • White House unveils new efforts to guide federal research of AI

    The White House has announced new efforts to guide federally backed research on artificial intelligence

    By AAMER MADHANI, Associated Press

    FILE – President Joe Biden speaks in the East Room of the White House, May 17, 2023, in Washington. The White House has announced new efforts to guide federally backed research on artificial intelligence. The moves announced Tuesday come as the Biden administration is looking to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology. (AP Photo/Evan Vucci, File)

    The Associated Press

    WASHINGTON — The White House on Tuesday announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.

    Among the moves unveiled by the administration was a tweak to the United States’ strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies.

    White House officials on Tuesday were also hosting a listening session with workers on their firsthand experiences with employers’ use of automated technologies for surveillance, monitoring, evaluation, and management. And the U.S. Department of Education’s Office of Educational Technology issued a report focused on the risks and opportunities related to AI in education.

    “The report recognizes that AI can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators,” the White House said in a statement. “It also underscores the risks associated with AI — including algorithmic bias — and the importance of trust, safety, and appropriate guardrails.”

    The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.

    Tools like the popular AI chatbot ChatGPT have sparked a surge of commercial investment in other AI tools that can write convincingly human-like text and churn out new images, music and computer code. The ease with which AI technology can be used to mimic humans has also propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

    Last week, Senate Majority Leader Chuck Schumer said Congress “must move quickly” to regulate artificial intelligence. He has also convened a bipartisan group of senators to work on legislation.

    The latest efforts by the administration come after Vice President Kamala Harris met earlier this month with the heads of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic. The administration also previously announced an investment of $140 million to establish seven new AI research institutes.

    The White House Office of Science and Technology Policy on Tuesday also issued a new request for public input on national priorities “for mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives.”

  • ChatGPT makes its debut as a smartphone app on iPhones

    ChatGPT is now a smartphone app, which could be good news for people who like to use the artificial intelligence chatbot and bad news for all the clone apps that have tried to profit off the technology.

    The free app became available on iPhones and iPads in the U.S. on Thursday and will later be coming to Android devices. Unlike the desktop web version, the mobile version on Apple’s iOS operating system also enables users to speak to it using their voice.

    The company that makes it, OpenAI, said it will remain ad-free but “syncs your history across devices.”

    “We’re starting our rollout in the U.S. and will expand to additional countries in the coming weeks,” said a blog post announcing the new app, which is described in the App Store as the “official app” by OpenAI.

    It’s been more than five months since OpenAI released ChatGPT to the public, sparking excitement and alarm at its ability to generate convincingly human-like essays, poems, form letters and conversational answers to almost any question. But the San Francisco startup never seemed to be in a hurry to get it onto phones — where most people access the internet.

    “We’re not trying to get people to use it more and more,” OpenAI CEO Sam Altman told U.S. senators this week in a hearing over how to regulate AI systems such as those built by his company.

    The delay in getting the product on phones helped fuel a rise of clones built on similar technology, some of which the security firm Sophos described as “fleeceware” in a report this week because they push unsuspecting users toward enrolling in a free trial that converts into a recurring subscription, or use intrusive advertising techniques.

    Privacy researcher Simon Migliano said the official ChatGPT app might eventually starve similar-sounding apps of new users, but that could take a while because many of those apps were given names deliberately intended to confuse people into thinking they already have the official app. They were also “hyper-optimized” to rank highly in Apple’s App Store search results, said Migliano, head of research at Top10VPN.com.

    “For many of those who have already downloaded a clone, it’s likely they will simply stick with the ChatGPT apps they already have and continue to have their personal data harvested and sold,” Migliano said.

    Altman told Congress this week that his company doesn’t try to maximize engagement because it doesn’t have an advertising-based business, and because it’s costly to train and run its AI models on computer chips known as graphics processing units.

    “In fact, we’re so short on GPUs, the less people use our products, the better,” Altman said.

    The new app does include an option to pay for a premium version of ChatGPT with additional features. Along with those subscriptions, the company makes money from developers and corporations that pay to integrate its AI models into their own apps and products.

    Its chief partner, Microsoft, has invested billions of dollars into the startup and has integrated ChatGPT-like technology into its own products, including a chatbot for its search engine Bing.

    The ChatGPT app will now compete for attention with the Bing chatbot already available on iPhones, and could eventually compete with a mobile version of rival Google’s chatbot, called Bard. Versions of OpenAI’s chatbot technology can also be found in other apps, such as the “My AI” feature on Snapchat.

  • ChatGPT chief says artificial intelligence should be regulated by a US or global agency

    The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

    “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at a Senate hearing.

    Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

    His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

    What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

    And while there’s no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

    Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal’s floor speeches and reciting ChatGPT-written opening remarks.

    The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

    The overall tone of senators’ questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry’s failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman’s expertise on averting problems that haven’t yet occurred.

    Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though had a more optimistic take on the future of work.

    Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

    But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

    That focus on a far-off “science fiction trope” of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

    “It’s the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out,” said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

    OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

    Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

    Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

    The panel’s ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday’s hearing marked “a critical first step towards understanding what Congress should do.”

    A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.’s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But IBM’s Montgomery instead asked Congress to take a “precision regulation” approach.

    “We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

  • ChatGPT’s chief testifies before Congress as concerns grow about artificial intelligence risks

    The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.

    “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

    His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

    What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

    And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

    Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone, trained on Blumenthal’s floor speeches, reciting remarks that ChatGPT wrote when he asked the chatbot how he should open the hearing.

    The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

    Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.

    Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

    Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

    Also testifying will be IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

    “Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

    Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM’s Montgomery asks Congress to take a “precision regulation” approach.

    “This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.

  • A.I. ‘controls humanity’ in the worst-case scenario but will probably just find us boring, says Stability AI CEO Emad Mostaque

    Emad Mostaque hopes A.I. will find us “a bit boring” but acknowledges that in the worst-case scenario it “basically controls humanity.” 

    Mostaque is CEO of the fast-growing London-based startup Stability AI, which popularized Stable Diffusion, a generative A.I. tool that lets users create often remarkably sophisticated images from nothing but text prompts. He made the comments in a BBC interview released this weekend.

    “If you have a more capable thing than you, what is democracy in that kind of environment? This is a known unknown,” he told the British broadcaster. “Because we can’t conceive of something more capable than us, but we all know people more capable than us. So, my personal belief is it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix: Humans are a bit boring, and it’ll be like, ‘Goodbye’ and ‘You’re kind of boring.’”

    “But I could be wrong,” he added. “I think it deserves to be discussed in a public sphere.” 

    In March, Mostaque joined Tesla CEO Elon Musk and Apple cofounder Steve Wozniak in signing an open letter calling for a pause in the development of A.I. systems more advanced than GPT-4, the latest model from Microsoft-backed OpenAI, which also makes ChatGPT and DALL-E 2 (the latter, like Stable Diffusion, converts text prompts to images).

    “If we have agents that are more capable than us that we cannot control that are going across the internet and [are] hooked up and they achieve a level of automation,” he told the BBC, “what does that mean?”

    Stability AI is racing ahead, however, in developing new products—including a text-to-animation tool released this week—and wooing investors. It’s seeking to raise funds at a $4 billion valuation, following a $1 billion valuation last October after raising about $100 million. (Coatue Management and Lightspeed Venture Partners are among its investors.)

    At the same time, Stability AI is being sued by Getty Images in a landmark case over copyright. Such a lawsuit was perhaps inevitable given that text-to-image A.I. models like Stable Diffusion are trained using billions of images pulled from the internet.

    Asked by the BBC what the worst-case scenario might be, Mostaque said: “Worst-case scenario is that it proliferates and basically it controls humanity. Because you could have a million of these things replicating effectively.” 

    Unusually, Stable Diffusion is open source, meaning anyone can examine the code, share it, and use it. 
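
    In practice, that openness means the released weights can be downloaded and run locally by anyone. Here is a minimal sketch, assuming the Hugging Face diffusers and torch packages; the checkpoint ID and prompt are illustrative.

        # Minimal sketch of running the openly released Stable Diffusion
        # weights locally; anyone can inspect or modify this pipeline.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",  # illustrative public checkpoint
            torch_dtype=torch.float16,
        )
        pipe = pipe.to("cuda")  # a single consumer GPU is enough

        image = pipe("an astronaut riding a horse, watercolor").images[0]
        image.save("output.png")

    That accessibility is what makes openness double-edged: the same few lines that let anyone audit or improve the model also let anyone copy and repurpose it, the trade-off Mostaque addresses below.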

    In March, Musk, who cofounded and helped fund OpenAI, criticized it for switching away from a nonprofit model, taking hefty investments from Microsoft, and not being open source. He tweeted:

    “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

    “I think there shouldn’t have to be a need for trust,” Mostaque told the BBC. “If you build open models and you do it in the open, you should be criticized if you do things wrong and hopefully lauded if you do some things right.”

    Steve Mollman

  • AI presents political peril for 2024 with threat to mislead voters

    WASHINGTON — Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

    The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

    No more.

    Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

    The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

    “We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

    AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

    Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

    “What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”

    Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.

    A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

    A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

    “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” reads the ad’s description from the RNC.

    The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

    “What happens if an international entity — a cybercriminal or a nation state — impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

    AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

    AI images appearing to show Trump’s mug shot also fooled some social media users even though the former president didn’t take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

    Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

    Some states have offered their own proposals for addressing concerns about deepfakes.

    Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.

    “It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.”

    Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.”

    Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

    Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward.
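
    The guardrail Nellis describes — the model drafts, a person signs off — is simple to enforce in code. A minimal sketch follows, with hypothetical function names and an illustrative model choice, not Authentic's actual tooling.

        # Hypothetical human-in-the-loop drafting flow: the model proposes,
        # a person approves, and nothing goes out without that approval.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]

        def draft_fundraising_email(topic: str) -> str:
            """Ask the model for a first draft; a human always reviews it."""
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # illustrative
                messages=[{
                    "role": "user",
                    "content": f"Draft a short fundraising email about {topic}.",
                }],
            )
            return response["choices"][0]["message"]["content"]

        draft = draft_fundraising_email("volunteer sign-ups")
        print(draft)
        if input("Approve and send? [y/N] ").strip().lower() == "y":
            print("Human-approved; handing off to the mailer.")
        else:
            print("Draft discarded.")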

    Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns.

    “The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” he said.

    ___

    Swenson reported from New York.

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

    ___

    Follow the AP’s coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence
