ReportWire

Tag: iab-artificial intelligence

  • Chinese artists boycott big social media platform over AI-generated images | CNN Business

    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong (CNN) — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreed to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them, including chatbots such as OpenAI’s ChatGPT and Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained on the work of human artists, with images of their artwork quietly scraped from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it easily goes away without any splash, people will remain silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own. Is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stability AI is embroiled in an ongoing lawsuit brought by stock image giant Getty Images, which alleges copyright infringement.

    GPS web extra: How does AI make art?

    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. The European Union was one of the first in the world to set rules in June on how companies can use AI, with the United States still holding discussions with Capitol Hill lawmakers and tech companies to develop legislation.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say that when drafting regulations, major powers like China likely prioritize wresting power from tech giants and pulling ahead in the global tech race over protecting individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provides “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business


    Hong Kong (CNN) — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Biden teases forthcoming executive order on AI | CNN Business



    (CNN) — 

    The White House plans to introduce a highly anticipated executive order in the coming weeks dealing with artificial intelligence, President Joe Biden said Wednesday.

    “This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation,” Biden said, “so America leads the way toward responsible AI innovation.”

    Biden offered no details on the contents of the coming order, which the White House had first announced in July. But his remarks offer greater insight into his administration’s timing.

    Biden’s signing of the order would build on an earlier administration proposal for an “AI Bill of Rights.” Civil society groups have urged the Biden administration to require federal agencies to implement the AI Bill of Rights as part of any executive order on the technology. Meanwhile, the US Senate is continuing to educate lawmakers on artificial intelligence in preparation for months of legislative work on the issue.

    In Wednesday’s remarks during a meeting of the Presidential Council of Advisors on Science and Technology, Biden described the recent conversations he’s had with AI leaders and experts.

    “Vast differences exist among them in terms of what potential it has, what dangers there are, and so, I have a keen interest in AI,” Biden said. “I’ve convened key experts on how to harness the power of artificial intelligence for good while protecting people from the profound risk it also presents.”

    “We can’t kid ourselves,” Biden continued. “[There is] profound risk if we don’t do it well.”

    Biden reiterated the United States’ commitment to working with international partners including the United Kingdom on developing safeguards for artificial intelligence.

    The meeting also saw presidential advisers showcasing to Biden several use cases for artificial intelligence. Maria Zuber, the panel’s co-chair, said the examples Biden would see during the meeting would include the use of AI to predict extreme weather linked to climate change; to “create materials that have properties we’ve never been able to create before”; and to “understand the origins of the universe, which is literally as big as it gets.”


  • Ringo Starr says The Beatles would ‘never’ fake John Lennon’s vocals with AI on new song | CNN



    (CNN) — 

    Ringo Starr is doubling down about the authenticity of the vocals on the highly anticipated new Beatles song recently teased by former bandmate Paul McCartney.

    Starr spoke with Rolling Stone for an upcoming podcast, saying the band would “never” fake the late John Lennon’s vocals for the new track, which instead uses AI to clean up previously recorded snippets.

    The song will also feature the voice of the late George Harrison, Starr confirmed.

    Paul McCartney says a ‘final’ Beatles song is coming

    “This was beautiful,” he said, noting, “it’s the final track you’ll ever hear with the four lads. And that’s a fact.”

    McCartney attempted to clarify last month how artificial intelligence is being used on what he said will be the “final” Beatles song.

    “We’ve seen some confusion and speculation about it,” he wrote in a note posted on his verified Instagram story at the time. “Seems to be a lot of guess work out there.”

    “Can’t say too much at this stage but to be clear, nothing has been artificially or synthetically created. It’s all real and we all play on it,” he added. “We cleaned up some existing recordings – a process which has gone on for years.”

    In a June 13 interview with BBC Radio 4’s “Today” program, the legendary musician, 81, said that AI technology was being used to release a “new” track featuring all four Beatles, including fellow band members Lennon and Harrison, who died in 1980 and 2001, respectively.

    “When we came to make what will be the last Beatles record – it was a demo that John had that we worked on and we just finished it up, it will be released this year – and we were able to take John’s voice and get it pure through this AI,” McCartney said. “So then we were able to mix the record as you would normally do.”

    Starr, meanwhile, is about to celebrate his 83rd birthday on July 7.

    The music icon, who just finished a spring tour with his All-Starr Band, told Rolling Stone that he’s feeling great. “You never know when you’re gonna drop, that’s the thing,” he added. “And I’m not dropping yet.”


  • Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years | CNN Business


    New York (CNN Business) — 

    Many top business leaders are seriously worried that artificial intelligence could pose an existential threat to humanity in the not-too-distant future.

    Forty-two percent of CEOs surveyed at the Yale CEO Summit this week say AI has the potential to destroy humanity five to ten years from now, according to survey results shared exclusively with CNN.

    “It’s pretty dark and alarming,” Yale professor Jeffrey Sonnenfeld said in a phone interview, referring to the findings.

    The survey, conducted at a virtual event held by Sonnenfeld’s Chief Executive Leadership Institute, found little consensus about the risks and opportunities linked to AI.

    Sonnenfeld said the survey included responses from 119 CEOs from a cross-section of business, including Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, the leaders of IT companies like Xerox and Zoom, as well as CEOs from pharmaceutical, media and manufacturing companies.

    The business leaders displayed a sharp divide over just how dangerous AI is to civilization.

    While 34% of CEOs said AI could potentially destroy humanity in ten years and 8% said that could happen in five years, 58% said that could never happen and they are “not worried.”

    In a separate question, Yale found that 42% of the CEOs surveyed say the potential catastrophe of AI is overstated, while 58% say it is not overstated.

    The findings come just weeks after dozens of AI industry leaders, academics and even some celebrities signed a statement warning of an “extinction” risk from AI.

    That statement, signed by OpenAI CEO Sam Altman, Geoffrey Hinton, the “godfather of AI,” and top executives from Google and Microsoft, called for society to take steps to guard against the dangers of AI.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

    Hinton recently decided to sound the alarm on the technology he helped develop after worrying about just how intelligent it has become.

    “I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper in May. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

    Hinton told CNN that if AI “gets to be much smarter than us, it will be very good at manipulation,” including “getting around restrictions we put on it.”

    While business leaders debate the dangers of AI, the CEOs surveyed by Yale displayed a degree of agreement about the rewards.

    Just 13% of the CEOs said the potential opportunity of AI is overstated, while 87% said it is not.

    The CEOs indicated AI will have the most transformative impact in three key industries: healthcare (48%), professional services/IT (35%) and media/digital (11%).

    As some inside and outside the tech world debate doomsday scenarios around AI, there are likely to be more immediate impacts, including the risks of misinformation and the loss of jobs.

    Sonnenfeld, the Yale management guru, told CNN that business leaders break down into five distinct camps when it comes to AI.

    The first group, as described by Sonnenfeld, consists of “curious creators,” “naïve believers” who argue that everything you can do, you should do.

    “They are like Robert Oppenheimer, before the bomb,” Sonnenfeld said, referring to the American physicist known as the “father of the atomic bomb.”

    Then there are the “euphoric true believers” who only see the good in technology, Sonnenfeld said.

    Noting the AI boom set off by the popularity of ChatGPT and other new tools, Sonnenfeld described “commercial profiteers” who are enthusiastically seeking to cash in on the new technology. “They don’t know what they’re doing, but they’re racing into it,” he said.

    And then there are the two camps pushing for an AI crackdown of sorts: alarmist activists and global governance advocates.

    “These five groups are all talking past each other, with righteous indignation,” Sonnenfeld said.

    The lack of consensus around how to approach AI underscores how even captains of industry are still trying to wrap their heads around the risks and rewards of what could be a real gamechanger for society.


  • 5 things to know for May 31: DeSantis, Artificial intelligence, Debt deal, UK, Ukraine | CNN



    (CNN) — 

    One-time Silicon Valley darling Elizabeth Holmes reported to prison Tuesday to begin serving out her 11-year sentence after being convicted on multiple charges of defrauding investors. Her life in prison will be quite a change, with mandatory jobs, very early mornings and no black turtlenecks.

    Here’s what else you need to know to Get Up to Speed and On with Your Day.

    (You can get “CNN’s 5 Things” delivered to your inbox daily. Sign up here.)

    Ron DeSantis officially kicked off his 2024 presidential campaign Tuesday in Iowa. While speaking to reporters after the event at an evangelical church outside Des Moines, the Florida governor leveled a series of shots at his rival, former President Donald Trump, painting him as selfish, unprincipled and petty. As the opening contest in the GOP nominating fight, Iowa holds a unique role in sizing up the presidential field. That’s especially important this election season since it’s the first time in over a century a former president is seeking to return to the White House. Meanwhile, Florida officials just changed state campaign finance guidelines specifically to allow DeSantis’ allies to move tens of millions of dollars to a super PAC supporting his campaign. The planned move has already drawn a watchdog complaint with the Federal Election Commission.

    Dozens of industry leaders and academics in the field of artificial intelligence have called for greater global attention to the possible threat of “extinction from AI.” A statement, signed by leading industry officials like OpenAI CEO Sam Altman and Geoffrey Hinton — the so-called “godfather” of artificial intelligence — highlights wide-ranging concerns about the ultimate danger of unchecked AI. Experts say humanity is still a ways off from the prospect of science-fiction-like AI overlords, but the flood of hype and investment into the AI industry has led to calls for regulation now before any major mishaps occur. The growing AI arms race has already generated more immediate concerns. Lawmakers, advocacy groups and tech insiders have raised alarms about the potential for AI-powered language models like ChatGPT to spread misinformation and displace jobs.

    AI developers warn of ‘risk of extinction’ to humans

    The House of Representatives is on track to vote today on a bill to suspend the nation’s debt limit through January 1, 2025. The bill already cleared a key hurdle Tuesday evening when the powerful House Rules Committee voted 7-6 to advance it to the floor. That’s a win for House Speaker Kevin McCarthy, who was tasked with convincing members of the committee to vote in favor even though some fellow Republicans don’t approve of the bill and have vowed to sink it in the chamber. Still, it appears a wide range of House members on both sides of the aisle are poised to support the deal. The Congressional Budget Office estimates the bill would reduce budget deficits by $1.5 trillion over the next 10 years, and reduce discretionary spending by a projected $1.3 trillion from 2024 to 2033.

    U.S. House to vote on debt limit bill amid criticism

    The UK’s inflation problems are getting so out of hand, officials are considering food price caps to curb the crisis. New data released this week shows the cost of store items, a metric known as shop price inflation, rose 9% through the year to May. That’s the highest it’s ever been since such stats were first recorded in 2005. Prime Minister Rishi Sunak is considering asking retailers to cap the price of essential food items, something the UK government tried in the 1970s to tepid effect. Economists say capping prices leads to lower supply and higher demand, resulting in shortages. The enduring shadow of Brexit still looms large over Britain’s economy, and some experts say the government should be focused on shedding burdensome regulations that resulted from the move instead of trying to control prices. 

    Russia’s war on Ukraine is increasingly spilling into Russian territory. The governor of Russia’s Belgorod region, which borders Ukraine, said four people were recently injured in a “massive strike” there. This is the latest in a series of strikes against Russian targets by Ukrainian forces. Russian President Vladimir Putin addressed the spate of attacks, saying Ukraine “chose the path of intimidation,” and is provoking Russia to “mirror actions.” Amid all the violence, scientists have another concern: International Atomic Energy Agency chief Rafael Grossi has outlined a plan to protect Ukraine’s Zaporizhzhia nuclear power plant and asked that Russia and Ukraine observe it to ensure the plant’s safety and security.

    Russia blames Ukraine after drone strikes in Moscow

    Alleged Russian ‘spy’ whale now in Swedish waters

    Patiently waiting for a mystery novel series about spy whales. 

    Michael Jordan was a ‘horrible player’ and ‘horrible to play with,’ says former Chicago Bulls teammate Scottie Pippen

    Dang, Scottie. Tell us how you really feel!

    Venice authorities discover why canal turned fluorescent green

    Given all the fluorescent things it could have been, this is quite a relief.

    This is the world’s first 3D-printed, cultivated fish fillet

    Mmm, science is delicious.

    Air New Zealand to weigh passengers before they board the airplane

    What an innovative way to make air travel even more stressful.

    1.4 million

    That’s about how many people have now been displaced in Sudan since a civil war erupted there in April, the United Nations Office for the Coordination of Humanitarian Affairs says. Hundreds have been killed in the violence, and reports of sexual assault are increasingly common. 

    “I just tried to follow the police commands but I guess that didn’t work.”

    — Aderrien Murry, the 11-year-old boy who was shot in the chest less than two weeks ago by a Mississippi police officer after he called 911 for help. The boy said he prayed and sang in the moments after he was shot as his mother tried to stop the bleeding. Aderrien’s family wants the officer fired, and is seeking restitution from the state. 


    A perfect day

    Bless people who put little collar cameras on their outdoor cats. These videos bring a type of peace I didn’t know existed.


  • The world’s biggest ad agency is going all in on AI with Nvidia’s help | CNN Business


    London (CNN) — 

    WPP, the world’s largest advertising agency, has teamed up with chipmaker Nvidia to create ads using generative artificial intelligence.

    The companies announced the partnership Monday, with Nvidia (NVDA) CEO Jensen Huang unveiling WPP’s new content engine during a demo at Computex Taipei.

    “Generative AI is changing the world of marketing at incredible speed. This new technology will transform the way that brands create content for commercial use,” WPP CEO Mark Read said in a statement.

    The platform will enable WPP’s (WPP) creative teams to integrate content from organizations such as Adobe and Getty Images with generative AI to produce advertising campaigns “more efficiently and at scale,” according to WPP. This would enable companies to make large volumes of advertising content, such as images or videos, “more tailored and immersive,” the company added.

    In the demo screened by Huang, WPP had created realistic footage of a car driving through a desert.

    The new AI-powered content engine means that same car could be placed on a street in London or pictured in Rio de Janeiro to target the Brazilian market — all without the need for costly on-location production.

    Just as advertising campaigns can be rapidly adapted for different countries or cities, they can also be customized for different digital channels, such as Facebook or TikTok, and their users.

    “You can build very finely tuned campaigns to resonate with an audience… On the other hand, you could make up imaginary scenarios that never existed in real life,” Greg Estes, vice president of developer programs at Nvidia, told CNN.

    The platform is the latest example of how AI is being rapidly deployed by major companies to enhance productivity and deliver new products to customers. Many in the advertising and media industries are concerned about threats to their jobs because of the way that AI is able to aggregate information and create visual content indistinguishable from photography.

    WPP said its new platform “outperforms current methods” of having people “manually create hundreds of thousands of pieces of content using disparate data coming from disconnected tools and systems.” In other words, the new technology could mean that much smaller creative teams are ultimately able to do the same amount of work.

    “It’s much easier to identify the jobs that AI will disrupt than it is to identify the jobs that AI will create,” Read told the Financial Times Monday. “We’ve applied AI a lot to our media business, but very little to the creative parts of our business.”

    Nvidia’s Huang said: “The world’s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,” adding that WPP would now enable brands to “deploy product experiences and compelling content at a level of realism and scale never possible before.”


  • 5 things to know for May 8: Texas shooting, King Charles, Title 42, Measles, ChatGPT | CNN



    (CNN) — 

    American flags will be lowered to half-staff this week at the White House, on military bases, and at all public buildings to honor the victims of the deadly mass shooting in Texas over the weekend. In the wake of the massacre, President Joe Biden again urged Congress to act: “Too many families have empty chairs at their dinner tables. Tweeted thoughts and prayers are not enough,” he said.

    Here’s what else you need to know to Get Up to Speed and On with Your Day.

    (You can get “CNN’s 5 Things” delivered to your inbox daily. Sign up here.)

    Eight people were killed and at least seven others were wounded when a gunman opened fire at an outlet mall in Allen, Texas, on Saturday — the latest mass shooting to shatter an American community. A Dallas-area medical group said it was treating patients ranging in age from 5 to 61 years old. The 33-year-old shooter was killed by a police officer who was already at the Dallas-area mall on an unrelated call. The gunman was armed with an AR-15 style rifle and had multiple weapons in his vehicle, according to police. The shooter’s motive remains unclear at this time, but officials are investigating his potential ties to right-wing extremism after he was found with an insignia on his clothing worn by some members of extremist groups, a law enforcement source said. Officials have also found he had an extensive social media presence that included neo-Nazi and White supremacist-related posts.

    Britain’s King Charles III was crowned Saturday in a once-in-a-generation royal event witnessed by hundreds of high-profile guests inside Westminster Abbey, as well as tens of thousands of well-wishers who gathered in central London. Scores of foreign dignitaries, British officials, celebrities and faith leaders attended the deeply religious ceremony. Once the King was crowned, his wife, Queen Camilla, was crowned in her own shorter ceremony. On Sunday, thousands of events and parties took place across the UK as part of the “Coronation Big Lunch.” But the historic weekend did not go without a display of dissidence. Police arrested more than 50 people during the coronation after controversially promising a “robust” approach to protesters.

    Missed it? Here’s King Charles’ coronation in 3 minutes

    The US is expecting to see an influx of border crossings when Title 42, the Trump-era policy that allowed officials to swiftly expel migrants who crossed the border illegally during the Covid-19 pandemic, expires on Thursday. Without Title 42, the primary border enforcement tool since March 2020, authorities will be returning to decades-old protocols at a time of unprecedented mass migration in the region, raising concerns within the Biden administration about a surge in the immediate aftermath of the policy’s lifting. Also on Thursday, the House is set to vote on Republicans’ wide-ranging border security package, GOP leadership sources told CNN. Last month, House Majority Leader Steve Scalise said Republicans have the necessary votes to pass the legislation in the chamber.


    U.S. prepares for a surge of migrants ahead of the end of Title 42

    A child in Maine has tested positive for measles, officials said, marking the first case in the state since 2019. Measles was declared eliminated from the US in 2000 thanks to an intensive vaccination program, according to the CDC. But vaccination rates in the US have dropped in recent years, sparking new outbreaks. The CDC recommends all children get two doses of the MMR (measles-mumps-rubella) vaccine: the first dose between 12 and 15 months of age and the second between ages 4 and 6. The child who tested positive had received a dose of the measles vaccine, but is being considered “infectious out of an abundance of caution,” the Maine CDC said. There have been a total of 10 documented cases of measles in eight states this year.


    How vaccines stop the spread of viruses

    ChatGPT, a chatbot powered by artificial intelligence, can pick stocks better than your fund manager, analysts say. A recent experiment found that the bot far outperformed some popular UK investment funds, including funds managed by HSBC and Fidelity. Between March 6 and April 28, a dummy portfolio of 38 stocks gained 4.9% while 10 leading investment funds clocked an average loss of 0.8%, the results showed. The analysts asked ChatGPT to select stocks based on some common criteria, including picking companies with a low level of debt and a track record of growth. Microsoft, Netflix, and Walmart were among the companies selected. While major funds have used AI for years to support their investment decisions, analysts say ChatGPT has put the technology in the hands of the general public — and it’s showing that the tool can disrupt the finance industry.

    MTV Movie & TV Awards 2023: See who won

    Tom Cruise accepted an award for “Top Gun: Maverick” while flying a plane — because he’s Tom Cruise. Here are the other stars who received golden popcorn statuettes on Sunday.

    A mother-daughter moment: Regal twinning at coronation catches eyes

    Catherine, Princess of Wales, and her daughter, Princess Charlotte, made a statement in matching silver headpieces. See the photo here.

    Bronny James, son of NBA superstar LeBron James, commits to the University of Southern California

    The NBA’s all-time leading scorer made headlines last year when he said he wanted to play his final season in the league alongside his son Bronny. The father-son duo is now one step closer to that reality.

    ‘Saturday Night Live’ didn’t air a new episode this past weekend

    Former cast member Pete Davidson was set to return as host for “SNL” but things didn’t go as planned due to the ongoing film and TV writers strike.

    Climate activists dye iconic Italian fountain water black

    Onlookers snapped pictures as protesters were arrested for defacing this popular monument.

    111 degrees Fahrenheit

    That’s how high temperatures reached in Vietnam over the weekend, the highest ever recorded in the country. Neighboring Laos and Thailand also recently shattered various temperature records as a brutal heat wave continues to grip Southeast Asia. 

    “This tangled web around Justice Clarence Thomas just gets worse and worse by the day.”

    — Senate Judiciary Chair Dick Durbin, telling CNN on Sunday that “everything is on the table” as the panel scrutinizes new ethics concerns around Supreme Court Justice Clarence Thomas. The conservative justice is receiving criticism after a bombshell ProPublica report detailed he accepted several lavish trips and gifts from GOP megadonor Harlan Crow. Thomas also accepted free rent from the Republican billionaire for his mother and allowed him to pay the boarding school tuition for his grandnephew, according to ProPublica.


    ‘It embarrasses me’: Senate Judiciary chair on Justice Thomas revelations


    Parrots learn to call their feathered friends on video chat

    These parrots were taught to ring a bell whenever they want to caw their fellow bird friends! See them in action.



  • Fuzzy first photo of a black hole gets a sharp makeover | CNN


    Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more.



    CNN
     — 

    The first photo ever taken of a black hole looks a little sharper now.

    Originally released in 2019, the historic image of the supermassive black hole at the center of the galaxy Messier 87 captured an essentially invisible celestial object using direct imaging.

    The image presented the first direct visual evidence that black holes exist, showing a central dark region encircled by a ring of light that appears brighter on one side. Astronomers nicknamed the object the “fuzzy, orange donut.”

    Now, scientists have used machine learning to give the image a cleaner upgrade that looks more like a “skinny” donut, researchers said. In the new image, the central region is darker and larger, surrounded by a bright ring of hot gas falling into the black hole.

    In 2017, astronomers set out to observe the invisible heart of the massive galaxy Messier 87, or M87, near the Virgo galaxy cluster 55 million light-years from Earth.

    The Event Horizon Telescope Collaboration, called EHT, is a global network of telescopes that captured the first photograph of a black hole. More than 200 researchers worked on the project for more than a decade. The project was named for the event horizon, the proposed boundary around a black hole that represents the point of no return where no light or radiation can escape.

    To capture an image of the black hole, scientists combined the power of seven radio telescopes around the world using very-long-baseline interferometry, or VLBI, according to the European Southern Observatory, which is part of the EHT. This array effectively created a virtual telescope about the size of Earth.
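The resolving power of that Earth-sized virtual telescope can be sanity-checked with the standard diffraction-limit estimate, where angular resolution scales as observing wavelength divided by the longest baseline. The numbers below are rough, commonly cited values (the EHT's ~1.3 mm observing wavelength and Earth's diameter), not figures taken from the article:

```python
import math

# Back-of-envelope diffraction limit for an interferometer:
# angular resolution (radians) ~ observing wavelength / longest baseline.
wavelength_m = 1.3e-3   # EHT observes at roughly 1.3 mm
baseline_m = 1.2742e7   # longest possible baseline ~ Earth's diameter

theta_rad = wavelength_m / baseline_m
theta_uas = math.degrees(theta_rad) * 3600 * 1e6  # convert to microarcseconds

print(f"~{theta_uas:.0f} microarcseconds")  # on the order of tens of microarcseconds
```

That is roughly the angular scale of the ring around M87's black hole, which is why an array any smaller than Earth-sized could not have resolved it.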

    Data from the original 2017 observation was combined with a machine learning technique to capture the full resolution of what the telescopes saw for the first time. The new, more detailed image, along with a study, was released on Thursday in The Astrophysical Journal Letters.

    “With our new machine learning technique, PRIMO, we were able to achieve the maximum resolution of the current array,” said lead study author Lia Medeiros, astrophysics postdoctoral fellow in the School of Natural Sciences at the Institute for Advanced Study in Princeton, New Jersey, in a statement.

    “Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behavior. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”

    Medeiros and other EHT members developed Principal-component Interferometric Modeling, or PRIMO. The algorithm relies on dictionary learning, in which computers derive rules from large amounts of training material. A computer shown a series of images of different bananas, for example, might learn to tell whether an unknown image does or doesn’t contain a banana.

    Computers using PRIMO analyzed more than 30,000 high-resolution simulated images of black holes to pick out common structural details. This essentially allowed the machine-learning algorithm to fill in the gaps in the original image.
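The gap-filling idea can be illustrated with a toy numpy sketch. This is an assumption-laden stand-in, not PRIMO itself: PCA components play the role of the learned dictionary, and random low-dimensional vectors stand in for the simulated black hole images. Components learned from many complete training examples are fitted to the measured half of a new example, then used to predict its missing entries:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, dim, n_components = 500, 64, 8

# Hypothetical training set: "images" (flat vectors) sharing hidden structure.
basis = rng.normal(size=(n_components, dim))
train = rng.normal(size=(n_train, n_components)) @ basis

# "Learn the dictionary": principal components of the training set.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
atoms = vt[:n_components]

# A new example with half its entries unmeasured, like sparse
# telescope coverage of the image plane.
truth = rng.normal(size=n_components) @ basis
mask = rng.random(dim) < 0.5  # True = missing

# Fit dictionary coefficients to the measured entries only,
# then reconstruct the full vector from those coefficients.
coef, *_ = np.linalg.lstsq(atoms[:, ~mask].T, (truth - mean)[~mask], rcond=None)
reconstructed = mean + coef @ atoms

gap_error = float(np.abs((reconstructed - truth)[mask]).mean())
print(f"mean error on filled-in entries: {gap_error:.2e}")
```

Because the toy data really does live in the learned low-dimensional structure, the filled-in entries come out essentially exact; real image data is messier, which is why PRIMO trains on tens of thousands of physically simulated images.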

    “PRIMO is a new approach to the difficult task of constructing images from EHT observations,” said Tod Lauer, an astronomer at the National Science Foundation’s National Optical-Infrared Astronomy Research Laboratory, or NOIRLab. “It provides a way to compensate for the missing information about the object being observed, which is required to generate the image that would have been seen using a single gigantic radio telescope the size of the Earth.”

    Black holes are made up of huge amounts of matter squeezed into a small area, according to NASA, creating a massive gravitational field that draws in everything around it, including light. These powerful celestial phenomena also have a way of superheating the material around them and warping space-time.

    Material accumulates around black holes, is heated to billions of degrees and reaches nearly the speed of light. Light bends around the gravity of the black hole, which creates the photon ring seen in the image. The black hole’s shadow is represented by the dark central region.

    The visual confirmation of black holes also acts as confirmation of Albert Einstein’s theory of general relativity. In the theory, Einstein predicted that dense, compact regions of space would have such intense gravity that nothing could escape them. But if heated materials in the form of plasma surround the black hole and emit light, the event horizon could be visible.

    The new image can help scientists make more accurate measurements of the black hole’s mass. Researchers can also apply PRIMO to other EHT observations, including those of the black hole at the center of our Milky Way galaxy.

    “The 2019 image was just the beginning,” Medeiros said. “If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights.”


  • Video: A pause on AI development, why it’s the worst time to buy a car in decades on CNN Nightcap | CNN Business


    The dangers of AI, the worst time to buy a car in decades, and the next Elizabeth Holmes?

    NYU’s Gary Marcus tells “Nightcap’s” Jon Sarlin why he signed an open letter calling for a six-month pause on AI development. Plus, CNN’s Peter Valdes-Dapena explains why car prices may never go back to where they were pre-Covid. And Forbes’ Alexandra Levine details the arrest of Charlie Javice, the 31-year-old fintech founder who sold her company to JPMorgan and now stands accused of fraud. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • When you’re talking to a chatbot, who’s listening? | CNN Business



    New York
    CNN
     — 

    As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

    Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

    It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

    The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

    And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

    “The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

    With ChatGPT, which launched to the public in late November, users can generate essays, stories and song lyrics simply by typing up prompts.

    Google and Microsoft have since rolled out AI tools as well, which work the same way and are powered by large language models that are trained on vast troves of online data.

    When users input information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly high concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

    Steve Mills, the chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

    “You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

    If the data people input is being used to further train these AI tools, as many of the companies behind the tools have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

    OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people that use its services. It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

    The privacy policy states it may provide personal information to third parties without further notice to the user, unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that’s likely because this has pretty much become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.

    OpenAI also published a new blog post Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

    Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”

    “These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”
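To make the idea of automated PII removal concrete, here is a deliberately crude, hypothetical sketch. The patterns and labels are illustrative assumptions only; production systems, including whatever Google actually runs on Bard conversations, are far more sophisticated than a few regexes:

```python
import re

# Illustrative PII patterns (assumptions, not any vendor's real rules):
# mask a few obvious formats before text is logged or shared.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched pattern with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or 555-867-5309."))
# → Reach me at [EMAIL] or [PHONE].
```

Even a sketch like this shows why vendors hedge: pattern-based scrubbing misses names, addresses, and anything in an unexpected format, which is why the warning not to paste identifying information into prompts remains.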

    Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts or delete Bard conversations via this link. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.

    “We’re still sort of learning exactly how all this works,” Mills told CNN. “You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”

    Mills added that sometimes users and developers don’t even realize the privacy risks lurking in new technologies until it’s too late. One example he cited was early autocomplete features, some of which had unintended consequences, such as completing a social security number a user had begun typing — often to the user’s alarm and surprise.

    Ultimately, Mills said, “My view of it right now, is you should not put anything into these tools you don’t want to assume is going to be shared with others.”


  • Using artificial intelligence and archival news articles, this teen found that Black homicide victims were less humanized in news coverage | CNN




    CNN
     — 

    Using artificial intelligence and archival news articles, a teenager in Northern Virginia created a program to measure media biases – and in researching older news articles, she found that Black homicide victims were less likely to be humanized in news coverage.

    Emily Ocasio, an 18-year-old from Falls Church, Virginia, created an AI program that analyzed FBI homicide records between 1976 and 1984 and their corresponding coverage published in The Boston Globe to determine whether victims were presented in a humanizing or impersonal way.

    After analyzing 5,042 entries, the results showed that Black men under the age of 18 were 30% less likely to receive humanizing coverage than their White counterparts, Ocasio told CNN. Black women were 23% less likely to be humanized in news stories, Ocasio added.

    A news article was considered humanizing when it mentioned additional information about the victim and presented them “as a person, not just a statistic,” Ocasio said in her project presentation.

    Her findings have not been reviewed by the larger scientific community, but she told CNN she hopes to expand her research and get it published in a scientific journal.

    Ocasio’s project earned her second place in the prestigious Regeneron Science Talent Search on March 14 as well as a $175,000 scholarship.

    Every year about 1,900 high school students from across the country participate in the competition, which started in 1942 and seeks to serve as a platform for young scientists to share original research.

    Ocasio was among 40 finalists from more than 2,000 applications, according to Maya Ajmera, president and CEO of the Society for Science and executive publisher of Science News, two of the competition’s sponsors.

    “By using AI to document these biases, Emily shows that it can be safely used to help society answer complex social science questions,” her biography on the Society for Science website says.

    Ocasio said she has always been interested in social justice and science and saw this project as an opportunity to combine them. “Without the research, and without the statistics, you have no ability of understanding that entire communities are being left behind,” she said.

    Ocasio analyzed The Boston Globe’s news coverage because the newspaper had digital copies of its articles from the 1970s and ’80s, the period she focused on for her project, she said. CNN has reached out to The Boston Globe for comment.

    Despite her findings, Ocasio believes science can’t explain everything: “You can never run an experiment in a lab that tells you about how racism works in society.”

    Ocasio, who has Puerto Rican heritage, said her own experiences helped shape her perspective of different races and cultures, and drew her to researching racism and inequalities. She wants to replicate her research to analyze other news outlets as well, she said.

    The talent search’s first-place winner, Neel Moudgal, told CNN the research done by the teenagers across the US is essential to helping solve some of society’s greatest challenges.

    Neel Moudgal won first place in the Regeneron Science Talent Search.

    “I firmly believe that science is going to be the solution to a lot of our problems,” Moudgal said. His prize-winning project was a computer model that predicts the structure of RNA molecules to help develop tests and drugs for diseases such as cancer, autoimmune diseases, and viral infections.

    Ajmera said seeing such projects from high school students gives her “an enormous hope for the future.”

    “We’re looking for the future scientific leaders of this country,” she said.


  • Google begins rolling out its ChatGPT rival | CNN Business




    CNN
     — 

    Google is opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT.

    Starting Tuesday, users can join a waitlist to gain access to Bard, which promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    A company representative told CNN it will be a separate, complementary experience to Google Search, and users can also visit Search to check its responses or sources. Google said in a blog post it plans to “thoughtfully” add large language models to search “in a deeper way” at a later time.

    Google said it will start rolling out the tool in the United States and United Kingdom, and plans to expand it to more countries and languages in the future.

    The news comes as Google, Microsoft, Facebook and other tech companies race to develop and deploy AI-powered tools in the wake of the recent, viral success of ChatGPT. Last week, Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs. Shortly after, Microsoft announced a similar AI upgrade to its productivity tools.

    Google unveiled Bard last month in a demo that was later called out for providing an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts. The immense attention on ChatGPT reportedly prompted Google’s management to declare a “code red” situation for its search business.

    But Bard’s blunder highlighted the challenge Google and other companies face with integrating the technology into their core products. Large language models can present a handful of issues, such as perpetuating biases, being factually incorrect and responding in an aggressive manner.

    Google acknowledged in the blog post Tuesday that AI tools are “not without their faults.” The company said it continues to use human feedback to improve its systems and add new “guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

    Last week, OpenAI released GPT-4, the next-generation version of the technology that powers ChatGPT and Microsoft’s new Bing, with similar safeguards. In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.


  • The way we work is about to change | CNN Business



    New York
    CNN
     — 

    In just a few months, you’ll be able to ask a virtual assistant to transcribe meeting notes during a work call, summarize long email threads to quickly draft suggested replies, create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    And that’s just on Microsoft’s 365 platforms.

    Over the past week, a rapidly evolving artificial intelligence landscape seemed to leap ahead again. Microsoft and Google each unveiled new AI-powered features for their signature productivity tools and OpenAI introduced its next-generation version of the technology that underpins its viral chatbot tool, ChatGPT.

    Suddenly, AI tools, which have long operated in the background of many services, are now more powerful and more visible across a wide and growing range of workplace tools.

    Google’s new features, for example, promise to help “brainstorm” and “proofread” written work in Docs. Meanwhile, if your workplace uses popular chat platform Slack, you’ll be able to have its ChatGPT tool talk to colleagues for you, potentially asking it to write and respond to new messages and summarize conversations in channels.

    OpenAI, Microsoft and Google are at the forefront of this trend, but they’re not alone. IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    The pitch from tech companies is clear: AI can make you more productive and eliminate the grunt work. As Microsoft CEO Satya Nadella put it during a presentation on Thursday, “We believe this next generation of AI will unlock a new wave of productivity growth: powerful copilots designed to remove the drudgery from our daily tasks and jobs, freeing us to rediscover the joy of creation.”

    But the sheer number of new options hitting the market is dizzying and, as with so much else in the tech industry over the past decade, raises questions about whether they will live up to the hype or cause unintended consequences, including enabling cheating and eliminating the need for certain roles (though that may be the intent of some adopters).

    Even the promise of greater productivity is unclear. The rise of AI-generated emails, for example, might boost productivity for the sender but decrease it for recipients flooded with longer-than-necessary computer-generated messages. And of course just because everyone has the option to use a chatbot to communicate with colleagues doesn’t mean all will choose to do so.

    Integrating this technology “into the foundational pieces of productivity software that most of us use every day will have a significant impact on the way we work,” said Rowan Curran, an analyst at Forrester. “But that change will not wash over everything and everyone tomorrow — learning how to best make use of these capabilities to enhance and adjust our existing workflows will take time.”

    Anyone who has ever used an autocomplete option when typing an email or sending a message has already experienced how AI can speed up tasks. But the new tools promise to go far beyond that.

    The renewed wave of AI product launches kicked off nearly four months ago when OpenAI released a version of ChatGPT on a limited basis, stunning users by generating human-sounding responses to prompts, passing exams at prestigious universities and writing compelling essays on a range of topics.

    Since then, the technology — which Microsoft made a “multibillion dollar” investment in earlier this year — has only improved. Earlier this week, OpenAI unveiled GPT-4, a more powerful version of the technology that underpins ChatGPT, and which promises to blow previous iterations out of the water.

    In early tests and a company demo, GPT-4 was used to draft lawsuits, build a working website from a hand-drawn sketch and recreate iconic games such as Pong, Tetris or Snake with very little to no prior coding experience.

    GPT-4 is a large language model that has been trained on vast troves of online data to generate responses to user prompts.
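As a toy illustration of that next-token prediction loop (an analogy only, not GPT-4's actual architecture), a character-level bigram model captures the core idea: count which token tends to follow which in the training data, then sample from those counts to generate text. Large language models do the same prediction with billions of learned parameters instead of a frequency table:

```python
import random
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for vast troves of online data.
corpus = "the model predicts the next token given the tokens before it"

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(start: str, n: int, seed: int = 0) -> str:
    """Sample n characters, each conditioned on the previous one."""
    rng = random.Random(seed)
    out = start
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:          # no observed successor: stop
            break
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(generate("t", 20))
```

The output is statistically plausible gibberish, which is the point: scale the context window, the parameter count, and the training data up enormously and the same predict-the-next-token objective starts producing coherent prose.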

    It’s the same technology that underpins two new Microsoft features: “Co-pilot,” which will help edit, summarize, create and compare documents across its platforms, and Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data.

    The agent will know, for example, what’s in a user’s email and on their calendar for the day, as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.

    Curran said just how much these AI-powered tools will change work depends on the application. For example, a word processing application could help generate outlines and drafts, a slideshow program may help speed along the design and content creation process, and a spreadsheet app should help more users interact with data and make data-driven decisions. The latter, he believes, will make the most significant impact on the workplace in both the short and long term.

    The discussion of how these technologies will impact jobs “should focus on job tasks rather than jobs as a whole,” he said.

    Although OpenAI’s GPT-4 update promises fixes for some of the technology’s biggest challenges — from perpetuating biases to stating falsehoods and responding in an aggressive manner — there’s still the possibility for some of these issues to find their way into the workplace, especially when it comes to interacting with others.

    Arijit Sengupta, CEO and founder of AI solutions company Aible, said a problem with any large language model is that it tries to please the user and typically accepts the premise of the user’s statements.

    “If people start gossiping about something, it will accept it as the norm and then start generating content [related to that],” said Sengupta, adding that it could escalate interpersonal issues and turn into bullying at the office.

    In a tweet earlier this week, OpenAI CEO Sam Altman wrote the technology behind these systems is “still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.” The company reiterated in a blog post that “great care should be taken when using language model outputs, particularly in high-stakes contexts.”

    Arun Chandrasekaran, an analyst at Gartner Research, said organizations will need to educate their users on what these solutions are good at and what their limitations are.

    “Blind trust in these solutions is as dangerous as complete lack of faith in the effectiveness of it,” Chandrasekaran said. “Generative AI solutions can also make up facts or present inaccurate information from time to time – and organizations need to be prepared to mitigate this negative impact.”

    At the same time, many of these applications are not up to date (GPT-4’s training data cuts off around September 2021). The onus will be on users to do everything from double-checking accuracy to adjusting the language to reflect the tone they want. Getting buy-in and support across workplaces will also be important for the tools to take off.

    “Training, education and organizational change management is very important to ensure that employees are supportive of the efforts and the tools are used in the way they were intended to,” Chandrasekaran said.


  • Microsoft is bringing ChatGPT technology to Word, Excel and Outlook | CNN Business




    CNN
     — 

    Microsoft on Thursday outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day.

    At an event on Thursday, the company announced that Microsoft 365 users will soon be able to use what the company is calling an AI “Copilot,” which will help edit, summarize, create and compare documents. But don’t call it Clippy. The new features, which are built on the same technology that underpins ChatGPT, are far more powerful (and less anthropomorphized) than their wide-eyed, paperclip-shaped predecessor.

    With the new features, users will be able to transcribe meeting notes during a Teams call, summarize long email threads to quickly draft suggested replies, ask for a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Microsoft is also introducing a concept called Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data. The agent will know what’s in a user’s email and on their calendar for the day as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.

    Microsoft’s announcement comes a month after it brought similar AI-powered features to Bing and amid a renewed arms race in the tech industry to develop and deploy AI tools that can change how people work, shop and create. Earlier this week, rival Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs.

    The news also comes two days after OpenAI, the company behind Microsoft’s artificial intelligence technology and the creator of ChatGPT, unveiled its next-generation model, GPT-4. The update has stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    OpenAI said it added more “guardrails” to keep conversations on track and has worked to make the tool less biased. But the update, and the moves by larger tech companies to integrate this technology, could add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology. Microsoft’s new Bing browser has already been using GPT-4, for better or worse.

    A Microsoft spokesperson said 365 users accessing the new AI tools should be reminded the technology is a work in progress and information will need to be double checked. Although OpenAI has made vast improvements to its latest model, GPT-4 has similar limitations to previous versions. The company said it can still make “simple reasoning errors” or be “overly gullible in accepting obvious false statements from a user,” and does not fact check.

    Still, Microsoft believes the changes will meaningfully improve the experience of people at work by making tasks easier and less tedious, freeing them up to be more analytical and creative.


  • Baidu stock rebounds after falling sharply in wake of ChatGPT-style bot demo | CNN Business



    Hong Kong
    CNN
     — 

    Shares in Chinese search giant Baidu rebounded sharply a day after it unveiled ERNIE Bot, its answer to the ChatGPT craze.

    Its stock soared 14.3% on Friday in Hong Kong, making it the biggest winner in the Hang Seng Index (HSI). The shares also gained 3.8% in New York during US trading on Thursday.

    A day earlier, Baidu (BIDU) was the biggest loser on the same index. Its Hong Kong shares fell 6.4% after a public demonstration of its bot failed to impress investors. Since February, more than 650 companies had joined the ERNIE ecosystem, CEO Robin Li said during the presentation.

    The reversal came after the company said more than 30,000 businesses had signed up to test out its chatbot service within two hours of its demonstration.

    “The high degree of enterprise interest is positive, and we expect Baidu to continue to capture China’s enterprise demand for generative AI,” Esme Pau, Macquarie’s head of China and Hong Kong internet and digital assets, told CNN.

    She said the company’s shares were bouncing back Friday as some users, including analysts, shared positive feedback about their own experiences trying out ERNIE, suggesting the bot had more advanced capabilities.

    During the presentation, Baidu showed how its chatbot could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    But its stock slumped on Thursday because the demo was “pre-recorded, and not live, which makes investors skeptical about the robustness of the ERNIE Bot,” according to Pau.

    Baidu’s demonstration also came just days after the launch of GPT-4, which “raised the bar for ERNIE,” she added.

    GPT-4 is the latest version of the artificial intelligence technology behind ChatGPT. The service has impressed users this week with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Pau noted that Baidu’s shares were already “down modestly” before the company showed off its software on Thursday, reflecting pressure from investors whose expectations had risen following the GPT-4 launch.

    “ERNIE also does not have the [same] multilingual capability as GPT-4, and has yet to improve for English queries,” she said. “Also, the ERNIE launch did not provide sufficient quantifiable metrics compared to the GPT-4 launch earlier this week.”

    Like ChatGPT, ERNIE is based on a language model, which is trained on vast troves of data online in order to generate compelling responses to user prompts.

    Li said Baidu’s expectations for ERNIE were “close to ChatGPT, or even GPT-4.”

    But he acknowledged the software was “not perfect yet,” adding it was being launched first to enterprise users. The service is not yet available to the public.

    Baidu announced its chatbot last month. Some critics say the service will add fuel to an existing US-China rivalry in emerging technologies.

    Li tried to shake off that comparison during the launch, saying the bot “is not a tool for the confrontation between China and the United States in science and technology, but a product of generations of Baidu technicians chasing the dream of changing the world with technology.”

    “It is a brand new platform for us to serve hundreds of millions of users and empower thousands of industries,” he said.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses.

    “ERNIE Bot can produce text, images, audio and video given a text prompt, and is even capable of delivering voice in several local dialects such as the Sichuan dialect,” the company said in a statement.

    By comparison, GPT-4 can also analyze photos, but it currently generates only text responses, according to its developer, OpenAI.

    Baidu isn’t the only Chinese firm working on such technology. Last month, Alibaba (BABA) announced plans to launch its own ChatGPT-style tool, adding to the list of tech giants jumping on the chatbot bandwagon.

    So far, Baidu has a first mover advantage in the space in China, according to analysts.

    “Our view is ERNIE is three to six months ahead of its potential contenders,” said Pau.

    — CNN’s Mengchen Zhang contributed to this report.


  • What metaverse? Meta says its single largest investment is now in ‘advancing AI’ | CNN Business




    CNN
     — 

    Roughly a year and a half after Facebook renamed itself “Meta” and said it would go all-in on building a future version of the internet dubbed the metaverse, the tech giant now says its top investment priority will be advancing artificial intelligence.

    In a letter to staff Tuesday, CEO Mark Zuckerberg announced plans to lay off another 10,000 employees in the coming months, and doubled down on his new focus of “efficiency” for the company. The pivot to efficiency, first announced last month in Meta’s quarterly earnings call, comes after years of investing heavily in growth, including in areas with unproven potential like virtual reality.

    Now, Zuckerberg says the company will focus mostly on cutting costs and streamlining projects. Building the metaverse “remains central to defining the future of social connection,” Zuckerberg wrote, but that isn’t where Meta will be putting most of its capital.

    “Our single largest investment is in advancing AI and building it into every one of our products,” Zuckerberg said Tuesday. He nodded to how AI tools can help users of its apps express themselves and “discover new content,” but also said that new AI tools can be used to increase efficiencies internally by helping “engineers write better code faster.”

    The comments come after what the CEO described as a “humbling wake-up call” last year, as the “world economy changed, competitive pressures grew, and our growth slowed considerably.”

    Meta and its predecessor Facebook have been involved in AI research for years, but the remarks come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI publicly released ChatGPT. The technology quickly went viral for its ability to generate compelling, human-sounding responses to user prompts and then kicked off an apparent AI arms race among tech companies. Microsoft announced in early February that it was incorporating the tech behind ChatGPT into its search engine, Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard. And not to be left behind, Meta announced late last month that it was forming a “top-level product group” to “turbocharge” the company’s work on AI tools.

    “I do think it is a good thing to focus on AI,” Ali Mogharabi, a senior equity analyst at Morningstar, told CNN of Zuckerberg’s comments. Mogharabi said Meta’s investment in AI “has benefits on both ends” because it can improve efficiency for engineers creating products, and because incorporating AI features into Meta’s lineup of apps could create more engagement time for users, which can then drive advertising revenue.

    And in the long run, Mogharabi said, “A lot of the investments in AI, and a lot of enhancements that come from those investments in AI, could actually be applicable to the entire metaverse project.”

    But Zuckerberg’s emphasis on investing in AI, and using the buzzy technology’s tools to make the company more efficient and boost its bottom line, is also “what the shareholders and the market want to hear,” Mogharabi said. Many investors had previously griped about the company’s metaverse ambitions and spending. In 2022, Meta lost more than $13.7 billion in its “Reality Labs” unit, which houses its metaverse efforts.

    And investors appear to welcome Zuckerberg’s shift in focus from the metaverse to efficiency. After taking a beating in 2022, Meta’s shares have surged more than 50% since the start of the year.

    Angelo Zino, a senior equity analyst at CFRA Research, said on Tuesday that the second round of layoffs at Meta “officially make us convinced that Mark Zuckerberg has completely switched gears, altering the narrative of the company to one focused on efficiencies rather than looking to grow the metaverse at any cost.”


  • The technology behind ChatGPT is about to get even more powerful | CNN Business




    CNN
     — 

    Nearly four months after OpenAI stunned the tech industry with ChatGPT, the company is releasing its next-generation version of the technology that powers the viral chatbot tool.

    In a blog post on Tuesday, OpenAI unveiled GPT-4, which the company says is capable of performing well on a range of standardized tests and is also less likely to “go off the guardrails” with its responses, as some users have previously experienced.

    OpenAI said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages, according to the company.

    OpenAI described the update as the “latest milestone” for the company. Although it is still “less capable” than humans in many real-world scenarios, it exhibits “human-level performance on various professional and academic benchmarks,” according to the company.

    GPT-4 is the latest version of OpenAI’s large language model, which is trained on vast amounts of online data to generate compelling responses to user prompts. The updated version, which is now available via a waitlist, is already making its way into some third-party products, including Microsoft’s AI-powered Bing.

    “We are happy to confirm that the new Bing is running on GPT-4, which we’ve customized for search,” Microsoft said on Tuesday. “If you’ve used the new Bing preview at any time in the last five weeks, you’ve already experienced an early version of this powerful model.”

    Since its November 2022 launch, ChatGPT has impressed many users with its ability to generate original essays, stories and song lyrics in response to user prompts, but it has also raised some concerns. AI chatbots, including tools from Microsoft and Google, have been called out in recent weeks for being emotionally reactive, making factual errors and engaging in outright “hallucinations,” as the industry calls them.

    GPT-4 has similar limitations as earlier GPT models. “It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it,” Sam Altman, CEO of OpenAI, wrote in a series of tweets Tuesday announcing the update.

    But there are noticeable improvements, he said. “It is more creative than previous models, it hallucinates significantly less, and it is less biased,” he wrote.

    Still, the company said, “great care should be taken when using language model outputs, particularly in high-stakes contexts.”

    The news comes two weeks after OpenAI announced it is opening up access to its ChatGPT tool to third-party businesses, paving the way for the chatbot to be integrated into numerous apps and services.

    Instacart, Snap and tutor app Quizlet are among the early partners experimenting with the tool. In January, Microsoft confirmed it is making a “multibillion dollar” investment in OpenAI and has since rolled out the technology to some of its products, including its search engine Bing.


  • Why you’re about to see ChatGPT in more of your apps | CNN Business




    CNN
     — 

    Prepare to see ChatGPT responses in even more places.

    OpenAI is opening up access to its ChatGPT tool to third-party businesses, paving the way for the viral AI chatbot to be integrated into numerous apps and services.

    The company on Wednesday said developers can now access ChatGPT’s application programming interface, or API, which will allow companies to integrate the tool’s chat functionality and answers into their platforms. Instacart, Snap and tutor app Quizlet are among the early partners experimenting with adding ChatGPT.

    The move comes three months after OpenAI publicly released ChatGPT and stunned many users with the tool’s impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into OpenAI’s API each have slightly different visions for how to incorporate ChatGPT. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Snap, the company behind Snapchat, plans to offer a customizable chatbot that offers recommendations, helps users make plans or even writes a haiku in seconds. Quizlet, which has more than 60 million students using the service, is introducing a chatbot that can ask questions based on study materials to help students prepare for exams.

    Shopify’s consumer app, Shop, and Instacart are both launching chatbots that could help inform customers’ shopping decisions. Instacart plans to use the tool to allow users to ask questions such as “How do I make great fish tacos?” or “What’s a healthy lunch for my kids?” Instacart also plans to launch an “Ask Instacart” chatbot later this year.
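    The mechanics of such an integration are straightforward: an app wraps the user’s question in a chat-style request and sends it to OpenAI’s API. Here is a minimal sketch in Python of how a shopping app might assemble that request; the endpoint, model name and system prompt are illustrative of the API as launched, not any partner’s actual implementation.

    ```python
    import json

    # Illustrative sketch: the request body a developer would POST to OpenAI's
    # chat completions endpoint after gaining API access. The endpoint and
    # model name reflect the API at launch; check OpenAI's current docs.
    API_URL = "https://api.openai.com/v1/chat/completions"

    def build_chat_request(user_question: str) -> dict:
        """Assemble a single-turn chat payload for a shopping-assistant bot."""
        return {
            "model": "gpt-3.5-turbo",  # the ChatGPT model exposed via the API
            "messages": [
                # A system message steers the bot toward the app's use case.
                {"role": "system",
                 "content": "You are a helpful grocery-shopping assistant."},
                {"role": "user", "content": user_question},
            ],
        }

    payload = build_chat_request("How do I make great fish tacos?")
    print(json.dumps(payload, indent=2))
    ```

    The actual HTTP call (authenticated with an API key) and the handling of the model’s reply are omitted; the point is that the integration surface is simply structured messages in, generated text out.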

    There is clearly demand for other businesses to follow suit. Dating website OkCupid has already experimented with using ChatGPT to write matching questions. Other companies like Fanatics have previously expressed interest in using similar technology to power a customer service chatbot.

    “With the level of user interest and use, companies don’t want to be left behind, so there’s a base incentive to embrace new tech to remain competitive,” said Michael Inouye, an analyst at ABI Research. “If users engage more with a service that means more data for advertising, marketing of goods and services, and potentially stronger customer relationships.”

    There are some risks, however. Although ChatGPT has gained significant traction among users, it has also raised some concerns, including about its potential to perpetuate biases and spread misinformation. Some school systems, such as those in New York and Seattle, have banned the use of ChatGPT in the classroom over concerns about students cheating. And JPMorgan Chase is temporarily restricting employee use of ChatGPT, citing compliance concerns about third-party software.


  • Mark Zuckerberg looks to ‘turbocharge’ Meta’s AI tools after viral success of ChatGPT | CNN Business




    CNN
     — 

    Mark Zuckerberg said Meta is creating a new “top-level product group” to “turbocharge” the company’s work on AI tools, as it attempts to keep pace with a renewed AI arms race among Big Tech companies.

    In a Facebook post late Monday, Zuckerberg said the elite new group will initially be formed by pulling together teams across the company currently working on generative AI, the technology that underpins the viral AI chatbot, ChatGPT. This group will be “focused on building delightful experiences around this technology into all of our different products,” Zuckerberg said, starting with “creative and expressive tools.”

    “Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways,” Zuckerberg said. Those AI features may include new Instagram filters as well as chat tools in WhatsApp and Messenger, he said.

    The planned efforts come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI released ChatGPT publicly. The tool quickly went viral for its ability to generate compelling, human-sounding responses to user prompts. Microsoft later announced it was incorporating the tech behind ChatGPT into its search engine Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard.

    Meta, by comparison, has been quiet so far. Yann LeCun, Meta’s chief AI scientist, has expressed some skepticism about the ChatGPT hype. “It’s not a particularly big step towards, you know, more like human level intelligence,” LeCun said in one interview late last month. “From the scientific point of view, ChatGPT is not a particularly interesting scientific advance,” he added.

    Generative AI tools are built on large language models that have been trained on vast troves of online data to create written and visual responses to user prompts. But these systems also have the potential to perpetuate biases and misinformation. Already, both Microsoft and Google’s AI tools have run into controversies for producing some inaccurate or uncanny responses.

    As with Microsoft and Google, there are some risks for Meta in embracing this technology. Last year, before the ChatGPT hype, Meta publicly released an AI-powered chatbot dubbed “BlenderBot 3.” It didn’t take long, however, for the chatbot to start making offensive comments.

    In his post Monday, Zuckerberg said: “We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way.”
