ReportWire

Tag: iab-artificial intelligence

  • Microsoft, Google post strong quarterly sales growth as Big Tech continues its comeback | CNN Business

    New York (CNN) —

    Big tech companies are continuing a turnaround from last year, as Alphabet, Microsoft and Snap kicked off earnings season with strong sales results for the quarter ended in September.

    Google parent company Alphabet on Tuesday reported quarterly sales of $76.69 billion, up 11% from the same period in the prior year. The company also posted profits of $19.69 billion for the quarter.

    Meanwhile, Microsoft posted 13% year-on-year sales growth to $56.5 billion, also beating expectations. Microsoft’s quarterly profits hit $22.3 billion, up 27% from the year-ago period.

    Snapchat parent Snap on Tuesday reported a return to sales growth in the September quarter, after two consecutive quarters of declining sales. The company reported revenue of nearly $1.2 billion, an increase of 5% from the same period in the prior year and ahead of analysts’ projections. The company reported a net loss of $368 million.

    The strong results come after Microsoft, Alphabet, Snap and other tech companies carried out mass layoffs and other cost cutting moves over the past year following a difficult 2022 when advertisers and other clients cut back on their spending due to concerns over the macroeconomic environment.

    Despite beating Wall Street’s sales expectations, shares of Alphabet (GOOGL) and Snap (SNAP) each dipped around 5% in after-hours trading following the reports, although Snap’s quickly regained some ground. Microsoft (MSFT) shares gained around 4% in after-hours trading.

    “Q3 tech season has been quite strong thus far,” Tejas Dessai, research analyst at investment fund GlobalX, said in a statement. “These numbers clearly defy concerns of near term economic weakness looming.”

    Google’s advertising business generated quarterly revenue of $59.6 billion, up from $54.5 billion in the prior year. YouTube ads, meanwhile, garnered some $7.9 billion in revenue, up roughly 12% year-over-year.

    YouTube Shorts, the company’s TikTok competitor, hit a milestone 70 billion daily views last quarter, Alphabet CEO Sundar Pichai said on a call with analysts Tuesday afternoon.

    Google’s cloud business, however, reported revenue of $8.41 billion — missing analysts’ estimates.

    Jesse Cohen, a senior analyst at Investing.com, attributed Alphabet’s after-hours stock fall to the “relatively weak performance in its Google cloud platform, which is at risk of falling further behind [Microsoft’s] Azure and [Amazon’s] AWS.” Still, despite taking a hit in 2022 amid a broader tech sector downturn, shares of Alphabet have climbed roughly 56% since the start of 2023, beating the tech-heavy Nasdaq index.

    Google’s report comes as the tech giant is in the antitrust hot seat. US prosecutors officially opened a landmark antitrust trial against Google last month with sweeping allegations that the company engaged in anticompetitive behavior to maintain its dominance over search. (As the legal showdown rages on, Google has continued to deny allegations that it operated illegally.)

    Google also confirmed last month plans to lay off hundreds of staffers in its recruiting division, as it continues cost cutting efforts in some areas. These more targeted layoffs came after Alphabet in January cut around 12,000 jobs — about 6% of its workforce.

    Still, Google has signaled that it remains committed to investing heavily in generative artificial intelligence technology. Last month, Google rolled out a major expansion of its Bard AI chatbot tool.

    “As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts,” Pichai said on the call. “We remain committed to durably re-engineering our cost base in order to help create capacity for these investments, in support of long-term sustainable financial value.”

    Microsoft’s recent investments in AI technology helped boost its sales in the September quarter, especially in its key cloud division. Sales from Microsoft’s “intelligent cloud” business — its biggest revenue driver — grew 19% from the year-ago quarter to $24.3 billion.

    Revenue from the company’s “productivity and business processes” business, which includes LinkedIn and Office commercial and consumer products, also grew 13% year-over-year to $18.6 billion.

    “Microsoft is firing on all cylinders and AI is clearly driving growth,” Cohen said in a research note following the company’s report. “The results indicated that artificial intelligence products are stimulating sales and already contributing to top and bottom-line growth.”

    But economic jitters among consumers appear to still have some impact on the company’s bottom line. Devices revenue, which includes sales of laptops, tablets and Xbox consoles, decreased 22% year-over-year, despite a 3% sales increase in the overall “more personal computing” segment. Ongoing concerns about a potential economic slowdown could continue to weigh on the company as it heads into the crucial holiday device sales season.

    The report is Microsoft’s first since the company closed its $69 billion acquisition of “Call of Duty” maker Activision Blizzard earlier this month. While the deal didn’t factor into this quarter’s results, it’s expected to supercharge the company’s gaming business.

    “Microsoft now controls 30 game studios and some of the most well-known games across the industry,” Edward Jones analyst Logan Purk said in a research note earlier this month. “With a massive cloud network and now a compelling library of games, Microsoft has a leg up on peers” in gaming, he said.

    Following the Activision takeover, “we’re looking forward to one of our strongest first-party holiday [game] lineups ever, including new titles like Call of Duty Modern Warfare 3,” CEO Satya Nadella said on an analyst call Tuesday. The company said it expects roughly $400 million of operating expenses in the fourth quarter to come as a result of the acquisition.

    Snap said its sales growth was driven in part by its ongoing efforts to revamp its advertising technology, following changes to Apple’s app tracking policies that hurt the business models of Snapchat, Facebook and other platforms.

    “We are focused on improving our advertising platform to drive higher return on investment for our advertising partners, and we have evolved our go-to-market efforts to better serve our partners and drive customer success,” CEO Evan Spiegel said in a statement.

    Snap also reported that it now has 406 million daily active users, up 12% compared to the year-ago quarter. And time spent watching Spotlight — Snapchat’s TikTok clone — grew 200% year-over-year, according to the company.

    The company also recently announced that it had reached more than 5 million subscribers to its Snapchat+ subscription program, a key effort to diversify its revenue.

    Snap said Tuesday that its chief operating officer, Jerry Hunter, plans to retire. Hunter, who spent seven years at the company, will step down from his role as of the end of the month, but will remain at the company until July 1, 2024, to support the transition.

    The company noted that some advertisers temporarily paused their spending following the outbreak of the Israel-Hamas war. Because of the “unpredictable nature” of the war, Snap declined to provide formal guidance for the fourth quarter, but said its internal forecast assumes year-over-year quarterly revenue growth between 2% and 6%.


  • New York City unveils an ‘artificial intelligence action plan’ | CNN Business

    (CNN) —

    The same New York City administration that launched a “Rat Action Plan” is back with an “Artificial Intelligence Action Plan.”

    Mayor Eric Adams on Monday unveiled a citywide AI “action plan” that pledged – in broad brushstrokes – to evaluate AI tools and associated risks, boost AI skills among city employees and support “the responsible implementation of these technologies to improve quality of life for New Yorkers,” according to a statement from the mayor’s office.

    The city’s 51-page AI action plan establishes a series of steps the city will take in the coming years to help better understand and responsibly implement the technology that has taken the tech sector and broader business world by storm in recent months.

    While government use of automated technologies has often courted controversy, New York City’s approach to AI, so far, seems to be focused on laying a framework for future AI use-cases as well as engaging with outside experts and the public.

    The first step listed in the city’s AI action plan is establishing an “AI Steering Committee” of city agency stakeholders. The document goes on to list nearly 40 “actions,” with 29 of those set to be started or completed within the next year. The city said it will publish an annual AI progress report to communicate the city’s updates and implementation of the plan.

    Also on Monday, city officials said the government was piloting the first citywide AI-powered chatbot to help business owners navigate operating and growing businesses in New York City. The AI chatbot, already available in beta on the official city of New York website, was trained on information from more than 2,000 NYC Business web pages.

    The chatbot uses Microsoft’s Azure AI services, per a disclaimer on the tool.

    In a statement announcing the AI action plan, Mayor Adams acknowledged “the potential pitfalls and associated risks these technologies present,” and pledged to be “clear-eyed” about these.

    The mayor also expressed hope that the action plan will “strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”


  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business


    A version of this story first appeared in CNN Business’ Before the Bell newsletter.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100-mile-per-hour winds and flooding that extends hundreds of miles up the East Coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently injected code into their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and the Gothamist, among others, have taken the step to shield themselves.

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, “the largest civil penalty that the Department has ever assessed,” it said in a statement, for lengthy tarmac delays of 43 flights that impacted more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules cap the time passengers can be held without the opportunity to get off, whether prior to takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • AI fears overblown? Theoretical physicist calls chatbots ‘glorified tape recorders’ | CNN Business

    New York (CNN) —

    The public’s anxiety over new AI technology is misguided, according to theoretical physicist Michio Kaku.

    In an interview with CNN’s Fareed Zakaria on Sunday, the futurologist said chatbots like OpenAI’s ChatGPT will benefit society and increase productivity. But fear has driven people to largely focus on the negative implications of the programs, which he terms “glorified tape recorders.”

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    However, he said, chatbots cannot discern true from false: “That has to be put in by a human.”

    According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, “when we computed with sticks, stones, levers, gears, pulleys, string.”

    After that, around World War II, he said, we switched to electricity-powered transistors, which made the development of the microchip possible and helped shape today’s digital landscape.

    But this digital landscape rests on the idea of two states like “on” and “off,” and uses binary notation composed of zeros and ones.

    “Mother Nature would laugh at us because Mother Nature does not use zeros and ones,” Kaku said. “Mother Nature computes on electrons, electron waves, waves that create molecules. And that’s why we’re now entering stage three.”

    He believes the next technological stage will be in the quantum realm.

    Quantum computing is an emerging technology utilizing the various states of particles like electrons to vastly increase a computer’s processing power. Instead of using computer chips with two states, quantum computers use various states of vibrating waves. It makes them capable of analyzing and solving problems much faster than normal computers.

    Several tech giants – IBM (IBM), Microsoft (MSFT), Google (GOOG) and Amazon (AMZN), among others – are developing their own quantum computers, and have granted access to a number of companies to use their technology through the cloud. The computers could help businesses with risk analysis, supply chain logistics, and machine learning.

    But beyond business applications, Kaku said quantum computing could also help advance health care. “Cancer, Parkinson’s, Alzheimer’s disease – these are diseases at the molecular level. We’re powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons.”


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business

    New York (CNN) —

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a well-read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • Four takeaways from Walter Isaacson’s biography of Elon Musk | CNN Business

    (CNN) —

    “You’ll never be successful,” Errol Musk told his 17-year-old son Elon in 1989, as the teenager prepared to fly from South Africa to Canada to find relatives and a college education.

    That’s one of the scenes Walter Isaacson paints in his 670-page biography of Elon Musk, who is now the richest person who ever lived. The biography allows readers new glimpses into the private life of the entrepreneur who popularized electric vehicles for the masses and landed rocket boosters hurtling back to Earth so they could be reused.

    But Musk’s public statements and actions have become increasingly unhinged: he has filed and threatened lawsuits against nonprofits that fight hate speech and allowed some of the internet’s worst actors to regain their platforms.

    Isaacson portrays Musk as a restless genius with a turbulent upbringing on the cusp of launching a new AI company along with his five other companies.

    Musk allowed Isaacson to shadow him for two years but exercised no control over the biography’s contents, the author said.

    Here are four key takeaways.

    Musk’s upbringing and father haunt him

    Isaacson’s book attributes much of Musk’s drive to his upbringing. He recounts the emotional scars inflicted on Musk by his father, which, Isaacson writes, caused Musk to become “a tough yet vulnerable man-child with an exceedingly high tolerance for risk, a craving for drama, an epic sense of mission and a maniacal intensity that was callous and at times destructive.”

    Musk decided to live with his father from age 10 to 17, enduring what Musk and others describe as occasional but regular verbal taunts and abuse. Musk’s sister, Tosca, said Errol would sometimes lecture his children for hours, “calling you worthless, pathetic, making scarring and evil comments, not allowing you to leave.”

    Elon Musk became estranged from his father, though he has occasionally supported his father financially. In a 2022 email sent to Elon Musk on Father’s Day, Errol Musk said he was freezing and lacking electricity, asking his son for money.

    In the letter, Errol made racist comments about Black leaders in South Africa. “With no Whites here, the Blacks will go back to the trees,” he wrote.

    Elon Musk has said that he opposes racism and discrimination, but hate speech has flourished on X, formerly known as Twitter, since he purchased it 11 months ago, according to the Anti-Defamation League. Musk threatened to sue the ADL for defamation last week, arguing that the nonprofit’s statements have caused his company to lose significant advertising revenue.

    Isaacson reported that Errol, in other emails, denounced Covid as “a lie” and attacked Dr. Anthony Fauci, the United States’ former top infectious disease expert who played a prominent role in the government’s fight against the pandemic.

    Elon Musk, similarly, has criticized Fauci and raised many questions about public health policy during the pandemic. But he has said he supports vaccination, even if he doesn’t believe the shots should be mandated.

    Musk’s fluid family and obsession with population

    Musk has a fluid mix of girlfriends, ex-wives, ex-girlfriends and significant others, and he has many children with multiple women. Isaacson’s book revealed Musk had a third child (Techno Mechanicus) with the musician Grimes in 2022, and Musk confirmed the revelation Sunday.

    Musk has frequently stated that humans must become a multiplanetary species, arguing that space exploration will ensure the future of humanity. He has similarly said numerous times that people need to have more children.

    “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk said last year.

    Musk has referred to his desire to increase the global population as an explanation for his unique family situation.

    The book reports that Musk encouraged employees such as Shivon Zilis, a top operations officer at his Neuralink company, to have many children. “He feared that declining birthrates were a threat to the long-term survival of human consciousness,” Isaacson writes.

    Although the book presents their relationship as a platonic work friendship, Musk volunteered to donate sperm to Zilis. She agreed and had twins in 2021 via in vitro fertilization; she did not tell people who the biological father was.

    Zilis and Grimes were friendly, but Musk did not tell Grimes about the twins, according to the book.

    Musk asked Zilis if her twins might like to take his last name. Isaacson reports that Grimes was upset in 2022 when she learned the news that Musk had fathered children with Zilis.

    “Doing my best to help the underpopulation crisis,” Musk tweeted at the time, trying to defuse the tension. “A collapsing birth rate is the biggest danger civilization faces by far.”

    One of Musk’s children, Jenna, often criticized her father’s wealth specifically and capitalism broadly. In 2022, she disowned her father, which Isaacson reports saddened Musk.

    Isaacson reports that Musk’s fractured relationship with Jenna, who is trans, partly led to Musk’s rightward turn toward libertarianism and questioning what he considers the “woke-mind-virus, which is fundamentally antiscience, antimerit, and antihuman.”

    Musk has called into question the use of alternate gender pronouns and made numerous statements some critics consider to be anti-trans.

    “I absolutely support trans, but all these pronouns are an esthetic nightmare,” Musk posted in 2020.

    But in December 2020 he also posted a tweet, since deleted, that said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    Late last year, he tweeted: “My pronouns are Prosecute/Fauci.”

    Purchasing his favorite social media platform, gutting its staff and tinkering with its policies and branding have taken time and resources away from Musk’s other companies and projects, Isaacson reports.

    “I’ve got a bad habit of biting off more than I can chew,” Musk told Isaacson at one point.

    After a protracted legal battle over his decision to purchase Twitter, Musk said he regained his enthusiasm for taking over the company when he realized he wanted to prevent a world where people silo themselves off into echo chambers, preferring instead a world of civil discourse.

    But Isaacson notes “he would end up undermining that important mission with statements and tweets that ended up chasing off progressives and mainstream media types to other social networks.”

    Musk team members, such as his business manager Jared Birchall, his lawyer Alex Spiro and his brother Kimbal, sometimes try to restrain Musk from sending text messages or tweets that could create legal or economic peril, according to the book. On one occasion, friends convinced him to place his phone in a hotel safe overnight, before Musk summoned hotel security to open the safe for him.

    During Christmas in 2022 with his brother, Kimbal warned Elon about how fast he was making enemies. “It’s like the days of high school, when you kept getting beaten up,” he said. Kimbal stopped following Elon on Twitter after his brother’s tweets about Fauci and other conspiracies. “Stop falling for weird s—.”

    Are robocars, an AI company and a robot called Optimus on tap?

    Musk continues moving forward on new engineering projects. Since 2021, Musk has been working on a “humanoid” robot called Optimus that walks on two legs, unlike the four-legged robots coming out of other labs. He unveiled an early version of the Optimus robot in September 2022. Musk told engineers that humanoid robots will “uncork the economy to quasi-infinite levels,” according to Isaacson, by doing jobs humans find dangerous or repetitive.

    Some of Musk’s top engineers are also working on a “robotaxi,” a driverless vehicle that shows up like an Uber. This past summer, he spent hours each week in Texas preparing new factory designs to produce next-generation Tesla cars that would look similar to Tesla’s Cybertruck.

    Musk is also starting his own AI company called X.AI, which he told Isaacson will compete with Google, Microsoft and other companies surging ahead in the past year with public AI projects. Musk had co-founded OpenAI with Sam Altman in 2015 and contributed $100 million to the non-profit. He became angry when Altman converted the project into a for-profit company. Musk also ended a friendship with Larry Page when the two disagreed on AI. According to the book, Musk believes he has a better vision for AI and humanity and thinks the data he owns from Tesla and Twitter will be an asset to his next AI plans.

    “Could you get the rockets to orbit or the transition to electric vehicles without accepting all aspects of him, hinged and unhinged?” Isaacson asks in the last chapter.


  • Microsoft Outlook will soon write emails for you | CNN Business



    New York
    CNN
     — 

    Artificial intelligence could soon be writing more company emails in Microsoft Outlook, as the company expands its rollout of AI tools for corporate users.

    The Microsoft 365 Copilot tool – “your everyday AI companion,” as the company bills it – will help users write their emails to “keep your sentences concise and error-free.” The tool also summarizes long email threads to quickly draft suggested replies.

    Users with Microsoft 365 Personal or Family subscriptions will get more advanced AI help through Microsoft Editor, an intelligent writing assistant. The update will include suggested edits for “clarity, conciseness, inclusive language and more” to help workers create more “polished and professional” emails, according to a blog post from the company in September.

    The company said the tool will be available to more corporate clients starting on November 1. It has already been in months-long testing with customers including Visa, General Motors, KPMG and Lumen Technologies.

    In March, Microsoft outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day. The addition of its AI-powered “copilot” – which will help edit, summarize, create and compare documents – is built on the same technology that underpins ChatGPT.

    In addition to writing emails, Microsoft 365 users will be able to summarize meetings and create suggested follow-up action items, request to create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Corporate customers will also get to use Microsoft 365 Chat, previously called Business Chat, which can scan the internet and employee emails, meetings, chats and files, acting as a sort of personalized secretary.

    The expansion will come less than a year after OpenAI publicly released viral AI chat tool ChatGPT, which stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    In the months since, many other companies have rolled out features built on or similar to the technology. Microsoft rival Google, for example, has also brought AI to its productivity tools, including Gmail, Sheets and Docs.


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




    CNN
     — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained on the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about Glaze, a tool developed by computer scientists at the University of Chicago that thwarts AI models’ attempts to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
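    What Zhao describes is, at heart, a well-known idea from adversarial machine learning: find the smallest pixel change that most shifts what the model “sees.” The sketch below is not Glaze’s actual algorithm; everything in it (the linear `style_embedding`, the weights, the `eps` budget) is an invented toy. But it illustrates the trade-off: against a linear feature, a change of at most `eps` per pixel can still move the model-facing feature by `eps` times the sum of the absolute weights.

    ```python
    def style_embedding(pixels, weights):
        """Stand-in for an AI model's style feature: a linear score w.x.
        Real models are deep networks; a linear one keeps the math visible."""
        return sum(w * p for w, p in zip(weights, pixels))

    def cloak(pixels, weights, eps=2):
        """Shift each pixel by at most `eps` (imperceptible on a 0-255 scale)
        in the direction that most changes the embedding. For a linear
        feature, that direction is simply eps * sign(weight)."""
        cloaked = []
        for p, w in zip(pixels, weights):
            step = eps if w > 0 else -eps
            cloaked.append(max(0, min(255, p + step)))
        return cloaked

    # Tiny demo: a 6-pixel "image" and arbitrary made-up feature weights.
    image   = [120, 64, 200, 33, 180, 90]
    weights = [3.0, -1.5, 2.0, -4.0, 0.5, -2.5]

    protected = cloak(image, weights)
    pixel_change   = max(abs(a - b) for a, b in zip(image, protected))
    feature_change = abs(style_embedding(protected, weights)
                         - style_embedding(image, weights))
    print(pixel_change)    # → 2: no pixel moved by more than the budget
    print(feature_change)  # → 27.0: yet the model-facing feature shifted a lot
    ```

    Glaze computes its perturbations by optimizing against deep feature extractors rather than a closed-form sign rule, but the principle is the same: a tight per-pixel budget can still produce a large shift in what the model perceives.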

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version of the tool as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest resolution of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and that regulation is needed on how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI images is threatening the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake image, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers' tool, PhotoGuard, would block an attempt by AI models to edit the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington
    CNN
     — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks acquired by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Senate Russell office building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s session “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Adobe previews new AI editing tools | CNN Business



    New York
    CNN
     — 

    Photo-editing software maker Adobe unveiled a slew of new AI-powered tools and features last week at its annual Max event, including a dress that transforms into a wearable screen and streamlined ways to delete elements from photos.

    The company previewed a series of prototype tools that make use of both generative AI and 3D image technology in the Adobe MAX Sneaks showcase. Covering photo, audio, video, 3D, fashion and design, the new capabilities are meant to give the public a sneak peek at early-stage ideas that might one day become widely used components of Adobe products.

    A highlight of the event was Adobe’s Project Primrose, an interactive dress that shifts into different colors and patterns as it’s worn.

    Other previewed items include Project Stardust, a tool that automatically detects each object in an image and lets users perform a variety of tasks. For example, it can spot a suitcase within a photo so it can be moved or deleted, or it can predict and prompt likely tasks, such as deleting people from the background of an image.

    A screenshot of Project Stardust, a tool unveiled as part of Adobe's annual MAX Sneaks showcase.

    Also on display was Project Dub Dub Dub, technology that can automatically dub a video’s audio into all supported languages while preserving the speaker’s voice, as was a new tool that shows Adobe users what applying the text-to-image generative AI tool Firefly to videos might look like.

    Adobe first began adding Firefly into a Photoshop beta app in May, with the goal of “dramatically accelerating” how users edit their photos. It allows users to add or delete elements from images with just a text prompt. It can also match the lighting and style of the existing images automatically, the company said.


  • Modern romance: falling in love with AI | CNN Business



    New York
    CNN
     — 

    Alexandra is a very attentive girlfriend. “Watching CUBS tonight?” she messages her boyfriend, but when he says he’s too busy to talk, she says, “Have fun, my hero!”

    Alexandra is not real. She is a customizable AI girlfriend on dating site Romance.AI.

    As artificial intelligence seeps into seemingly every corner of the internet, the world of romance is no refuge. AI is infiltrating the dating app space – sometimes in the form of fictional partners, sometimes as advisor, trainer, ghostwriter or matchmaker.

    Established players in the online dating business like Tinder and Hinge are integrating AI into their existing products. New apps like Blush, Aimm, Rizz and Teaser AI (most of them free or with many free features) offer completely new takes on virtual courtship. Some use personality tests and analysis of a user’s physical type to train AI-powered systems – and promise higher chances of finding a perfect match. Other apps act as Cyrano de Bergerac, employing AI to whip up the most appealing response to a potential match’s query: ‘What’s your favorite food?’ or ‘a typical Sunday?’

    Around half of all adults under 30 have used a dating site or app, according to 2023 Pew Research findings – but nearly half of users report their experience as being negative. Empty conversations, few matches and endless swiping leave many users single and unhappy with apps – problems that many in the AI dating app field say could be solved with the technology, making people less lonely and fostering easier, deeper connections.

    Of course, the average online dater now has other issues to deal with, having to wonder if the person they are speaking with might be relying entirely on AI-generated conversation. And is it even possible that a computer can identify a potential love connection? Is it a way of cheating the dating game?

    “It’s like saying using a word processor is like cheating on generating a novel. In so many ways this is just a new tool that enables people to be faster and more creative. AI is just honestly no different from sending a friend a gif or a meme. You’re taking existing content, and you’re repurposing it to connect with somebody,” Dmitri Mirakyan, co-founder of AI dating conversation app YourMove.AI, told CNN. “The world’s becoming a more lonely place, and I think AI could make that easier and better for people.”

    And many people seem ready for AI to take part in their online dating life. A March study by cybersecurity and digital privacy company Kaspersky found 75% of dating app users are willing to use ChatGPT, an AI-powered chatbot, to deliver the perfect line.

    “There is a growing fatigue with dating apps right now as there is a lot of pressure on people to be ‘original’ and cut through the noise created by the continuous choice being offered to single people – unfortunately dating has become a numbers game,” Crystal Cansdale, dating expert at global dating app Inner Circle, commented on the study.

    Founders of the new apps say they are doing a fair share of good. Here are a few of the ways AI apps are now trying to help you fall in love:

    Try Rizz.app, Teaser AI or YourMove.AI.

    Founders and designers of these apps say people find starting and keeping conversations going the most challenging part of the process. “Dating app conversations are exhausting,” reads YourMove.AI’s homepage. “We can make it easier. So you can spend less time texting, and more time dating.”

    Rizz.app and YourMove.AI allow users to upload words or screenshots, receiving a witty AI-generated response to be used either to create their own dating app profile, respond to someone else’s or just keep a conversation going. Mirakyan says he was hoping to help people like himself who have struggled in social situations.

    “I was a really freaking awkward kid…I couldn’t really read social cues, but I remember reading this book called ‘Be More Chill’ about a computer that you could put into your ear that would tell you what to say so that you could sound cool and fit in,” Mirakyan told CNN. “It feels like it’s an opportunity to really make a difference with this fairly large subset of people that for various reasons find the current social environment challenging.”

    Teaser.AI is a new stand-alone dating app from the makers of viral camera app Dispo, and it adds an unusual twist. Users build a standard profile – but also select personality traits for the AI bot they train. (Options include “traditional,” “toxic,” and “unhinged.”) When matching with another person, users first get to read a conversation between the two AIs they’ve created to “simulate [what] a potential conversation between you two might look like,” according to the app. Once a human messages, the bots take a back seat.


    “We see it as an improvement, a tweak of the current dating app ecosystem,” Teaser.AI co-founder and CEO Daniel Liss told CNN. “So many of those apps, it feels, are not really designed to get you out there meeting people. They’re designed to keep you on the app for as long as possible. So for us, we view this technology as a way to give people a nudge… just starting that conversation and creating connection.”

    Find out on dating apps Iris and Aimm.

    These apps are among those using AI technology to better pair potential couples, relying on gathered data to determine how compatible two people are.

    Dating app Iris is all about AI-determined mutual attraction. It initiates new members by putting them through “training” where they are shown faces of “people” of their desired gender – some stock images, others AI-generated – and prompted to hit “Pass,” “Maybe,” or “Like.” The app uses the information to learn a user’s physical type, then only offers potential matches with a high data-backed chance of mutual attraction and lower odds of rejection.

    Also hoping that AI can find better matches is Aimm, a full service digital matchmaker that uses a virtual assistant to perform intense personality assessments before conducting a matchmaking process to find an optimal match. Founder Kevin Teman says the technology is really good at putting two people together who have the possibility to fall in love – but that it can only go so far.

    “The tug of war that I see is thinking ‘how can a computer be able to know what real human love is,’ and the way people assess whether they’re in love with somebody may not be able to translate perfectly into a machine,” Teman told CNN.

    Try Blush or RomanticAI. These startups offer an array of AI potential matches, digital girlfriends and boyfriends that users can chat with.

    Both apps market themselves as places to practice relationship skills, giving users a chance to converse with bots in a romantic environment. Blush uses a traditional dating app set-up, letting users swipe, chat with matches and even go on virtual dates. Before entering the app, users get a warning: “Be aware that AI can say triggering, inappropriate, or false things.”

    Blush reports that their audience is mostly men and largely people in their early 20s who are struggling to connect romantically with others. “A lot of people reported that exploring different romantic relationships or dating scenarios with AI really helped them first boost their own confidence and feel like they feel more prepared to be dating, which I think especially after COVID was definitely a problem for many of us,” Blush’s chief product officer Rita Popova told CNN.

    RomanticAI is set up more like a chat room, offering several male and female bots to choose from, though there is a much larger selection of female options, including Mona Lisa and the Ancient Egyptian queen Nefertiti. The bots have bios with interests, careers and body types, giving users a multi-faceted idea of a person while chatting.

    It creates a “safe space for any kind of desire, any kind of sexuality relief or something like that. AI is giving the ultimate acceptance of whatever you want to bring over there,” COO Tanya Grypachevskaya told CNN.

    RomanticAI has over one million monthly users using the app for over an hour a day on average, according to the company.

    One user left a rave review after using the app to find closure after a breakup. “He created a custom-made character with traits similar in personality to his ex-girlfriend. He talked to it and was able to say all of the things he wanted to say but didn’t have the opportunity to before. So the whole review was about ‘guys, thank you so much. It really gave me an opportunity to close this chapter of my life and move on,’” said Grypachevskaya.


  • Google rolls out a major expansion of its Bard AI chatbot | CNN Business



    New York (CNN) —

    Google’s Bard artificial intelligence chatbot is evolving.

    The company on Tuesday announced a series of updates to Bard that will give the chatbot access to Google’s full suite of tools — including YouTube, Google Drive, Google Flights and others — to assist users in a wider variety of tasks. Users will be able, for example, to ask Bard to plan an upcoming trip, complete with real flight options. Or a user could ask the tool to summarize meeting notes made in a recent Google Drive document.

    The connections to Google’s other services are just some of the improvements to Bard coming Tuesday. Other updates include the ability to communicate with the chatbot in multiple languages, new fact-checking capabilities and a broad update to the large language model that the tool is built on.

    The new features mark the biggest update to Google’s Bard in the six months since it was widely released to the public.

    The update comes as Google and other tech giants, including Microsoft and ChatGPT maker OpenAI, race to roll out increasingly sophisticated consumer-facing AI technologies, and to convince users that such tools are more than just a gimmick. Google — which earlier this year reportedly issued an internal “code red” after OpenAI beat it to the release of its AI chatbot — is now flexing the power of its other, widely used software programs that can make Bard more useful.

    “These services in conjunction with one another are very, very powerful,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN ahead of the launch. “Bringing all the power of these tools together will save people time — in 20 seconds, in minutes, you can do something that would have taken maybe an hour or more.”

    Previously, Bard had been able to help with tasks like writing essay drafts or planning a friend’s baby shower based on Google’s large language model, an AI algorithm trained on vast troves of data. But now, Bard will draw on information from Google’s various other services, too. With the new extensions, Bard will now pull information from YouTube, Google Maps, Flights and Hotels by default.

    That will allow users to ask Bard things like “Give me a template for how to write a best man speech and show me YouTube videos about them for inspiration,” or for trip suggestions, complete with driving directions, according to Google. Bard users can opt to disable these extensions at any time.

    Users can also opt in to link their Gmail, Docs and Google Drive to Bard so the tool can help them analyze and manage their personal information. The tool could, for example, help with a query like: “Find the most recent lease agreement from my Drive and check how much the security deposit was,” Google said.

    The company said that users’ personal Google Workspace information will not be used to train Bard or for targeted advertising purposes, and that users can withdraw their permission for the tool to access their information at any time.

    “This is the first step in a fundamentally new capability for Bard – the ability to talk to other apps and services to provide more helpful responses,” Google said of the extensions tool, adding that “this is a very young area of AI” and that the feature will continue to improve based on user feedback.

    Bard is also launching a “double check” button that will allow users to evaluate the accuracy of its responses. When a user clicks the button, certain segments of Bard’s response will be highlighted to show where Google Search results either confirm or differ from what the chatbot said. The double check feature is designed to counter a common AI issue called “hallucinations,” where an AI tool confidently makes a statement that sounds real, but isn’t actually based in fact.

    “We’re constantly working on reducing those hallucinations in Bard,” Hsiao said. But in the meantime, the company wanted to create a way to address them. “You can kind of think of it as spell check, but double checking the facts.”

    Bard will now also allow one user to share a conversation with the chatbot with another person, who can then expand on the chat themselves.

    It’s still early days for Bard, which launched in March as an “experiment” and still notes on its website that the tool “may display inaccurate or offensive information that doesn’t represent Google’s views.” But this latest update offers a glimpse at how Google may ultimately seek to incorporate generative AI into its various services.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York (CNN) —

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions, organizations and individuals raising alarms over how OpenAI and other generative AI systems are impacting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors, including Silverman, arguing that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”


  • Baidu says its AI is in the same league as GPT-4 | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong (CNN) —

    Chinese tech giant Baidu is officially taking on GPT-4.

    On Tuesday, the company unveiled ERNIE 4.0, the newest version of its artificial intelligence chatbot that it directly compared to the latest iteration of OpenAI’s ChatGPT.

    The new ERNIE Bot “is not inferior in any aspect to GPT-4,” Baidu’s billionaire CEO, Robin Li, told an audience at its annual flagship event.

    Speaking onstage, Li showed how the bot could generate a commercial for a car within minutes, solve complicated math problems and create a plot for a martial arts novel from scratch. The bot works mainly in Mandarin Chinese, its primary language. It is also able to handle queries and produce responses in English at a less advanced level.

    Li said the demonstrations showed how the bot had been “significantly improved” in terms of its understanding of queries, generation of complex responses and memory capabilities.

    While coming up with ideas for the novel, for instance, the bot was able to remember previous instructions and create sophisticated story lines by adding conflicts and characters, said Li.

    “We always complained that AI was not intelligent enough,” he quipped.

    “But today, it understands almost everything you say, and in many cases, it understands what you’re saying better than your friends or your colleagues.”

    Charlie Dai, vice president and research director of technology at Forrester, said Baidu is “the first vendor in China” to claim it could perform as well as GPT-4.

    “We still need more benchmarking evidence to prove it, but I’m cautiously optimistic that this is China’s GPT-4 moment, given its long-term investment in AI [and machine learning],” he told CNN.

    In contrast to a pre-recorded presentation in March that failed to impress investors, Li demonstrated the bot in real time.

    Investors appeared unmoved, however, with Baidu’s shares down 1.4% in Hong Kong following the presentation.

    Baidu (BIDU) has been a frontrunner in China in the race to capitalize on the excitement around generative AI, the technology that underpins systems such as ChatGPT or its successor, GPT-4.

    The Beijing-based company unveiled ERNIE Bot in March, before launching it publicly in August.

    The newest iteration will launch first to invited users, Li said. The company did not specify when it would be made available publicly.

    ERNIE Bot has quickly gained traction, racking up more than 45 million users after reaching the top of Chinese app stores at one point, according to the company. ChatGPT, which was released last November, surpassed 100 million users in its first two months, according to a March report by Goldman Sachs analysts.

    Baidu faces competition within China, from companies such as Alibaba (BABA) and SenseTime, which have also shown off their own ChatGPT-style tools.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as video and audio.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu is a market leader in China, said Dai.

    But the competition in this space “has just begun, and AI tech leaders like Alibaba … Huawei, JD Cloud, SenseTime, and Tencent all have chance to take the lead,” he noted.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    But Baidu has previously touted how ERNIE can outperform ChatGPT in some instances, saying its bot had scored higher marks than OpenAI’s on some academic exams.

    The Chinese company also announced Tuesday it had updated its suite of services to integrate the latest upgrades from ERNIE. Baidu’s popular search engine is now able to use the tool to produce more specific results, while its mobile mapping app can help users book services, such as taxis, according to Li.

    By doing so, “Baidu is also the first Chinese tech leader that has made substantial progress in modernizing the majority of its products” with an AI model, said Dai.


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business




    (CNN) —

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was a new one: It’s a capability typically reserved for only its human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash from parents and from some Snapchat users over privacy concerns, “creepy” exchanges and the inability to remove the feature from their chat feed unless they pay for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Huawei wants to go all in on AI for the next decade | CNN Business



    Hong Kong (CNN) —

    Huawei has joined the list of companies that want to be all about artificial intelligence.

    For the first time in about 10 years, the Chinese tech and telecoms giant announced its new strategic direction on Wednesday, saying it would shift its focus to AI. Previously, the company had prioritized cloud computing and intellectual property, respectively, over two decade-long periods.

    Meng Wanzhou, Huawei’s rotating chairwoman and chief financial officer, made the announcement in Shanghai during a company event.

    “As artificial intelligence gains steam, and its impact on industry continues to grow, Huawei’s All Intelligence strategy is designed to help all industries make the most of new strategic opportunities,” the company said in a statement.

    Meng said in a speech that Huawei was “committed to building a solid computing backbone for China — and another option for the world.”

    “Our end goal is to help meet the diverse AI computing needs of different industries,” she added, without providing details.

    Huawei’s decision follows a similar move by fellow Chinese tech giant Alibaba (BABA), announced earlier this month, to prioritize AI.

    Other companies, such as Japan’s SoftBank, have also long declared an intent to focus more on the fast-moving technology, and more businesses have jumped on the bandwagon this year due to excitement about platforms such as GPT-4.

    Meng returned to China in September 2021 after spending nearly three years under house arrest in Canada as part of an extradition battle with the United States. She and Huawei had been charged with bank fraud and evading economic sanctions against Iran.

    The executive, who is also the daughter of Huawei founder Ren Zhengfei, was able to leave after reaching an agreement with the US Department of Justice and ultimately having her charges dismissed.

    Meng began her role as the rotating chairperson of the company in April and is expected to stay in the position for six months.

    News of Huawei’s strategic update came the same day the company was mentioned in allegations lodged by China against the United States.

    In a statement posted Wednesday on Chinese social network WeChat, China’s Ministry of State Security accused Washington of infiltrating Huawei servers nearly 15 years ago.

    “With its powerful arsenal of cyberattacks, the United States intelligence services have carried out surveillance, theft of secrets and cyberattacks against many countries around the world, including China, in a variety of ways,” the ministry said.

    It alleged that the US National Security Agency (NSA), in particular, had “repeatedly conducted systematic and platform-based attacks on China in an attempt to steal China’s important data resources.”

    Huawei declined to comment on the allegations, while the NSA did not immediately respond to a request for comment outside regular US business hours.

    The claims are especially notable because US officials have long suspected the company of spying on the networks that its technology operates, using it as grounds to restrict trade with the company. Huawei has vehemently denied the claims, saying it operates independently of the Chinese government.

    In 2019, Huawei was added to the US “entity list,” which restricts exports to select organizations without a US government license. The following year, the US government expanded on those curbs by seeking to cut Huawei off from chip suppliers that use US technology.

    In recent weeks, Huawei has added to US-China tensions again after launching a new smartphone that represents an apparent technological breakthrough.

    Huawei launched the Mate 60 Pro, its latest flagship device, last month, prompting a US investigation. Analysts who have examined the phone have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    — Mengchen Zhang contributed to this report.


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business



    Taipei (CNN) —

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

    Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities using Nvidia’s chips and software will enable Foxconn to better utilize AI in its electric vehicles (EV).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

    Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It doesn’t sell the vehicles under its own brand. Instead, it will build them for clients in Taiwan and globally.

    In 2021, Foxconn unveiled three EV models, including two passenger cars and a bus, for the first time. They were followed by additional models last year and two new ones — Model N, a cargo van, and Model B, a compact SUV — during Foxconn’s tech day on Wednesday.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, it projected.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

    During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025. It aims to eventually produce 40% to 45% of the world’s EVs.

    But its foray into the industry hasn’t been entirely smooth.

    Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, with the American car company filing for bankruptcy protection and announcing a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi is a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.


  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business



    New York
    CNN
     — 

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With experts expecting artificial intelligence to be ever more widely applied, professors now fear that ignoring or discouraging its use will be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that the policy should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry and so in demand, so we provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”
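White’s point about framing can be sketched with a toy helper. The function and prompts below are hypothetical (not from his course or any real ChatGPT API); they simply show how the same question can be sent bare or wrapped with a role, constraints and an output format:

```python
from typing import List, Optional

# Toy illustration of prompt framing: the same question, sent bare
# versus wrapped with a role, constraints and an output format.
# frame_prompt and the example prompts are hypothetical.

def frame_prompt(question: str,
                 role: Optional[str] = None,
                 constraints: Optional[List[str]] = None,
                 output_format: Optional[str] = None) -> str:
    """Assemble a prompt from a bare question plus optional framing."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(question)
    for c in constraints or []:
        parts.append(f"Constraint: {c}")
    if output_format:
        parts.append(f"Answer as {output_format}.")
    return "\n".join(parts)

bare = frame_prompt("Explain photosynthesis.")
framed = frame_prompt(
    "Explain photosynthesis.",
    role="a biology tutor for first-year students",
    constraints=["use no more than 100 words", "avoid jargon"],
    output_format="three bullet points",
)
print(framed)
```

Sending the two versions to the same model would typically produce very different answers, which is the effect the quote describes.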

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can still seek out other methods such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students for careers in PR, communications and social media management, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how they work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students about how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he’s made over 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers an online three-hour training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers may keep grading a student a certain way even after the student has improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale from one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.

    “That shouldn’t be the final grade. Teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”
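The grading workflow Tarver describes can be sketched as follows. The prompt wording and helper functions here are hypothetical, and the model call itself is omitted; the sketch only shows wrapping an assignment in a grading prompt and pulling a 1-to-10 score out of a free-text reply:

```python
import re
from typing import Optional

# Sketch of a "second pair of eyes" grading check: build a grading
# prompt for an assignment, then parse a 1-to-10 score from the kind
# of free-text reply a chat model might return. Illustrative only.

def build_grading_prompt(assignment_text: str) -> str:
    return (
        "Rate the sentence structure of the following assignment "
        "on a scale from 1 to 10, then briefly justify the rating.\n\n"
        + assignment_text
    )

def extract_score(model_reply: str) -> Optional[int]:
    """Return the first integer between 1 and 10 found in the reply."""
    for token in re.findall(r"\d+", model_reply):
        value = int(token)
        if 1 <= value <= 10:
            return value
    return None

prompt = build_grading_prompt("The quick brown fox jumps over the lazy dog.")
# A reply of the kind a model might produce; in practice it would come
# from an API call that sends `prompt` to the model.
reply = "I would rate the sentence structure 7 out of 10: clear but simple."
score = extract_score(reply)
print(score)  # -> 7
```

As the article notes, such a score would inform a teacher’s judgment rather than replace it.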


  • YouTube unveils a slew of new AI-powered tools for creators | CNN Business




    CNN
     — 

    YouTube on Thursday unveiled a slew of new artificial intelligence-powered tools to help creators produce videos and reach a wider audience on the platform, as companies race to incorporate buzzy generative AI technology directly into their core products.

    “We want to make it easier for everyone to feel like they can create, and we believe generative AI will make that possible,” Neal Mohan, YouTube’s CEO, told reporters Thursday during the company’s annual Made On YouTube product event.

    “AI will enable people to push the boundaries of creative expression by making the difficult things simple,” Mohan added. He said YouTube is trying to bring “these powerful tools” to the masses.

    The video platform, under the Alphabet-Google umbrella, teased a new generative AI feature dubbed Dream Screen specifically for its short-form video arm and TikTok competitor, YouTube Shorts. Dream Screen is an experimental feature that lets creators add AI-generated video or image backgrounds to their vertical videos.

    To use Dream Screen, creators can type their idea for a background as a prompt and the platform will do the rest. A user, for example, could create a background that makes it look like they are in outer space or on a beach where the sand is made out of jelly beans, per demos of the tool shared on Thursday.

    Dream Screen is being introduced to select creators and will be rolled out more broadly next year, the company said.

    YouTube also unveiled new AI-powered tools that creators can access to help brainstorm or draft outlines for videos or search for specific music using descriptive phrases. YouTube said it was bringing an AI-powered dubbing tool that will let users share their videos in different languages.

    (Image: AI-powered tools in YouTube Studio.)

    Alan Chikin Chow, 26, a content creator based in Los Angeles who recently hit 30 million subscribers on YouTube, told CNN that he is most excited about using the new AI-powered dubbing tool for his comedy videos. Chikin Chow currently boasts the title of the most-watched YouTube Shorts creator in the world.

    “I think global content is the future,” Chikin Chow told CNN. “If you look at the trends of our recent generation, the things that have really impacted and moved culture are ones that are global,” he added, citing the Korean smash-hit TV series “Squid Game” as one example.

    Using the AI-powered dubbing features, he said he hopes to reach audiences in new corners of the world that might not otherwise be able to engage with his content.

    (Photo: Alan Chikin Chow at the 2022 YouTube Streamy Awards, the Beverly Hilton, Los Angeles, December 4, 2022. Emma McIntyre/Getty Images for dick clark productions)

    Chikin Chow added that he’s also excited to use the new editing tools to help save time.

    The rise of generative AI has animated the tech sector and broader public — becoming the latest buzzword out of Silicon Valley since the launch of OpenAI’s ChatGPT service late last year.

    Some industry watchers and AI skeptics have argued that powerful new AI tools carry potential dangers, such as making it easier to spread misinformation via deepfake images, or perpetuate biases at a larger scale. Many creative professionals — whose works are often swept up into the datasets required to train and power AI tools — are also raising the alarm over potential intellectual property rights issues.

    And some prominent figures inside and outside the tech industry even say there’s a potential that AI can result in civilization “extinction” and compare its potential risk to that of “nuclear war.”

    Despite the frenzy AI has caused, Chikin Chow told CNN that he ultimately views it as a “collaborator” and a “supplement” to help propel his creative work forward.

    “I think that the people who are able to take change and move with it are the ones that are going to be successful long term,” Chikin Chow said.


  • US escalates tech battle by cutting China off from AI chips | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong/Washington
    CNN
     — 

    The Biden administration is reducing the types of semiconductors that American companies will be able to sell to China, citing the desire to close loopholes in existing regulations announced last year.

    On Tuesday, the US Commerce Department unveiled new rules that further tighten a sweeping set of export controls first introduced in October 2022.

    The updated rules “will increase effectiveness of our controls and further shut off pathways to evade our restrictions,” US Commerce Secretary Gina Raimondo said in a statement. “We will keep working to protect our national security by restricting access to critical technologies, vigilantly enforcing our rules, while minimizing any unintended impact on trade flows.”

    Advanced artificial intelligence chips, such as Nvidia’s H800 and A800 products, will be affected, according to a regulatory filing from the US company.

    The regulations also expand export curbs beyond mainland China and Macao to 21 other countries with which the United States maintains an arms embargo, including Iran and Russia.

    The measures, which have affected the shares of major American chipmakers, are set to take effect in 30 days.

    The original rules had sought to hamper China’s ability to procure advanced computing chips and manufacture advanced weapons systems. Since then, senior administration officials have suggested they needed to be adjusted due to technological developments.

    Raimondo, who visited China in August, said the administration was “laser-focused” on slowing the advancement of China’s military. She emphasized that Washington had opted not to go further in restricting chips for other applications.

    Chips used in phones, video games and electric vehicles were purposefully carved out from the new rules, according to senior administration officials.

    But these assurances are unlikely to placate Beijing, which has vowed to “win the battle” in core technologies in order to bolster the country’s position as a tech superpower.

    China’s Foreign Ministry criticized the Biden administration’s new rules Monday, before they were officially unveiled.

    “The US needs to stop politicizing and weaponizing trade and tech issues and stop destabilizing global industrial and supply chains,” spokesperson Mao Ning told a press briefing. “We will closely follow the developments and firmly safeguard our rights and interests.”

    As part of ongoing dialogue established by Raimondo and other US officials with their Chinese counterparts, Beijing was informed of the impending updates, according to a senior administration official.

    “We let the Chinese know for clarity that these rules were coming, but there was no negotiation with them,” the official told reporters.

    The tech rivalry between the world’s two largest economies has been heating up. In recent months, the United States has enlisted its allies in Europe and Asia in restricting sales of advanced chipmaking equipment to China.

    In July, Beijing hit back by imposing its own curbs on exports of germanium and gallium, two elements essential for making semiconductors.

    Shares of US chipmakers fell Tuesday following the announcement of new export controls.

    Nvidia’s (NVDA) stock closed down 4.7%, while Intel (INTC) slipped 1.4%. AMD (AMD) shares ended 1.2% lower.

    In its filing, Nvidia said the rules imposed new licensing requirements for exports to China and other markets such as Saudi Arabia, the United Arab Emirates and Vietnam.

    The company said its A800 chip, which was reportedly created for Chinese customers in order to circumvent last year’s restrictions, would be among the components affected.

    However, “given the strength of demand for our products worldwide, we do not anticipate that the additional restrictions will have a near-term meaningful impact on our financial results,” Nvidia said.

    The broader US chipmaking industry is also examining the impact of the new rules.

    The Semiconductor Industry Association said in a statement Tuesday that while it recognized the need to protect national security, “overly broad, unilateral controls risk harming the US semiconductor ecosystem without advancing national security as they encourage overseas customers to look elsewhere.”

    “We urge the administration to strengthen coordination with allies to ensure a level playing field for all companies,” added the group, which represents 99% of the US chip sector.

    The measures are also being reviewed in Europe. On Tuesday, ASML, the Dutch chipmaking equipment manufacturer, said it was evaluating the implications of the rules, though it did not expect them “to have a material impact on our financial outlook for 2023.”

    During a call Wednesday about the company’s third-quarter results, ASML chief executive Peter Wennink said the updated export restrictions would affect between 10% and 15% of the firm’s sales to China.

    On Tuesday, the US Department of Commerce added 13 Chinese entities to a list of firms with which US companies may not do business for national security reasons.

    They include two Chinese startups, Biren Technology and Moore Thread Intelligent Technology, and their subsidiaries.

    The department alleges that these companies are “involved in the development of advanced computing chips that have been found to be engaged in activities contrary to US national security.”

    CNN has reached out to Biren and Moore Thread for comment.

    — Anna Cooban contributed reporting.
