ReportWire

Tag: iab-technology industry

  • So long, robotic Alexa. Amazon’s voice assistant gets more human-like with generative AI | CNN Business

    CNN —

    Amazon’s Alexa is about to bring generative AI inside the house, as the company introduces sweeping changes to how its ubiquitous voice assistant both sounds and functions.

    The company announced a generative AI update for Alexa and, subsequently, for all Echo products dating back to 2014, at a press event Wednesday at its new campus in Arlington, Virginia. Alexa will be able to resume conversations without a wake word, respond more quickly, learn user preferences, field follow-up questions and change its tone based on the topic. Alexa will even offer opinions, such as which movies should have won an Oscar but didn’t.

    Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “It feels just like talking to a human being,” an Amazon executive claimed.

    The updates come as Amazon tries to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products. The company did not disclose when the updates will make their way into products.

    In a live demo, Dave Limp, senior VP of devices and services at Amazon, asked Alexa about his favorite college football team without ever stating the name. (Limp said he had previously told Alexa, and it remembered.) If his favorite team wins, Alexa responds joyfully; if it loses, Alexa responds with empathy.

    When Limp said “Alexa, let’s chat,” it launched a special mode that allowed for a back-and-forth exchange on various topics. Notably, Limp paused several times to address the audience and resumed the conversation with Alexa without using the “Alexa” wake word, picking up where they left off.

    The demo wasn’t without hiccups – Alexa’s response time at times lagged – but the voice assistant had far more personality, spoke in a more natural and expressive tone, and kept the conversation flowing back and forth.

    Although the company did not outline specific safeguards – some other large language models have previously gone off the rails – it said on its website that it “will design experiences to protect our customers’ privacy and security, and to give them control and transparency.”

    The company also said new developer tools will allow companies to work alongside its large language model. In a blog post, Amazon said it is already partnering with a handful of companies, such as BMW, to develop conversational in-car voice assistant capabilities.

    Rowan Curran, an analyst at Forrester Research, said the news marks a major step forward in bringing generative AI to the home and allowing it to accomplish everyday tasks. Connecting speech-to-text to external systems and using a large language model to understand and produce natural speech, he said, is “where we can begin to see the future of how we will use this technology near-ubiquitously in our everyday lives.”

    Some US users will get access to the changes through a free preview on existing Echo devices. Over the years, Alexa has been infused into countless Echo products, from its speaker and hub lineup to clocks, microwaves and eyeglasses.

    Amazon also said it will be bringing generative AI to its Fire TV platform, allowing users to ask more natural, nuanced or open-ended questions about genres, storylines and scenes, and to get more targeted content suggestions.

    Alexa launched nearly a decade ago and, along with Apple’s Siri, Microsoft’s Cortana and other voice assistants, promised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished some of those goals faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon has slashed staff in recent months and shelved products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division did not escape unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees. In March, the company said about 9,000 more jobs would be impacted. Limp previously told CNN his division lost about 2,000 people, about half of which were from the Alexa team.

    Still, he emphasized innovation around Alexa has not stalled. “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business

    Tokyo (Reuters) —

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced at the conference the idea of “Artificial Super Intelligence,” which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business

    Taipei (CNN) —

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

    Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities using Nvidia’s chips and software will enable Foxconn to better utilize AI in its electric vehicles (EV).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

    Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It won’t sell the vehicles under its own brand; instead, it will build them for clients in Taiwan and globally.

    In 2021, Foxconn unveiled its first three EV models: two passenger cars and a bus. They were followed by additional models last year and two new ones — Model N, a cargo van, and Model B, a compact SUV — during Foxconn’s tech day on Wednesday.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, it projected.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

    During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025. It aims to eventually produce 40% to 45% of the world’s EVs.

    But its foray into the industry hasn’t been entirely smooth.

    Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, with Lordstown filing for bankruptcy protection and announcing a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi is a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business

    New York (CNN) —

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a widely read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • US judge set to decertify Google Play class action | CNN Business

    A US judge plans to free Google from having to defend against a class action by 21 million consumers who claimed it violated federal antitrust law by overcharging them in its Google Play app store.

    Monday’s decision by US District Judge James Donato in San Francisco could significantly reduce damages that Google, a unit of Alphabet, might owe over the distribution of Android mobile applications.

    Consumers claimed they would have paid less for apps and enjoyed expanded choice but for Google’s alleged monopoly. Google has denied wrongdoing.

    Donato said his Nov. 2022 class certification order should be thrown out because his decision, also announced Monday, not to let an economist testify as an expert witness for the consumers eliminated an “essential element” of their argument for certification.

    The judge said he couldn’t decertify the class immediately because Google had been appealing his November order. He directed lawyers for Google and the consumers to try resolving that issue before a Sept. 7 hearing.

    The class action included consumers from 12 US states and five territories, who were not part of a similar case against Google brought by various state attorneys general.

    Class actions let plaintiffs sue as a group, and potentially obtain larger recoveries at lower cost than if they were forced to sue individually.

    Lawyers for the consumers did not immediately respond to requests for comment. Google and its lawyers did not immediately respond to similar requests.

    The case is part of wide-ranging antitrust litigation that includes 38 states and the District of Columbia, and companies including Epic Games and Match Group.

    The case is In re Google Play Store Antitrust Litigation, US District Court, Northern District of California, No. 21-md-02981.


  • Google to require disclosures of AI content in political ads | CNN Business

    New York (CNN) —

    Starting in November, Google will require political advertisements to prominently disclose when they feature synthetic content — such as images generated by artificial intelligence — the tech giant announced this week.

    Political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers who might see the ad, Google said Wednesday in a blog post. The rule, an addition to the company’s political content policy that covers Google and YouTube, will apply to image, video and audio content.

    The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year. At the same time, artificial intelligence technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.

    AI-generated images have already begun to crop up in political advertisements. In June, a video posted to X by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were tricky to spot: They were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    The Republican National Committee in April released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington, DC, to whom CNN showed the video did not notice it on their first watch.

    In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.

    Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad,” including changes such as image resizing, color corrections or “background edits that do not create realistic depictions of actual events.”

    A group of top artificial intelligence companies, including Google, agreed in July to a set of voluntary commitments put forth by the Biden administration to help improve safety around their AI technologies. As part of that agreement, the companies said they would develop technical mechanisms, such as watermarks, to ensure users know when content was generated by AI.

    The Federal Election Commission has also been exploring how to regulate AI in political ads.


  • France orders Apple to pull iPhone 12 off shelves for high radiation levels | CNN Business

    CNN —

    Apple is fighting France’s claims that the iPhone 12 surpasses European radiation exposure limits after French regulators on Tuesday ordered a pause on sales and a fix to phones already sold to customers.

    France’s National Frequency Agency (ANFR) said it “has demanded that Apple withdraw the iPhone 12 from the French market, effective 12 September 2023, as measures show the specific absorption rate exceeds the set limits.” The agency said the iPhone 12 is not compliant with European Union regulations.

    “Apple must immediately adopt all necessary measures to prevent the iPhone 12 in the supply chain from being made available on the market,” ANFR added.

    Disputing the agency’s claims, Apple said it had already given the agency multiple lab results conducted by the company and independent third parties that showed the device’s compliance with relevant SAR regulations and global standards.

    The company said it was contesting the ANFR’s review results and would continue to work with the agency to demonstrate the phone’s compliance.

    SAR is a measure of the rate of energy absorption by the body from the source being measured, according to the ANFR. But experts and regulators generally say exposure at these levels is not a cause for health concern.

    “To date, and after much research performed, no adverse health effect has been causally linked with exposure to wireless technologies,” according to the World Health Organization. “Provided that the overall exposure remains below international guidelines, no consequences for public health are anticipated.”

    ANFR ruled that for iPhone 12s already in use, Apple “must adopt all necessary corrective measures to bring the telephones into conformity as soon as possible, otherwise, Apple will have to recall the equipment.”

    The measure was effective from Tuesday, with the regulator adding that it would ensure the product was no longer offered for sale through any distribution channel in France from that day.

    France’s Minister for the Digital Economy Jean-Noel Barrot confirmed in a tweet that iPhone 12 sales are “halted in France until Apple offers an update for all affected devices.”

    “The @anfr found that the iPhone 12 was emitting a level of waves slightly higher than the authorized threshold,” Barrot wrote in another tweet, translated from French. “This level is more than 10 times lower than the level at which there could be a health risk.”

    The announcement came as Apple unveiled the iPhone 15 and iPhone 15 Pro, Apple’s newest iteration of its iconic product, at its annual keynote event in California on Tuesday.


  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business

    CNN —

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features, including the ability to let users engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.

    The new features will roll out in the app within the next two weeks for paying subscribers of ChatGPT’s Plus and Enterprise services. (Subscriptions to the Plus service are $20 a month, and the Enterprise service is currently only offered to business clients.)

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.


  • Did your cell phone make a screeching noise today? Here’s why | CNN Business

    New York (CNN) —

    Today was the day for the US government’s big emergency alert drill, which sent a test message to every TV, radio and cell phone in the nation.

    Starting at approximately 2:20 pm ET on Wednesday, the federal government began conducting a nationwide test of its Emergency Alert System and Wireless Emergency Alerts. The EAS portion of the test sent an emergency alert to all radios and televisions, while the WEA portion of the drill sent an alert to all consumer cell phones.

    The test was being conducted by the Federal Emergency Management Agency in coordination with the Federal Communications Commission. Its purpose was to ensure that the systems in place continue to be an effective means of warning the public about emergencies at a national level.

    Essentially, what this means is that hundreds of millions of cell phones around the country made a screeching alert noise at approximately the same time today, beginning around 2:20 pm ET. Radio and TV stations also blared a test alert at around the same time. But there was no action required from you after receiving the free message — it was just a test.

    Here are answers to all of your burning questions about today’s emergency alert test.

    While some recent models of mobile phones may include a setting to opt out of tests and alerts, none of these settings will affect the 2023 national test, FEMA has said.

    That means if your mobile phone was on and receiving service from a participating wireless provider, you likely received the national Wireless Emergency Alert test, the agency added.

    There are, however, three conditions that would have prevented the cell phone alert from being delivered to a device. If your phone was turned off, had airplane mode switched on, or was not connected to or associated with a cell tower, then it did not receive the message.

    Survivors of domestic violence and people in abusive relationships often have a secret or emergency phone that they don’t want their partner or others to know about. On a call with reporters Tuesday, a senior FEMA official said the agency was aware of these concerns stemming from survivors of domestic violence and their allies. The official recommended that people who did not want a secret phone to be revealed turn the phone completely off ahead of the 2:20 pm ET test — and not turn it back on for thirty minutes, or until after 2:50 pm ET.

    If you wanted to be cautious, you could also wait until you were in a safe place before turning your phone back on.

    Educators braced themselves for some disruption this afternoon, as the test impacting cell phones occurred during school hours for most of the country.

    On the call with reporters, the senior FEMA official recommended that educators, as much as possible, try to use this as a teaching opportunity about federal emergency management and preparedness initiatives.

    The national test cannot be used to monitor, locate or lock your phone, FEMA has said. The test is also using broadcast technology and does not collect any of your data.

    All cell phones should have received an alert and an accompanying text message that reads: “THIS IS A TEST of the National Wireless Emergency Alert System. No action is needed.”

    The free text message was sent in either English or Spanish, depending on the language settings of your device. The text was accompanied by a unique tone and vibration that is meant to make the alert accessible to the entire public, including people with disabilities, FEMA has said.

    The test was broadcast by cell towers for approximately 30 minutes beginning at 2:20 pm ET, FEMA said. During this time, all compatible wireless phones that were switched on, within range of an active cell tower, and whose wireless provider participates in WEA tests should have received the text message.

    Although the test was transmitting for approximately 30 minutes, you should only have received the alert message once.

    Meanwhile, all radios and televisions also broadcast a test emergency alert at the same time as part of the broader test. This message, which ran for approximately one minute, stated: “This is a nationwide test of the Emergency Alert System, issued by the Federal Emergency Management Agency, covering the United States from 14:20 to 14:50 hours ET. This is only a test. No action is required by the public.”

    Can the emergency alert impact my body?

    In short: No. There are a number of false claims circulating online with regard to the test alert, including some conspiracy theories that incorrectly allege the sound emitted as part of the national test can impact your body at the cellular level. This is false.

    “FEMA is not aware of any adverse health effects caused by the audio signal,” the agency has stated.

    And while this is a national test, it uses the same technology and infrastructure that state and local authorities rely on to send localized Amber Alerts or extreme weather warnings, a senior FEMA official emphasized to reporters on Tuesday. In a frequently asked question sheet released by FEMA ahead of Wednesday’s test, the agency stated: “The audio signal that will be used in the National Test is the same combination of audio tones that has been used since 1963 in the original Emergency Broadcast System.”

    If you have a mobile phone that was switched on, not on airplane mode, within range of an active cell tower and on a network whose wireless provider participates in Wireless Emergency Alerts, then you should have received the test message on Wednesday afternoon by 2:50 pm ET.

    If you are trying to figure out why you did not receive an alert when you should have, or have any other feedback on the test, you can write to the email address: FEMA-National-Test@fema.dhs.gov.


  • Apple continues its sweep to roll out USB-C to more devices | CNN Business

    CNN —

    Apple (AAPL) quietly announced its next-generation Pencil that works with iPads and now includes USB-C charging.

    The change comes nearly a month after Apple retired its Lightning charger, a milestone moment toward universal charging amid pressure from EU regulators.

    Like previous models, the third-generation Apple Pencil is intended for taking notes, sketching and marking up documents. It also supports the hover feature, which allows users to preview and switch between different tools and app controls, when used with a 12.9-inch iPad Pro (6th generation) and 11-inch iPad Pro (4th generation). The price is $79, down $50 from the second-generation Apple Pencil and $20 less than the original.

    The biggest change to the latest model comes to the charging system, which is noteworthy not only because the company has been resistant to making the switch for years but because it’s about to make charging that much easier for its customers.

    At its iPhone 15 event in September, the company announced all of its next-generation smartphones and new AirPods Pro will launch with USB-C charging. Apple previously switched its iPads and MacBooks to USB-C charging, but the push to finally add it to iPhones came less than a year after the European Union voted to approve legislation to require smartphones, tablets, digital cameras, portable speakers and other small devices to support USB-C charging by 2024.

    The first-of-its-kind law aims to pare down the number of chargers and cables consumers must contend with when they purchase a new device and to allow users to mix and match devices and chargers even if they were produced by different manufacturers. In doing so, however, Apple will give up control of its wired charging ecosystem, and identifying good chargers from bad ones won’t be obvious to many consumers.

    Although Apple does not break out its Pencil sales numbers, David McQueen, a director at ABI Research, estimates about 42 million have been sold since the Pencil launched in 2015, given that about 420 million iPads have been sold since then and assuming 10% or fewer of those buyers also bought an Apple Pencil.

    “I’d have to think it’d be this low because of its relatively high price, high-end use case, and the availability of much cheaper alternatives that are capable of working with iPad,” he said.


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business

    CNN —

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained off of the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze that was developed by computer scientists at the University of Chicago and thwarts the attempts of AI models to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent Van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
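
    The general idea behind this kind of cloaking can be illustrated with a small sketch. The code below is not the actual Glaze algorithm, just a minimal, hypothetical Python example (it assumes PyTorch, torchvision and Pillow are installed; the backbone model, step sizes and file paths are arbitrary placeholders) of a bounded, imperceptible pixel perturbation that shifts what a generic vision model “sees” toward a different target image while the picture stays visually unchanged for people.

    ```python
    # Rough illustrative sketch only; NOT the actual Glaze algorithm.
    # It demonstrates the general idea of a small, bounded pixel perturbation
    # that moves an image's features (under one vision model) toward a target.
    # Assumes PyTorch, torchvision and Pillow; model and hyperparameters are
    # arbitrary choices made for illustration.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    def cloak(image_path: str, target_path: str,
              steps: int = 50, epsilon: float = 4 / 255, step: float = 1 / 255):
        """Return a 'cloaked' image tensor whose pixels differ from the original
        by at most epsilon, but whose features drift toward the target image."""
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
        features = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier

        prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        x = prep(Image.open(image_path).convert("RGB")).unsqueeze(0)
        target = prep(Image.open(target_path).convert("RGB")).unsqueeze(0)

        with torch.no_grad():
            target_feat = features(target)

        delta = torch.zeros_like(x, requires_grad=True)   # the invisible change
        for _ in range(steps):
            loss = F.mse_loss(features(x + delta), target_feat)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()          # move features toward the target
                delta.clamp_(-epsilon, epsilon)            # keep the change imperceptible
                delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixel values valid
            delta.grad.zero_()
        return (x + delta).detach()
    ```

    In a sketch like this, the epsilon budget is what keeps the change invisible to the eye; real tools have to balance that budget against how strongly the model’s view of the image is shifted.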

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version of the tool as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest resolution of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough and that regulations are needed governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI-generated images threatens the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers' tool, PhotoGuard, would prevent AI models from editing the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.
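
    The “immunization” Salman describes can be illustrated with a similar, equally hypothetical sketch: instead of steering an image’s features toward a target, the bounded perturbation pushes them away from their original values, so a model that later tries to edit the photo is working from a misleading internal representation. Again, this is not the PhotoGuard method itself, just a generic Python example under the same assumptions as the sketch above (PyTorch, torchvision, Pillow and an arbitrary backbone model).

    ```python
    # Hypothetical sketch of the "immunization" idea; NOT the PhotoGuard method.
    # The bounded perturbation pushes the image's features away from their
    # original values under one vision model, so later AI edits degrade.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    def immunize(image_path: str, steps: int = 50,
                 epsilon: float = 4 / 255, step: float = 1 / 255):
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
        features = torch.nn.Sequential(*list(backbone.children())[:-1])

        prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        x = prep(Image.open(image_path).convert("RGB")).unsqueeze(0)

        with torch.no_grad():
            original_feat = features(x)

        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            # Negative sign: maximize the distance from the original features.
            loss = -F.mse_loss(features(x + delta), original_feat)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
                delta.copy_((x + delta).clamp(0, 1) - x)
            delta.grad.zero_()
        return (x + delta).detach()
    ```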

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tried to make the same edits to a photo that had been immunized by PhotoGuard, the resulting image depicted Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • What to expect from Apple’s iPhone 15 reveal | CNN Business

    CNN —

    Apple is expected to debut its iPhone 15 lineup Tuesday at the company’s annual September keynote event, and it could introduce the biggest change to the phone’s design in 11 years.

    The press event, which Apple teased with a “wonderlust” tagline, will take place at the company’s headquarters in Cupertino, California, and will be livestreamed on its website, starting at 10 a.m. local time.

    Although the annual iPhone event has become formulaic over the years, announcing incremental changes to battery life, camera system and displays, this year Apple is expected to introduce USB-C charging to its smartphones for the first time. The change could ultimately streamline the charging process across various devices and brands.

    But the company will have to show off more than just a new charging system to get users to upgrade. Last month, Apple reported that its sales fell for the third consecutive quarter. iPhone revenue came in at $39.7 billion for the quarter, marking an approximately 2% year-over-year decline.

    Here’s more of what to expect:

    Apple has previously switched its iPads and MacBooks to USB-C charging, but now may be the time for the company to finally make the change on iPhones. The move would come less than a year after the European Union voted to approve legislation to require smartphones, tablets, digital cameras, portable speakers and other small devices to support USB-C charging by 2024. The first-of-its-kind law aims to pare down the number of chargers and cables consumers must contend with when they purchase a new device and to allow users to mix and match devices and chargers even if they were produced by different manufacturers.

    “This is arguably the biggest disruption to iPhone design for several years, but in reality, it is hardly a dramatic move,” said Ben Wood, an analyst at CCS Insight.

    Last year, Apple’s senior vice president of worldwide marketing, Greg Joswiak, publicly stressed the value and ubiquity of the Lightning charger, which is designed for faster device charging, but noted “obviously we will have to comply” with the EU mandate. The Lightning charger was introduced in 2012.

    The change to USB-C would also likely usher in a wave of charging accessories, potentially in various colors. It’s possible iPhone users will also have to pay for a new USB-C wall adapter, because the connector is a different size.

    The entire iPhone 15 lineup is rumored to get the “Dynamic Island” feature — an interactive home for alerts, notifications and various controls — that replaces the notch on top of the screen. The tool launched on the higher-end iPhone 14 Pro models last year.

    Although there are few other rumors circulating about its entry-level iPhone models, the iPhone 15 Pro and 15 Pro Max models are expected to get a handful of new features, according to a Bloomberg report. This may include a rear-facing periscope lens, which allows for more optical zoom, and a titanium casing to make the device up to 15% lighter and thinner. The Pro models are also expected to get Apple’s latest A17 chip – the first with 3 nanometer technology, which could deliver faster processing and a longer-lasting battery.

    The lineup is also expected to come in various new colors, as hinted at in the Apple logo featured in the event’s invitation, including navy and updated shades of gray, white and silver.

    In June, Apple introduced the Vision Pro, a mixed reality headset that the company says will usher in a new era of “spatial computing.” Yoram Wurmser, an analyst at Insider Intelligence, believes the company will tease “some new features and deeper collaborations” to drum up excitement ahead of its 2024 launch. (It’s also possible Apple could announce a launch date). The headset blends both virtual reality and augmented reality, a technology that overlays virtual images on live video of the real world. The headset is Apple’s biggest, and riskiest, product launch in years.

    New AirPods, Apple Watches and software release dates

    The company typically unveils its latest Apple Watches alongside the iPhone each year, so it’s likely we’ll see the debut of the Apple Watch Series 9 and possibly its next-generation Ultra 2 smartwatch, its more rugged wearable for serious sports enthusiasts. According to Bloomberg, Apple is working on a full revamp of its smartwatch for next year’s Apple Watch Series 10, so this year’s updates will be relatively minor.

    In addition, Apple is expected to show off its next-generation AirPods with a new charging case that will work with USB-C cables. It’s also likely to announce launch dates for its next-generation operating systems for the iPhone, iPads, Mac computers and Apple Watch.

    In May, for example, Apple showed off a slew of new tools coming to iOS 17, such as a more accurate autocorrect, a new feature called Live Voicemail that will transcribe a caller’s message in real time, and a NameDrop tool that lets users share their contact information by holding two iPhones close together. The iPhone’s phone app will also reposition the hang up button to the bottom right of the screen, next to other functions.

    With the new iPhone expected to take center stage, many analysts don’t anticipate Apple will release new iPads or Mac computers until October. And despite rivals Samsung and Google doubling down on foldable devices, Apple is still not expected to unveil a similar version this fall.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business

    Washington (CNN) —

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks acquired by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Senate Russell office building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill, introduced Tuesday and led by Minnesota Democratic Sen. Amy Klobuchar, that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Amazon invests up to $4 billion in Anthropic AI in exchange for minority stake and further AWS integration | CNN Business




    CNN
     — 

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    The deepening partnership between the two companies highlights how some large tech firms with massive cloud computing resources are increasingly leveraging those assets to gain a bigger foothold in AI.

    As part of the deal, AWS will become the “primary” cloud provider for Anthropic, with the AI company using Amazon’s cloud platform to do “the majority” of its AI model development and research into AI safety, the companies said. That will include using Amazon’s suite of in-house AI chips.

    Anthropic also made a “long-term commitment” to offer its AI models to AWS customers, Amazon said, and promised to give AWS users early access to features such as the ability to adapt Anthropic models for specific use cases.

    “With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature,” Amazon said in a release.

    Anthropic already offers its models to AWS users through Amazon Bedrock, Amazon’s one-stop shop for AI products. Bedrock also provides access to models from other providers including Stability AI and AI21 Labs, along with proprietary models developed by Amazon itself.

    In a release, Anthropic said that Amazon’s minority stake would not change its corporate governance structure nor its commitments to developing AI responsibly.

    “We will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems,” Anthropic said.

    Amazon and Anthropic both made commitments to the Biden administration this year to conduct external audits of their AI systems before releasing them to the public.

    Amazon’s investment in Anthropic follows similar moves by cloud leaders such as Microsoft, which invested $1 billion in ChatGPT-maker OpenAI in 2019, followed up with a $10 billion investment earlier this year, and launched a push to bring OpenAI’s technology into consumer-facing Microsoft products, such as Bing.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business




    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

    At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

    The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product: its global smartphone market share hovers around 1%, according to data from ABI Research. Google also sells the devices in only a handful of countries, and keeping volumes low has been a strategic choice, since Google remains predominantly a software company with many hardware partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

    Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject, and a new Audio Magic Eraser tool that lets users erase distracting sounds from videos.

    Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new enhanced zoom feature lets users pinch to zoom in about 30 times after a photo is taken to focus on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools intend to give users more control over their photos, some analysts like Thomas Husson at market research firm Forrester believe it will be harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

    The company said Google Assistant will now sound more realistic when it engages with callers. Google’s screen call tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing a call through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), Google’s second-generation smartwatch. Users can tap Bard capabilities via an upgraded Google Assistant watch app to ask how they slept and get other health insights.

    In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven heart rate algorithm to provide a more accurate reading than before. But Hayden said he doesn’t think more AI will add much to the watch’s existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • Medical imaging struggles to read dark skin. Researchers say they’ve found a way to make it easier | CNN Business




    CNN
     — 

    Traditional medical imaging – used to diagnose, monitor or treat certain medical conditions – has long struggled to get clear pictures of patients with dark skin, according to experts.

    Researchers say they have found a way to improve medical imaging, a process through which physicians can observe the inside of the body, regardless of skin tone.

    The new findings were published in the October edition of the journal Photoacoustics. The team tested the forearms of 18 volunteers, with skin tones ranging from light to dark. They found that a distortion of the photoacoustic signal that makes the imaging more difficult to read, called clutter, increased with darker skin tones.

    “When you have darker skin, you have more melanin. And melanin is actually one of the optical absorbers that we inherently have within our body,” Muyinatu Bell, an author of the study and director and founder of the Photoacoustic and Ultrasonics Systems Engineering (PULSE) Lab at Johns Hopkins University, told CNN. In other words, a higher melanin content in the skin could be associated with more clutter.

    “The skin essentially acts as a transmitter of sound, but it’s not the same type of focused sound that we get and we want with ultrasound, it’s everywhere diffused and creates a lot of confusion,” Bell said. “And so, this scattering of the sound that’s caused by the melanin absorption is worse and worse with the higher melanin concentration.”

    The study – a collaboration with researchers in Brazil who had previously used one of Bell’s algorithms – found that signal-to-noise ratio, a scientific measure that compares signal with background noise, improved for all skin tones when the researchers used a technique called “short-lag spatial coherence beamforming” while performing medical imaging. That technique, originally used for ultrasounds, can be applied to photoacoustic imaging.
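    For readers unfamiliar with the metric, signal-to-noise ratio is commonly reported in decibels by comparing the strength of the signal of interest with the variability of the background. The study’s exact formulation may differ, but a minimal sketch of one common amplitude-based definition, using synthetic data purely for illustration, looks like this:

    ```python
    import numpy as np

    def snr_db(signal_region, background_region):
        # One common amplitude-based definition: mean signal amplitude
        # divided by the standard deviation of the background, in decibels.
        signal = np.mean(np.abs(signal_region))
        noise = np.std(background_region)
        return 20.0 * np.log10(signal / noise)

    # Synthetic example: a noisy background with one brighter target region.
    rng = np.random.default_rng(0)
    background = rng.normal(0.0, 0.1, 500)   # clutter / background noise only
    target = background[:50] + 1.0           # region containing the target
    print(f"SNR: {snr_db(target, background):.1f} dB")
    ```

    A beamforming method that suppresses clutter lowers the background term in that ratio, which is why the reported improvement shows up as a higher SNR across skin tones.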

    The technique involves a combination of light and ultrasound technology, forming a new medical imaging modality, Theo Pavan, an author of the study and associate professor with the department of physics at University of São Paulo in Brazil, told CNN.

    “We really verified that it was much less sensitive to the skin color in terms of the quality of the image that you can get compared to the conventional methods that … is more commonly used by the community,” Pavan said.

    The study is “the first to objectively assess skin tone and to both qualitatively and quantitatively demonstrate that skin” photoacoustic signal “and clutter artifacts increase with epidermal melanin content,” the researchers wrote.

    The applications of photoacoustic technology vary, but with the researchers’ new developments, it may help diagnose health issues more accurately and equitably.

    “Right now, it’s increasing the application of the breast imaging,” and the next step would be to “increase the image quality overall,” said Guilherme Fernandes, an author of the study and a Ph.D. candidate in physics applied to medicine and biology at USP.

    The researchers’ work could also mean advancements for equity in health care at large.

    “In our scientific technology, there is a bias in terms of developing these products, for things that work well in lighter-skinned people,” said Dr. Camara Jones, a family physician, epidemiologist and former president of the American Public Health Association, who was not involved in the new study.

    “The biggest problem is that we use a thing we call race as a risk factor — as a health risk factor. And so race is the social interpretation of how people look in a race-conscious society. Race is not biology,” Jones explained. “We’ve mapped the human genome. We know there’s no basis in the human genome for racial sub-speciation.”

    This study isn’t the first to find skin color biases in medical technology. Medical equipment that leverages infrared sensing has also been found to not work as well on darker skin, since skin tone can interfere with the reflection of light.

    Many devices that were in frequent use during the Covid-19 pandemic, such as pulse oximeters and forehead thermometers, involve emitting and capturing light to make a measurement. But if that device isn’t calibrated for darker skin, the pigmentation could affect how the light is absorbed and how the infrared technology works.

    Bell said her research can hopefully pave a way to eliminating discrimination in health care and inspire others to develop technology that helps everyone, regardless of their skin tone.

    “I believe that with the ability to show that we can devise and develop technology — that doesn’t just work for one small subset of the population but works for a wider range of the population. This is very inspiring for not only my group, but for groups around the world to start thinking in this direction when designing technology. Does it serve the wider population?” Bell said.


  • AI tools make things up a lot, and that’s a huge problem | CNN Business




    CNN
     — 

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

    The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”
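    To make that point concrete, here is a deliberately tiny, hypothetical sketch of the core generation step: the model samples the next word from a learned probability distribution over plausible continuations, and nothing in that procedure checks whether the chosen continuation is true. The prompt and probabilities below are invented for illustration; a real large language model learns distributions over tens of thousands of tokens from its training text.

    ```python
    import random

    # Invented probabilities for a single prompt, for illustration only.
    # The sampler only knows which continuations are likely, not which are true.
    next_token_probs = {
        "The capital of Australia is": {
            "Canberra":  0.55,  # correct
            "Sydney":    0.40,  # plausible but wrong
            "Melbourne": 0.05,  # plausible but wrong
        }
    }

    def sample_next(prompt):
        dist = next_token_probs[prompt]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    prompt = "The capital of Australia is"
    for _ in range(5):
        print(prompt, sample_next(prompt))  # sometimes "Sydney": fluent, confident, wrong
    ```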

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors — like users asking ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year-and-a-half or two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Google’s antitrust showdown: What’s at stake for the internet search titan | CNN Business




    CNN
     — 

    Google will face off in court Tuesday against government officials who have accused the company of antitrust violations in its massive search business, kicking off a long-anticipated legal showdown that could reshape one of the internet’s most dominant platforms.

    The trial beginning this week in Washington before a federal judge marks the culmination of two ongoing lawsuits against Google that started during the Trump administration. Legal experts describe the actions as the country’s biggest monopolization case since the US government took on Microsoft in the 1990s.

    In separate complaints, the Justice Department and dozens of states accused Google in 2020 of abusing its dominance in online search by allegedly harming competition through deals with wireless carriers and smartphone makers that made Google Search the default or exclusive option on products used by millions of consumers. The complaints eventually consolidated into a single case.

    Google has maintained that it competes on the merits and that consumers prefer its tools because they are the best, not because it has moved to illegally restrict competition. Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    Now, the company is set to defend itself in a multiweek trial that could upend the way Google distributes its search engine to users. The case is expected to feature testimony from high-profile witnesses including former employees of Google and Samsung, along with executives from Apple, including senior vice president Eddy Cue. It is the first case to go to trial in a series of court challenges targeting Google’s far-reaching economic power, testing the willingness of courts to clamp down on large tech platforms.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Google President of Global Affairs Kent Walker, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    In its initial complaint, the US government alleged in part that Google pays billions of dollars a year to device manufacturers including Apple, LG, Motorola and Samsung — and browser developers like Mozilla and Opera — to be their default search engine and in many cases to prohibit them from dealing with Google’s competitors.

    As a result, the complaint alleges, “Google effectively owns or controls search distribution channels accounting for roughly 80 percent of the general search queries in the United States.”

    The lawsuit also alleges that Google’s Android operating system deals with device makers are anticompetitive, because they require smartphone companies to pre-install other Google-owned apps, such as Gmail, Chrome or Maps.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

    Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

    In January, meanwhile, the Biden administration launched another antitrust suit against Google, this one targeting the company’s advertising technology business and accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • iOS 17 release: See what’s new in iPhone features | CNN Business




    CNN
     — 

    iPhone users: Today’s the day to update to Apple’s latest operating system, iOS17, and unlock a slew of new features that promise to make the iPhone experience more personal and intuitive.

    Apple first teased iOS17 at its annual Worldwide Developer Conference in early June, but you may have missed out on some of the details as the tech giant also unveiled its much-anticipated mixed-reality Vision Pro headset that same day.

    iPhone users can update to iOS17 starting Monday by clicking on the Software Update section in the phone’s Settings app. Of course, many users have gotten in the habit of backing up important photos or files before downloading the latest software update – or waiting until the second version rolls out (likely in the coming weeks) if they’re afraid of any bugs that could come with the first version of a next-generation mobile operating system.

    Here are some of the buzziest and most-anticipated new features that iPhone users can expect from iOS17.

    Live Voicemail and FaceTime video messages are here

    One of the buzziest new features, dubbed Live Voicemail, will transcribe a caller’s message in real time, letting iPhone users decide whether to ignore the call or pick it up while the other person is still on the line and leaving their message.

    Unknown numbers will go directly to Live Voicemail when you have the “Silence Unknown Callers” setting turned on.

    Moreover, FaceTime will also now give users the ability to leave video messages if someone doesn’t pick up a video call.

    With iOS17, FaceTime calls will also get more expressive – with reactions such as hearts, balloons, fireworks and more effects that can be activated through simple gestures.

    Another update that may require some getting used to is saying just “Siri” to activate Apple’s voice assistant, instead of “Hey Siri.”

    Dropping “Hey” from Siri’s launch-phrase is meant to create a more natural way to activate the assistant. Moreover, Siri will also be able to better process back-to-back requests once activated.

    For example, instead of asking, “Hey Siri, how tall is Shaquille O’Neal?” and “Hey Siri, how old is Shaquille O’Neal?” you should be able to just say, “Siri, how tall is Shaquille O’Neal?” followed by, “How old is he?”

    The new NameDrop feature in iOS17 makes it easier than ever to exchange contact information with a new friend. iPhone users can simply bring their iPhones close to each other, as they would when AirDropping something, to share names and Contact Posters.

    The Contact Poster update is another new feature iPhone users have been getting hyped about. It allows iPhone users to design a custom image that will show up when they make calls. The update lets users choose their own caller ID photo and will give iPhone users a more consistent look no matter who they’re calling, Apple has said.

    iPhone users will also be able to personalize their contact card “poster” with a photo or memoji of choice.

    Autocorrect is also getting a comprehensive update, Apple said, with a transformer language model — or “a state-of-the-art on-device machine learning language model for word prediction,” according to the company.

    This refreshed design better supports typing and offers sentence-level autocorrections that can fix more types of grammatical mistakes. iPhone users will also now receive predictive text recommendations in-line as they type, making adding entire words or completing sentences as easy as tapping the space bar.

    The new iOS keyboard will also learn your habits over time, such as fixing words that you frequently misspell and leaving words alone that you intentionally thumbed in. As Craig Federighi, Apple’s head of software, put it in June: “In those moments where you just want to type a ducking word, well, the keyboard will learn it, too.”
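    As a rough illustration of how word prediction can adapt to a user’s habits, here is a hypothetical, heavily simplified sketch that ranks likely next words by how often they have followed the current word in a user’s own typing history. Apple’s on-device transformer model is far more capable, but the basic interface is the same: given what was typed so far, suggest likely continuations.

    ```python
    from collections import Counter, defaultdict

    # Toy word-prediction model built from a user's own typing history.
    # Counts which word tends to follow which, then suggests the most
    # frequent continuations. Purely illustrative; not Apple's model.
    history = "on my way home see you at home be there soon see you soon".split()

    next_words = defaultdict(Counter)
    for current, following in zip(history, history[1:]):
        next_words[current][following] += 1

    def suggest(word, k=2):
        """Return up to k likely next words after `word`."""
        return [w for w, _ in next_words[word].most_common(k)]

    print(suggest("you"))   # ['at', 'soon']
    print(suggest("see"))   # ['you']
    ```

    In the same spirit, a model that keeps counts of the words a user actually types can stop “correcting” terms it has seen often enough, which is the behavior the article describes.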

    New StandBy mode, Journal app and much more

    iOS17 also introduces StandBy, a new full-screen experience with glanceable information designed to be viewed from a distance when the iPhone is on its side and charging. For example, when charging your iPhone at your nightstand or desk, you can personalize the display to feature a clock, favorite photos, or your most-used widgets.

    Apple’s new Journal app, which aims to help users reflect and practice gratitude through the daily practice of journaling, will also be available in a software update later this year.

    And there’s a whole lot more: Check out Apple’s handy 17-page guide on all of the newest features coming to iOS17.


  • Zuckerberg unveils Quest 3 as Meta tries to stay ahead in the mixed reality headset game | CNN Business



    New York
    CNN
     — 

    Meta is moving forward in its efforts to dominate the mixed reality world with the new and improved Meta Quest 3.

    Unveiled by CEO Mark Zuckerberg at the company’s virtual Meta Connect event Wednesday, the headset starts at $500 and is a complete redesign of earlier models. The Quest 3, first announced in June, offers improved performance, immersive new mixed-reality features and a sleeker, more comfortable design.

    With a much stronger processor, higher-resolution display, revamped Touch Plus controllers and a 40% slimmer physique, the Quest 3 is a big step up from its predecessors. The Meta Quest 2 allows for strictly virtual reality, while the Meta Quest Pro has advanced passthrough cameras for seeing your actual surroundings, but it costs a whopping $1,000.

    Most importantly, the Quest 3 has support for Meta Reality, allowing users to enjoy mixed-reality experiences that blend the real world with the virtual one — for example, you can play a virtual piano on your real-life coffee table.

    “If you pick up a digital ball and throw it at the physical wall, it’ll bounce off it,” Zuckerberg said at Meta Connect Wednesday. “If someone’s shooting at you and you want to duck the fire, you just get behind your physical couch.”

    The Meta Quest virtual library is fully accessible with the Quest 3 – a library that now features VR-friendly Roblox, released Wednesday, and is set to add Xbox cloud gaming in December, giving gamers the chance to play titles like Halo and Minecraft on a large screen anywhere.

    The headset is available for preorder now and officially hits stores on Oct. 10 in two storage options (128GB and 512GB).

    Zuckerberg explains features of the new Quest 3 headset on September 27, 2023.

    Meta’s newest headset comes three years after the Quest 2, under a year after the Quest Pro and under four months after the Apple Vision Pro.

    Dubbed by Zuckerberg the “first mainstream mixed reality headset,” the Quest 3 is part of an ongoing arms race between two of tech’s biggest players to command the headset space – and part of Zuckerberg’s personal vision for a next-generation internet where users can interact with each other in virtual spaces resembling real life. It also comes in at a much cheaper price than the Apple alternative (which will cost you $3,499, to be exact), though it remains mainly a VR headset with mixed-reality options, while Apple’s product is a dedicated mixed reality experience.

    To get ahead of Apple’s June unveiling of the Vision Pro, Zuckerberg teased the Meta Quest 3 just days before its rival’s big announcement. But the two companies had a tense relationship even before Apple’s entry into the market. They have competed over news and messaging features, and their CEOs have traded jabs over data privacy and app store policies. Last February, Meta said it expected to take a $10 billion hit in 2022 from Apple’s move to limit how apps like Facebook collect data for targeted ads.

    Meta has until now been the dominant player in the headset market, but it has so far struggled to attract a mainstream audience for its VR headset products. The Wall Street Journal reported last year that Meta had just 200,000 active users in Horizon Worlds, its app for socializing in VR. And IDC estimates that just 10.1 million AR/VR headsets will ship globally across the entire market in 2023, far below the tens of millions of iPhones Apple sells each quarter.

    Morgan Stanley analysts called Apple’s Vision Pro a “moonshot” effort following its June announcement, saying the product “has the potential to become Apple’s next compute platform,” but that the company has “much to prove” before the headset’s launch next year.

    The biggest fight may not be between tech giants, but for the general public’s acceptance. Many analysts say the biggest hurdle to consumer adoption of mixed reality headsets is ensuring a wide range of potential use cases and experiences are available on the devices. While Meta has introduced features that let users play games, explore virtual worlds, watch YouTube videos, work out, chat with friends and more, it has yet to convince most consumers that the device is worthwhile.
