ReportWire

Tag: iab-artificial intelligence

  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business

    (CNN) —

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of data online, can now generate essays, song lyrics and other written work in response to user prompts. While there are clearly limits for how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from an increase in pay to receiving clear guidelines around working with streaming services. But as part of their demands, the WGA is also fighting to protect their livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and need to address now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor at the department of communications at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In the 2018 Marvel film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including “The Lord of the Rings” and “The Meg” have utilized AI, and the most recent Indiana Jones film used it to make Harrison Ford’s character appear younger. It’s also been used for color correction, finding footage more quickly during post-production and making improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written by ChatGPT, and the tool featured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with AI tools, as long as they’re used as launching pads or for research, and writers are still credited and utilized throughout the production process.

    “Screenwriters are not luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have written before, and find patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”
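
    To make the concern concrete, here is a minimal sketch of what such a prompt looks like when sent to a chat model programmatically, using OpenAI’s Python client. The model name and prompt wording are illustrative assumptions, not details reported in the article.

    ```python
    # Minimal sketch: asking a chat model for a scene "in the style of" a
    # named writer. Assumes the openai package (v1+) and an API key in the
    # OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": (
                    "Write the opening scene of a zombie apocalypse drama "
                    "in the style of David Mamet."
                ),
            }
        ],
    )
    print(response.choices[0].message.content)
    ```

    Nothing in such a request records whether the named writer consented, which is exactly the credit and consent gap August and Gunkel describe.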

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.

  • This could be Apple’s biggest product launch since the Apple Watch | CNN Business

    (CNN) —

    Apple may be just one day away from unveiling its most ambitious new hardware product in years.

    At its Worldwide Developers Conference, which kicks off Monday at its Cupertino, California, campus, Apple (AAPL) is widely expected to introduce a “mixed reality” headset that offers both virtual reality and augmented reality, a technology that overlays virtual images on live video of the real world.

    The highly anticipated release of an AR/VR headset would be Apple’s biggest hardware product launch since the debut of the Apple Watch in 2015. It could signal a new era for the company and potentially revolutionize how millions interact with computers and the world around them.

    But the headset is just one of many announcements expected at the developers event. Apple will also show off a long list of software updates that will shape how people use its most popular devices, including the iPhone and Apple Watch.

    Apple may also tease how it plans to incorporate AI into more of its products and services, and keep pace with a renewed arms race over the technology in Silicon Valley.

    The event will be livestreamed on Apple’s website and YouTube. It is set to start at 10:00 a.m. PT/1:00 p.m. ET.

    Here’s a closer look at what to expect:

    For years, Apple CEO Tim Cook has expressed interest in augmented reality. Now Apple finally appears ready to show off what it’s been working on.

    According to Bloomberg, the new headset, which could be called Reality One or Reality Pro, will have an iOS-like interface, display immersive video and include cameras and sensors to allow users to control it via their hands, eye movements and with Siri. The device is also rumored to have an outward-facing display that will show eye movements and facial expressions, allowing onlookers to interact with the person wearing the headset without feeling as though they’re talking to a robot.

    Apple’s new headset is expected to pack apps for gaming, fitness and meditation, and offer access to iOS apps such as Messages, FaceTime and Safari, according to Bloomberg. With the FaceTime option, for example, the headset will “render a user’s face and full body in virtual reality” to create the feeling that both parties are “in the same room.”

    The decision to unveil it at WWDC suggests Apple wants to encourage developers to build apps and experiences for the product in order to make it more compelling for customers and worth the hefty price tag.

    The company is reportedly considering a $3,000 price tag for the device, far more than most of its products, and a test of potential buyers’ appetite at a time of lingering uncertainty in the global economy. Other tech companies have struggled to find mainstream traction for headsets. And in the years that Apple has been rumored to be working on the product, the tech community has shifted its focus from VR to another buzzy technology: artificial intelligence.

    But if any company can prove skeptics wrong, it’s Apple. The company’s entry into the market combined with its vast customer base has the potential to breathe new life into the world of headsets.

    A mixed reality headset may not be the only piece of hardware to get stage time this year.

    Apple is expected to launch a new 15-inch MacBook Air packing the company’s M2 processor. The current size of the MacBook Air is 13 inches.

    Previously, users who wanted a larger-sized Apple laptop would need to buy a higher-end MacBook Pro.

    Considering WWDC is traditionally a software event, Apple executives will likely spend much of the time highlighting the changes and upgrades coming to its next-generation mobile operating systems, iOS 17 and iPadOS 17.

    While last year’s updates included a major design overhaul of the lock screen and iMessage, only minor changes are expected this year.

    With iOS 17, Apple is expected to double down on its efforts around health tracking by adding the ability to monitor everything from a user’s mood to how their vision may change over time. According to the Wall Street Journal, Apple will also launch a journaling app, not only as a way for users to log their thoughts but also their activity levels, which can then be analyzed to reveal how much time someone spends at home or out of the house.

    The new iOS 17 is also said to get a lock screen refresh: When positioned in horizontal mode, the display will highlight widgets tied to the calendar, weather and other apps, serving as a digital hub. (iPadOS 17 is also expected to get some of the same lock screen capabilities and health features.)

    Other anticipated upgrades include a watchOS update focused on quick glances at widgets, and more details about Apple’s next-generation CarPlay platform, which it initially teased last year.

    While much of the focus of the event may be on VR, Apple may also attempt to show how it’s keeping pace with Silicon Valley’s current obsession: artificial intelligence.

    Apple reportedly plans to preview an AI-powered digital coaching service, which will encourage people to exercise and improve their sleeping and eating habits. It’s unclear how it could work, but the effort comes at a time when Big Tech companies are racing to introduce AI-powered technologies in the wake of ChatGPT’s viral success.

    Apple may also demo and expand on some of its recently teased accessibility tools for the iPhone and iPad, including a feature that promises to replicate a user’s voice for phone calls after only 15 minutes of training.

    Most of the other Big Tech companies have recently outlined their AI strategies. This event may be Apple’s chance to do the same.

  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business

    Dortmund, Germany (CNN) —

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, co-founder of British chip designer Arm.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

    Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.
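
    For scale, the penalty structure described above is simple to compute. A quick illustrative sketch follows; the arithmetic and the example turnover figure are ours, not from the draft text.

    ```python
    # Illustrative only: the maximum penalty described in the draft AI Act,
    # i.e. the higher of a flat 40 million euros or 7% of worldwide annual
    # turnover.
    def max_ai_act_fine(annual_turnover_eur: float) -> float:
        return max(40_000_000.0, 0.07 * annual_turnover_eur)

    # A company with 2 billion euros in turnover faces up to 140 million:
    print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
    ```

    In other words, the 7% prong overtakes the flat €40 million once worldwide turnover passes roughly €571 million.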

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

    Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP (SAP) and Ericsson (ERIC) among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

    Dragos Tudorache, a Romanian member of the European Parliament who led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

    Brando Benifei, an Italian member of the European Parliament who also led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.

  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business

    (CNN) —

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.

  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business

    (CNN) —

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

    Le said her fiance, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”

    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

    Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”

  • A foldable phone, new tablet and lots of AI: What Google unveiled at its big developer event | CNN Business

    (CNN) —

    Google on Wednesday unveiled its latest lineup of hardware products, including its first foldable phone and a new tablet, as well as plans to roll out new AI features to its search engine and productivity tools.

    The updates, announced at its annual Google I/O developer conference, come as the company is simultaneously trying to push beyond its core advertising business with new devices while also racing to defend its search engine from the threat posed by a wave of new AI-powered tools.

    In a sign of where Google’s focus currently lies, the company spent more than 90 minutes teasing a long list of new AI features before mentioning hardware updates.

    Here’s what Google announced at the event.

    Google became the latest tech company to unveil a foldable smartphone. Like other foldables, the $1,799 Pixel Fold features a vertical hinge that can be opened to reveal a tablet-like display. But Google calls the Fold the thinnest foldable on the market.

    “It took some clever engineering work redesigning components like our speakers, our battery and haptics,” said George Hwang, a product manager at Google, on a call ahead of the announcement. The company packed a Pixel phone into a less than 6 mm body – about two thirds of the thickness of its other Pixel phones.

    The Pixel Fold is very much a phone first: when it’s unfolded, it opens up into a 7.6-inch screen, and moves on Google’s custom-built 180-degree hinge. That hinge mechanism is moved out entirely from under the display to improve its dust resistance and decrease the device’s overall thickness, according to the company.

    The Pixel Fold includes features you’d find on other Pixel phones, such as long exposure, unblur and magic eraser, which lets users remove unwanted or distracting objects. It also has Pixel Fold-specific tools such as dual-screen live translate, which lets a user communicate in another language with the help of fast audio and text translations on the outer screen.

    Google said it optimized its top apps to take advantage of the larger screen but “there’s still work to be done” because “optimizing for a new foldable form factor takes time,” Hwang said. “It’s a process that we’re committed to and it requires steep investment with our developer partners across Android,” Hwang added.

    Google is far from the first to embrace foldables, but it’s possible it waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen and most apps were not well optimized for the design.

    But even now, the future for foldables remains uncertain. Most apps are still not optimized for foldable devices; prices remain very high; and Google’s chief rival, Apple, has yet to embrace the option.

    Despite great consumer interest in foldable phones — and a resurgence in 90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to reach just shy of 2% in 2022.

    The Pixel Fold will be available in the US, UK, Germany and Japan. The company said the device will start shipping next month.

    A look at Google's Pixel 7a lineup

    On the surface, the 7a looks similar to the Pixel 7 and 7 Pro, with the same camera bar along the back. It comes with the typical advancements you’d expect to find with any smartphone upgrade – better display, advanced camera and longer-lasting battery. But the 7a now boasts a Tensor G2 processor and a Titan M2 security chip, which bring advanced processing and new artificial intelligence features. It also offers wireless charging for the first time on an A-series model.

    The Pixel lineup has long been known for its cameras, and the 7a is no exception. It’s packed with upgrades, including a 64-megapixel main camera – the largest sensor on a Pixel A series to date – which will help with improved image quality, low-light performance and other features. It also offers a new 13-megapixel ultra-wide camera for capturing even wider shots and a new 13-megapixel front camera. For the first time, each camera enables 4K video.

    The 7a also supports many significant Pixel features, including unblur, magic eraser and an improved Night Sight that’s two times faster and sharper than its predecessor. It also allows users to capture long exposure and enhanced zoom.

    The Pixel 7a comes in several colors, including charcoal, snow, sea and coral, and starts at $499 via the Google Store on May 10.

    The Pixel A series has long been aimed at cost-conscious buyers who want good features at a reasonable price, but its reach is limited. Google sells between eight and 10 million Pixel devices each year, according to ABI Research.

    “Generally, the smartphones were really meant for Google to showcase how software, and now AI capabilities, could be effectively optimized on hardware and improve the Android user experience,” said David McQueen, an analyst at ABI Research. “Google has purposely kept volume sales limited as it also has to be mindful of its relationship with other smartphone manufacturers that use the Android OS.”

    The Google Pixel Tablet

    While phones were a key focus at the event, Google also refreshed other parts of its hardware lineup.

    Google introduced the Pixel Tablet, which is intended for use around the house, from turning off the lights to setting the thermostat without getting off the couch.

    The tablet, which has rounded edges and corners, comes in three colors: porcelain, hazel and rose, and starts at $499. It will be available on June 20.

    Under the hood, the 11-inch tablet is powered by Google’s Tensor G2 chips, which bring long-lasting battery life and AI features to the device. It also offers a front-facing camera, an 8-megapixel rear camera, and a charging dock.

    Google is also moving forward with plans to bring AI chat features to its core search engine amid a renewed arms race over the technology in Silicon Valley.

    The company said it is introducing the next evolution of Google Search, which will use an AI-powered chatbot to answer questions “you never thought Search could answer” and to help get users the information they want quicker than ever.

    With the update, the look and feel of Google Search results will be noticeably different. When users type a query into the main search bar, they will automatically see an AI-generated response in a pop-up, in addition to the traditional results.

    Users can now sign up for the new Google Search, which will first launch in the United States, via the Google app or Chrome’s desktop browser. A limited number of users will have access to it in the weeks ahead, according to the company, before it scales upward.

    Google is expanding access to its existing chatbot Bard, which operates outside the search engine and can help users do tasks such as outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    The tool, which was previously available to early users via a waitlist only in the US, will soon be available for all users in 120 countries and 40 languages.

    Google is also launching extensions for Bard from its own services, such as Gmail, Sheets and Docs, allowing users to ask questions and collaborate with the chatbot within the apps they’re using.

    Google also announced PaLM 2, its latest large language model to rival ChatGPT-creator OpenAI’s GPT-4.

    The move marks a big step forward for the technology that powers the company’s AI products: PaLM 2 promises to be better at logic, common-sense reasoning and mathematics, and can also generate specialized code in different programming languages.

  • The US Senate is working to get up to speed on AI basics ahead of any legislation | CNN Business

    Washington (CNN) —

    The US Senate is inching forward on a plan to regulate artificial intelligence, after months of seeing how ChatGPT and similar tools stand to supercharge — or disrupt — wide swaths of society.

    But despite outlining broad contours of the plan, senators are still likely months away from introducing a comprehensive bill setting guardrails for the industry, let alone passing legislation and getting it signed into law. The deliberate pace of progress contrasts with the blistering speed with which companies and organizations have embraced generative AI, and the flood of investment into the industry.

    The Senate’s plan calls for briefing lawmakers on the basic facts of artificial intelligence over the summer, before beginning to consider legislation in the following months, even as some senators have begun to pitch proposals.

    The efforts reflect how, despite urgent calls by civil society groups and industry for guardrails on the technology, many lawmakers are still getting up to speed.

    To help educate members, Senate Majority Leader Chuck Schumer on Tuesday announced a series of three senators-only information sessions to take place in the coming weeks.

    The closed-door briefings will cover topics ranging from AI’s current capabilities and competition in AI development to how US national security and defense agencies are already putting the technology to use. The latter session, Schumer said, will be the first-ever classified senators’ briefing on AI.

    “The Senate must deepen our expertise in this pressing topic,” Schumer wrote in a letter to colleagues announcing the briefings. “AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement.”

    Schumer had earlier kicked off a high-level push for AI legislation in April, when he proposed shaping any eventual bill around four principles promoting transparency and democratic values.

    The information sessions are expected to wrap up by the time Congress breaks for August recess, according to South Dakota Republican Sen. Mike Rounds, one of three other senators Schumer has tapped to lead on a comprehensive AI bill.

    By that point, Rounds told reporters Wednesday on the sidelines of a Washington conference, there may be “lots of different ideas floating” but not necessarily a bill to speak of.

    Schumer, Rounds and the other leading lawmakers on the AI working group — New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — haven’t settled on how to coordinate various legislative proposals yet.

    Options include forming a select committee to craft a comprehensive AI bill, or “splitting out and having lots of different committees come up with different pieces of legislation,” Rounds said.

    The AI hype has produced high-profile hearings and scattershot policy proposals. Last month, OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee, wowing lawmakers by asking for regulation and by giving a technical demonstration to enthralled members of the House the evening before.

    Sen. Michael Bennet has introduced legislation to create a new federal agency with authority to regulate AI, for example. And on Wednesday, Sen. Josh Hawley unveiled his own framework for AI legislation that called for letting Americans sue companies for harms created by AI models.

    Rounds told reporters Schumer has not set a timeframe for coming up with AI legislation, adding that the current goal is to allow ideas to “melt for a while.”

    But he predicted that, given AI’s expected impact on many agencies and industries, the legislative process would inevitably be wide-ranging and open, reflecting input from many sources, akin to how the Senate crafts the annual defense policy bill known as the National Defense Authorization Act.

    “You bring in all of these ideas, and then you very quietly start to meld this bill together, kind of behind the scenes in a way,” he said. “You go through a committee process in which you deliver a bill that says this could pass, and then you allow other members to come in and offer their amendments to it as well. That has worked well year-in and year-out for the NDAA.”

  • AI is already linked to layoffs in the industry that created it | CNN Business

    (CNN) —

    Many have raised alarms about the potential for artificial intelligence to displace jobs in the years ahead, but it’s already causing upheaval in one industry where workers once seemed invincible: tech.

    A small but growing number of tech firms have cited AI as a reason for laying off workers and rethinking new hires in recent months, as Silicon Valley races to adapt to rapid advances in the technology being developed in its own backyard.

    Chegg, an education technology company, disclosed in a regulatory filing last month that it was cutting 4% of its workforce, or about 80 employees, “to better position the Company to execute against its AI strategy and to create long-term, sustainable value for its students and investors.”

    IBM CEO Arvind Krishna said in an interview with Bloomberg in May that the company expects to pause hiring for roles it thinks could be replaced with AI in the coming years. (In a subsequent interview with Barron’s, however, Krishna said that he felt his earlier comments were taken out of context and stressed that “AI is going to create more jobs than it takes away.”)

    And in late April, file-storage service Dropbox said that it was cutting about 16% of its workforce, or about 500 people, also citing AI.

    In its most-recent layoffs report, outplacement firm Challenger, Gray & Christmas said 3,900 people were laid off in May due to AI, marking its first time breaking out job cuts based on that factor. All of those cuts occurred in the tech sector, according to the firm.

    With these moves, Silicon Valley may not only be leading the charge in developing AI but also offering an early glimpse into how businesses may adapt to those tools. Rather than render entire skill sets obsolete overnight, as some might fear, the more immediate impact of a new crop of AI tools appears to be forcing companies to shift resources to better take advantage of the technology — and placing a premium on workers with AI expertise.

    “Over the last few months, AI has captured the world’s collective imagination, expanding the potential market for our next generation of AI-powered products more rapidly than any of us could have anticipated,” Dropbox CEO Drew Houston wrote in a note to staff announcing the job cuts. “Our next stage of growth requires a different mix of skill sets, particularly in AI and early-stage product development.”

    In response to a request for comment on how its realignment around AI is playing out, Dropbox directed CNN to its careers page, where it is currently hiring for multiple roles focused on “New AI Initiatives.”

    Dan Wang, a professor at Columbia Business School, told CNN that AI “will cause organizations to restructure,” but also doesn’t see it playing out as machines replacing humans just yet.

    “AI, as far as I see it, doesn’t necessarily replace humans, but rather enhances the work of humans,” Wang said. “I think that the kind of competition that we all should be thinking more about is that human specialists will be replaced by human specialists who can take advantage of AI tools.”

    The AI-driven tech layoffs come amid broader cuts in the industry. Many tech companies have been readjusting to an uncertain economic environment and waning levels of demand for digital services more than three years into the pandemic.

    Some 212,294 workers in the tech industry have been laid off in 2023 alone, according to data tracked by Layoffs.fyi, already surpassing the 164,709 recorded in 2022.

    But in the shadow of those mass layoffs, the tech industry has also been gripped by an AI fervor and invested heavily in AI talent and tech.

    In January, just days after Microsoft announced plans to lay off 10,000 employees as part of broader cost-cutting measures, the company also confirmed it was making a “multibillion dollar” investment into OpenAI, the company behind ChatGPT. And in March, in the same letter to staff Mark Zuckerberg used to announce plans to lay off another 10,000 workers (after cutting 11,000 positions last November), the Meta CEO also outlined plans for investing heavily in AI.

    Even software engineers in Silicon Valley who once seemed uniquely in demand now appear to be at risk of losing their jobs, or losing out on salary gains to those with more AI expertise.

    Roger Lee, a startup founder who has been tracking tech industry layoffs via his website Layoffs.fyi, also runs Comprehensive.io, which examines job listings and compensation data across some 3,000 tech companies.

    Lee told CNN that a recent analysis of data from Comprehensive.io shows the average salary for a senior software engineer specializing in artificial intelligence or machine learning is 12% higher than for those who don’t specialize in that area, a data point he dubs “the AI premium.” The average salary for a senior software engineer specializing in AI or machine learning has also increased by some 4% since the beginning of the year, whereas the average salary for senior software engineers as a whole has stayed flat, he said.

    Lee noted Dropbox as an example of a company offering notably high pay for AI roles, citing a base salary listing of $276,300 to $373,800 for a Principal Machine Learning Engineer role. (By comparison, Comprehensive.io’s data puts the current average salary for a senior software engineer at $171,895.)
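
    Combining those two figures gives a rough sense of the premium in dollar terms. The back-of-the-envelope arithmetic below is ours, not Lee’s:

    ```python
    # Rough illustration of the reported "AI premium" applied to the
    # reported average senior software engineer salary.
    avg_senior_swe = 171_895   # Comprehensive.io average, per the article
    ai_premium = 0.12          # reported premium for AI/ML specialization

    implied_ai_salary = avg_senior_swe * (1 + ai_premium)
    print(f"${implied_ai_salary:,.0f}")  # -> $192,522
    ```

    Even that implied figure of roughly $192,500 sits well below the Dropbox listing cited above, underscoring how wide the range for specialized AI roles can be.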

    Those looking to thrive in the tech industry and beyond may need to brush up on their AI skills.

    Wang, the professor at Columbia Business School, told CNN that starting this past spring semester, he began requiring his students to familiarize themselves with the new crop of generative AI tools on the market. “That type of exposure I think is absolutely critical for setting themselves up for success once they graduate,” Wang said.

    It’s not that everyone needs to become AI specialists, Wang added, but rather that workers should know how to use AI tools to become more efficient at whatever they’re doing.

    “That’s where the kind of a battleground for talent is really shifting,” Wang said, “as differentiation in terms of talent comes from creative and effective ways to integrate AI into daily tasks.”

  • Amazon is ‘investing heavily’ in the technology behind ChatGPT | CNN Business

    (CNN) —

    Amazon wants investors to know it won’t be left behind in the latest Big Tech arms race over artificial intelligence.

    In a letter to shareholders Thursday, Amazon (AMZN) CEO Andy Jassy said the company is “investing heavily” in large language models (LLMs) and generative AI, the same technology that underpins ChatGPT and other similar AI chatbots.

    “We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” Jassy wrote in his letter to shareholders.

    The remarks, which were part of Jassy’s second annual letter to shareholders since taking over as CEO, hint at the pressure that many tech companies feel to explain how they can tap into the rapidly evolving marketplace for AI products. Since ChatGPT was released to the public in late November, Google (GOOG), Facebook (FB) and Microsoft (MSFT) have all talked up their growing focus on generative AI technology, which can create compelling essays, stories and visuals in response to user prompts.

    Amazon’s goal, according to Jassy, is to offer less costly machine learning chips so that “small and large companies can afford to train and run their LLMs in production.” Large language models are trained on vast troves of data in order to generate responses to user prompts.

    “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, most companies don’t want to go through that,” Jassy said in an interview with CNBC on Thursday morning.

    “What they want to do is they want to work off of a foundational model that’s big and great already, and then have the ability to customize it for their own purposes,” Jassy told CNBC.

    With that in mind, Amazon on Thursday unveiled a new service called Bedrock. It essentially makes foundation models (large models that are pre-trained on vast amounts of data) from AI21 Labs, Anthropic, Stability AI and Amazon accessible to clients via an API, Amazon said in a blog post.
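
    For developers, the pitch is that calling one of these hosted foundation models looks like any other AWS API call. Below is a minimal sketch using boto3; Bedrock was brand new when this piece ran, so the client name, model ID and payload shape reflect the interface AWS later shipped and should be read as assumptions here, not details from the announcement.

    ```python
    # Minimal sketch: invoking a Bedrock-hosted foundation model via boto3.
    # The model ID and prompt format follow Anthropic's Claude models on
    # Bedrock and are illustrative, not taken from the article.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "prompt": "\n\nHuman: Draft a short product announcement.\n\nAssistant:",
            "max_tokens_to_sample": 300,
        }),
    )
    print(json.loads(response["body"].read())["completion"])
    ```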

    Jassy told CNBC he thinks Bedrock “will change the game for people.”

    In his letter to shareholders, Jassy also touted AWS’s CodeWhisperer, another AI-powered tool which he said “revolutionizes developer productivity by generating code suggestions in real time.”

    “I could write an entire letter on LLMs and Generative AI as I think they will be that transformative, but I’ll leave that for a future letter,” Jassy wrote. “Let’s just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon.”

    In the letter, Jassy also reflected on leading Amazon through “one of the harder macroeconomic years in recent memory,” as the e-commerce giant cut some 27,000 jobs as part of a major bid to rein in costs in recent months.

    “There were an unusual number of simultaneous challenges this past year,” Jassy said in the letter, before outlining steps Amazon took to rethink certain free shipping options, abandon some of its physical store concepts and significantly reduce overall headcount.

    Amazon disclosed in a securities filing Thursday that Jassy’s pay package last year was valued at some $1.3 million, and that the CEO did not receive any new stock awards in 2022. (When Jassy took over as CEO in 2021, he was awarded a pay package composed mostly of stock awards that valued his total compensation at some $212 million.)

    Despite the challenges at Amazon, however, Jassy said in his letter that he finds himself “optimistic and energized by what lies ahead.” Jassy added: “I strongly believe that our best days are in front of us.”

  • Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business

    Hong Kong (CNN) —

    Police in China have detained a man they say used ChatGPT to create fake news and spread it online, in what state media has called the country’s first criminal case related to the AI chatbot.

    According to a statement from police in the northwest province of Gansu, the suspect allegedly used ChatGPT to generate a bogus report about a train crash, which he then posted online for profit. The article received about 15,000 views, the police said in Sunday’s statement.

    ChatGPT, developed by Microsoft (MSFT)-backed OpenAI, is banned in China, though internet users can use virtual private networks (VPNs) to access it.

    Train crashes have been a sensitive issue in China since 2011, when authorities faced pressure to explain why state media had failed to provide timely updates on a bullet train collision in the city of Wenzhou that resulted in 40 deaths.

    Gansu authorities said the suspect, surnamed Hong, was questioned in the city of Dongguan in southern Guangdong province on May 5.

    “Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated,” the Gansu police said in the statement.

    “His behavior amounted to picking quarrels and provoking trouble,” they added, explaining the offense that Hong was accused of committing.

    Police said the arrest was the first in Gansu since China’s Cyberspace Administration enacted new regulations in January to rein in the use of deep fakes. State broadcaster CGTN says it was the country’s first arrest of a person accused of using ChatGPT to fabricate and spread fake news.

    Formally known as deep synthesis, deep fake refers to highly realistic textual and visual content generated by artificial intelligence.

    The new legislation bars users from generating deep fake content on topics already prohibited by existing laws on China’s heavily censored internet. It also outlines takedown procedures for content considered false or harmful.

    The arrest also came amid a 100-day campaign launched by the internet branch of the Ministry of Public Security in March to crack down on the spread of internet rumors.

    Since the beginning of the year, Chinese internet giants such as Baidu (BIDU) and Alibaba (BABA) have sought to catch up with OpenAI, launching their own versions of the ChatGPT service.

    Baidu unveiled “Wenxin Yiyan,” or “ERNIE Bot,” in March. Two months later, Alibaba launched “Tongyi Qianwen,” which roughly translates as “seeking truth by asking a thousand questions.”

    In draft guidelines issued last month to solicit public feedback, China’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities, as well as provide details about the scale and type of data they use, their basic algorithms and other technical information.


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business

    CNN —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, these companies are drawing mounting legal scrutiny, both over copyrighted works swept up in those data sets and over their apparent use of personal and possibly sensitive data from everyday users, including children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits of how Google typically indexes online data to support its core search engine with the alleged harms of the company scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business

    CNN —

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine under the project name “Magi,” according to the Times. The project, which reportedly has about 160 people working on it, aims to change the way results appear in Google Search and will include an AI chat tool that can answer questions. It is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

    Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on defense for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about the James Webb Space Telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Microsoft’s Bing AI demo was also called out for several errors, including an apparent failure to differentiate between types of vacuum cleaners; it even made up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”


  • OpenAI CEO Sam Altman to testify before Congress | CNN Business

    Washington (CNN) —

    OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee.

    During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year.

    The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business

    New York (CNN) —

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature that can later be used to verify whether the image is organic or has been manipulated or generated by AI.
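
    To make the idea concrete, here is a minimal sketch of point-of-capture signing in Python using the widely available cryptography package. The metadata fields, device name and key handling are illustrative assumptions, not Truepic’s actual design, which among other things must keep the signing key out of the user’s reach.

    ```python
    import json
    from datetime import datetime, timezone

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Illustrative capture metadata; real provenance tools record richer data.
    metadata = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device": "ExampleCam 1.0",  # hypothetical device name
        "location": {"lat": 40.71, "lon": -74.01},
    }

    image_bytes = b"<raw image bytes from the camera sensor>"  # placeholder

    # Sign the image and its metadata together so neither can change alone.
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(payload)

    # A verifier holding the matching public key can confirm the pair is intact.
    try:
        private_key.public_key().verify(signature, payload)
        print("Image and metadata verified")
    except InvalidSignature:
        print("Image or metadata was altered after capture")
    ```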

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown: a percentage indicating the likelihood that the image is real or AI-generated, produced by models trained on large amounts of data.
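
    The workflow these platforms describe looks roughly like the sketch below. The endpoint, field names and response shape are invented for illustration; the real Reality Defender and Hive Moderation APIs require accounts and differ in their details.

    ```python
    import requests

    # Hypothetical detection endpoint; not a real Reality Defender or Hive URL.
    API_URL = "https://api.example-detector.test/v1/images/scan"

    with open("suspect_image.jpg", "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()

    # Assumed response shape: {"ai_likelihood": 0.53, "signals": ["face warping"]}
    result = response.json()
    print(f"Likelihood of AI generation: {result['ai_likelihood']:.0%}")
    for signal in result.get("signals", []):
        print(f"- detected artifact: {signal}")
    ```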

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 per 1,000 images and offers “annual contract deals” at a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when it is first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
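
    As a rough illustration of the kind of information a content credential carries, the sketch below writes a simplified manifest as JSON. Real C2PA manifests are signed binary structures embedded in the file itself; the “c2pa.actions” assertion label comes from the public C2PA specification, while the other values here are invented.

    ```python
    import json

    # Simplified, illustrative content-credential manifest. A real C2PA
    # manifest is a signed binary structure embedded in the image file.
    manifest = {
        "claim_generator": "ExampleEditor/1.0",  # hypothetical editing tool
        "assertions": [
            {
                "label": "c2pa.actions",  # real assertion label from the spec
                "data": {"actions": [{"action": "c2pa.created"},
                                     {"action": "c2pa.color_adjustments"}]},
            },
            {
                "label": "stds.exif",
                "data": {"Make": "ExampleCam", "DateTimeOriginal": "2023-06-01"},
            },
        ],
        "signature": "<issuer-signed digest binding this record to the image>",
    }

    print(json.dumps(manifest, indent=2))
    ```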

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called “About this image” that offers users the ability to see when images found on its site were first indexed by Google, where they might have first appeared and where else they can be found online. The company also announced that every AI-generated image created by its tools will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • Sarah Silverman sues OpenAI and Meta alleging copyright infringement | CNN Business

    CNN —

    Comedian Sarah Silverman and two authors are suing Meta and ChatGPT-maker OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    The pair of lawsuits against OpenAI and Facebook-parent Meta were filed in a San Francisco federal court on Friday, and are both seeking class action status. Silverman, the author of “The Bedwetter,” is joined in filing the lawsuits by fellow authors Christopher Golden and Richard Kadrey.

    A new crop of AI tools has gained tremendous attention in recent months for its ability to generate written work and images in response to user prompts. The large language models underpinning these tools are trained on vast troves of online data. But this practice has raised concerns that the models may be sweeping up copyrighted works without permission – and that those works could ultimately be used to train tools that upend the livelihoods of creatives.

    The complaint against OpenAI claims that “when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works.” The authors “did not consent to the use of their copyrighted books as training material for ChatGPT,” according to the complaint.

    The complaint against Meta similarly claims that the company used the authors’ copyrighted books to train LLaMA, the set of large language models released by Meta in February. The suit claims that much of the material used to train Meta’s language models “comes from copyrighted works—including books written by Plaintiffs—that were copied by Meta without consent, without credit, and without compensation.”

    The suit against Meta also alleges that the company accessed the copyrighted books via an online “shadow library” website that includes a large quantity of copyrighted material.

    Meta declined to comment on the lawsuit. OpenAI did not immediately respond to a request for comment.

    The legal action from Silverman isn’t the first to focus on how large language models are trained. A separate lawsuit filed against OpenAI last month alleged the company misappropriated vast swaths of peoples’ personal data from the internet to train its AI tools. (OpenAI did not respond to a request for comment on the suit.)

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needed to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.


  • Universal Music Group calls AI music a ‘fraud,’ wants it banned from streaming platforms. Experts say it’s not that easy | CNN Business

    New York (CNN) —

    Universal Music Group — the music company representing superstars including Sting, The Weeknd, Nicki Minaj and Ariana Grande — has a new Goliath to contend with: artificial intelligence.

    The music group sent urgent letters in April to streaming platforms, including Spotify (SPOT) and Apple Music, asking them to block artificial intelligence platforms from training on the melodies and lyrics of its artists’ copyrighted songs.

    The company has “a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” a spokesperson from Universal Music Group, or UMG, told CNN. “We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

    The move by UMG, first reported by the Financial Times, aims to stop artificial intelligence from creating an existential threat to the industry.

    Artificial intelligence, and specifically AI music, learns either by training on existing works found on the internet or from a library of music supplied to the AI by humans.

    UMG says it is not against the technology itself, but rather AI that is so advanced it can recreate melodies and even musicians’ voices in seconds. That could threaten UMG’s deep library of music and artists, which generates billions of dollars in revenue.

    “UMG’s success has been, in part, due to embracing new technology and putting it to work for our artists — as we have been doing with our own innovation around AI for some time already,” UMG said in a statement Monday. “However, the training of generative AI using our artists’ music … begs the question as to which side of history all stakeholders in the music ecosystem want to be on.”

    The company said AI that uses artists’ music violates UMG’s agreements and copyright law. UMG has been sending requests to streamers asking them to take down AI-generated songs.

    “I understand the intent behind the move, but I’m not sure how effective this will be as AI services will likely still be able to access the copyrighted material one way or another,” said Karl Fowlkes, an entertainment and business attorney at The Fowlkes Firm.

    No regulations currently dictate what material AI can and cannot train on. But last month, in response to individuals seeking copyright protection for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical and artistic works made with AI.

    “In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form,’” the new guidance says.

    The copyright will be determined on a case-by-case basis, the guidance continued, based on how the AI tool operates and how it was used to create the final piece or work.

    The US Copyright Office announced it will also seek public input on how the law should apply to copyrighted works that AI trains on, and how the office should treat those works.

    “AI companies using copyrighted works to train their models to create similar works is exactly the type of behavior the copyright office and courts should explicitly ban. Original art is meant to be protected by law, not works created by machines that used the original art to create new work,” said Fowlkes.

    But according to AI experts, it’s not that simple.

    “You can flag your site not to be searched. But that’s a request — you can’t prevent it. You can just request that someone not do it,” said Shelly Palmer, Professor of Advanced Media at Syracuse University.

    For example, a website can publish a robots.txt file, which works like a guardrail telling “search engine crawlers” which URLs on the site they can access, according to Google. But it is a request, not a full-stop, keep-out option.
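
    Python’s standard library can read these files, which makes the point easy to demonstrate: the sketch below only reports what a site asks of crawlers and cannot force compliance. The domain and the second crawler name are placeholders.

    ```python
    from urllib.robotparser import RobotFileParser

    # Placeholder site; point this at a real domain in practice.
    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # Ask whether each crawler may fetch a given page under the file's rules.
    for agent in ("Googlebot", "ExampleAIBot"):
        ok = rp.can_fetch(agent, "https://example.com/lyrics/song.html")
        print(f"{agent}: {'allowed' if ok else 'disallowed'}")

    # Nothing here is enforced: a crawler that ignores robots.txt can still
    # download the page, which is exactly the limitation Palmer describes.
    ```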

    Grammy-winning DJ and producer David Guetta proved in February just how easy it is to create new music using AI. Using ChatGPT for lyrics and Uberduck for vocals, Guetta was able to create a new song in an hour.

    The result was a rap with a voice that sounded exactly like Eminem. He played the song at one of his shows in February, but said he would never release it commercially.

    “What I think is very interesting about AI is that it’s raising a question of what is it to be an artist,” Guetta told CNN last month.

    Guetta believes AI is going to have a significant impact on the music industry, so he’s embracing it instead of fighting it. But he admits there are still questions about copyright.

    “That is an ethical problem that needs to be addressed because it sounds crazy to me that today I can type lyrics and it’s going to sound like Drake is rapping it, or Eminem,” he said.

    And that is exactly what UMG wants to avoid. The music group likens AI music to “deep fakes, fraud, and denying artists their due compensation.”

    “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” the UMG statement said.

    Music streamers Spotify, Apple Music and Pandora did not return requests for comment.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business

    New York (CNN) —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected him with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman looked to back candidates who aligned with his values, which include lowering the cost of living, promoting clean energy and shifting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • Amazon is trying to make it simpler to sift through thousands of user reviews | CNN Business

    CNN —

    Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase.

    On the Amazon product page for Apple’s third-generation AirPods, for example, the AI feature now sums up the more than 4,000 user ratings to note that the wireless headphones “have received positive feedback from customers regarding their sound quality and battery life.” But, it adds, “mixed opinions were also expressed about the performance, durability, fit, comfort, and value of the headphones.”

    The summary features the disclaimer: “AI-generated from the text of customer reviews.”
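
    Amazon has not disclosed how its summarizer is built, but the general pattern is straightforward to sketch with any off-the-shelf large language model. The example below uses OpenAI’s public Python SDK purely as a stand-in; the model name, prompt and reviews are assumptions, not details of Amazon’s system.

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A handful of invented customer reviews for one product.
    reviews = [
        "Great sound quality and the battery lasts all day.",
        "Comfortable fit, but the case feels flimsy.",
        "Stopped charging after two months. Disappointed.",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable LLM would do
        messages=[
            {"role": "system",
             "content": "Summarize these customer reviews in one short "
                        "paragraph covering the most common pros and cons."},
            {"role": "user", "content": "\n".join(reviews)},
        ],
    )

    print(response.choices[0].message.content)
    ```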

    “We are significantly investing in generative AI across all of our businesses,” Amazon said in a statement to CNN on Monday, referring to the technology that underpins services such as ChatGPT.

    The effort, first reported by CNBC, marks Amazon’s latest attempt to incorporate generative AI into its services and has the potential to help customers quickly determine the pros and cons of various products. But there are limits.

    For starters, the AI wording is not always intuitive. In the AirPods review, for example, the blurb says “all customers who mentioned stability had a negative opinion about it.”

    As with other generative AI tools, which are trained on vast troves of online data to come up with responses, there are also concerns about the summaries’ tone and accuracy, and the technology’s potential to “hallucinate” details.

    “Given that generative AI is based on probability, mistakes are possible … and summaries may not be an accurate reflection of customer reviews,” said Reece Hayden, a senior analyst at market research firm ABI Research. “The possibility of hallucinations will be a worry for customers and merchants.”

    Hayden also questions whether the tool will be able to decipher fraudulent or bot-created reviews. “These reviews will be treated equally and therefore the summary may reflect fake, non-customer reviews,” Hayden said. (Amazon didn’t immediately respond to a request for comment on this possibility.)

    Amazon isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Meanwhile, eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

    Amazon CEO Andy Jassy said in a letter to shareholders in April that the company remains focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.” The company is also reportedly working on adding ChatGPT-like search capabilities for its e-commerce store, and it’s rumored to be planning to use generative AI to bring conversational language to a home robot.

    Last month, Dave Limp, senior VP of devices and services, told CNN there is great interest in bringing generative AI to virtual assistant Alexa, so users can interact with the technology in a more fluid, natural way.


  • Indian tech giant Wipro will invest $1 billion in AI, including training all staff | CNN Business

    Hong Kong (CNN) —

    Wipro, one of India’s top providers of software services, wants everyone on staff to know how to use artificial intelligence.

    The IT giant announced Wednesday it would spend $1 billion on improving its artificial intelligence capabilities over the next three years, including training its entire staff of 250,000 people across 66 countries in the fast-moving technology.

    Wipro (WIT) said it plans to run workshops “on AI fundamentals and responsible use of AI over the course of the next 12 months, and will continue to provide more customized, ongoing training for employees in AI-specialized roles.”

    Wipro is one of India’s biggest outsourcing firms, specializing in IT and consulting services. Its move comes as generative AI, the technology that underpins popular platforms such as ChatGPT, has taken the world by storm.

    “With the emergence of generative AI, we expect a fundamental shift up ahead, for all industries,” Wipro CEO Thierry Delaporte said in the statement.

    The company added it was launching a software system to integrate AI into every platform and tool used internally and offered to clients, building on efforts in the space that it began about a decade ago.

    Businesses are increasingly using AI to either bolster or replace tasks usually carried out by humans.

    This week, the CEO of an Indian startup made headlines for laying off about 90% of his support staff, saying the company had built an AI-powered chatbot that could process customer service requests faster than employees.


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business

    Washington (CNN) —

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms such as fraud and scams, and the US government has substantial authority under existing law to crack down on those harms, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for its ability to generate convincing emails, stories and essays as well as images, audio and videos. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to craft specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.
