ReportWire

Tag: search engines

  • Google Search Could Change Forever in the UK


    “The decision to formally designate Google with Strategic Market Status is an important step to improving competition in digital markets,” argues Rocio Concha, director of policy and advocacy at UK consumer watchdog Which?. “Online search is evolving as gen AI tools become more widely used, but the CMA must still act to tackle the harmful dominance Google has now and to promote competition between gen AI search tools.”

    The CMA claims that Google Search accounts for more than 90 percent of all general search queries in the UK, and that over 200,000 firms in the UK collectively spent more than £10 billion ($13.3 billion) on Google search advertising in 2024.

    “Designating Google with SMS enables us to consider proportionate, targeted interventions to ensure that general search services are open to effective competition, and that consumers and businesses that rely on Google can have confidence that they are treated fairly,” the CMA decision report reads.

    In a statement shared with WIRED in response to the CMA’s decision, Google’s senior director of competition Oliver Bethell said that many of the ideas for interventions raised in this process would “inhibit UK innovation and growth, potentially slowing product launches at a time of profound AI-based innovation.” The statement continued: “Others pose direct harm to businesses, with some warning that they may be forced to raise prices for customers.”

    This is not a surprising response, says Greg Dowell, senior competition knowledge lawyer at law firm Macfarlanes. “I think we can expect Google and all the other big tech firms that are being subjected to these new rules to try and defend their practices on the basis that they are pro-consumer,” says Dowell. “Ultimately it is natural that Google and other firms in this position don’t want to be constrained in what they can do when it comes to new product development.”

    The new regulation will also affect Google Search’s “News” tab and its “Top Stories” carousel, as well as Google Discover. Google News, the company’s stand-alone news product, and AI chatbot Gemini are not affected, the CMA says.

    Dowell claims that implementing this roadmap might take a number of months. “The CMA may go further than the EU has done with the [Digital Markets Act], particularly with regards to restrictions relating to Google’s AI services and how they’re integrated into Google search,” he explains.

    “The CMA essentially has a huge degree of flexibility in the interventions that it can seek to impose, and so it can continually react to developments as they occur. So that’s one benefit of the UK digital markets regulation regime, particularly when you compare it to the situation in the EU, where these sorts of rules are fixed in the regulation itself.”

    Natasha Bernal

  • SearchGPT Is OpenAI’s Direct Assault on Google


    After months of speculation about its search ambitions, OpenAI has revealed SearchGPT, a “prototype” search engine that could eventually help the company tear off a slice of Google’s lucrative business.

    OpenAI said that the new tool would help users find what they are looking for more quickly and easily by using generative AI to gather links and answer user queries in a conversational tone. SearchGPT could eventually be integrated into OpenAI’s popular ChatGPT chatbot. In addition to a broader web search, the search engine will tap into information provided by publishers who have signed deals giving OpenAI access to their data.

    Kayla Wood, a spokesperson for OpenAI, declined to provide a SearchGPT demo or an interview about the new tool for WIRED, but confirmed that the company has already given access to unnamed partners and publishers and improved aspects of the search engine based on their feedback.

    Microsoft, an investor in OpenAI, was one of the first companies to release a generative AI search engine to the public when it launched an AI-powered version of Bing back in 2023 that relied on OpenAI’s large language models. That AI search experience from Microsoft has since been rebranded to Copilot.

    Since then, multiple competitors, like Google and Perplexity, have launched their own AI search experiences for users. Google’s AI Overviews provide AI-generated summaries of articles, often at the top of news results. OpenAI’s SearchGPT appears more similar to Perplexity’s approach, where the chatbot provides an accompanying list of relevant links and the user can ask follow-up questions.

    After OpenAI first introduced ChatGPT in November 2022, early users saw in the chatbot’s ability to dig up and summarize information from the web a potential replacement for conventional web search. The shortcomings of large language models make chatbots imperfect search tools, however. The models draw on training data that is often months or years out of date, and when unsure of an answer they will make up facts.

    Microsoft’s early efforts with Bing were far from a success, with the AI-powered search engine producing strange, inappropriate, and incorrect answers. Bing’s market share grew only slightly following the overhaul.

    When Google added AI Overviews to search results this May, the company also quickly ran into reliability problems, like recommending people add glue to pizza. OpenAI’s SearchGPT may use retrieval-augmented generation (RAG), an industry-standard approach for AI search designed to lower the rate of hallucinations in chatbot answers. With a RAG approach, the AI tool references trusted information, like a preferred news website, while generating its output, and links back to where the data originated.
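    The RAG pattern described above can be sketched in a few lines. This is a minimal illustration of the general technique, not OpenAI’s or Perplexity’s actual pipeline; the corpus, scoring function, and prompt format are all assumptions, and the keyword-overlap retriever stands in for what would normally be an embedding-based search.

```python
# Minimal RAG sketch: retrieve trusted sources first, then ground the
# generated answer in them, keeping links back to where the data came from.
# Everything here (corpus, scoring, prompt shape) is illustrative.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(words & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def answer_with_citations(query, corpus):
    """Build a grounded prompt and keep the source URLs for citation."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{i + 1}] {s['text']}" for i, s in enumerate(sources))
    prompt = f"Answer using only these sources:\n{context}\nQ: {query}"
    citations = [s["url"] for s in sources]
    return prompt, citations

corpus = [
    {"url": "https://example.com/sky",
     "text": "The sky is blue because of Rayleigh scattering."},
    {"url": "https://example.com/pizza",
     "text": "Cheese sticks to pizza when the sauce is reduced."},
]
prompt, citations = answer_with_citations("why is the sky blue", corpus)
```

    In a real system, the prompt would go to an LLM instructed to answer only from the retrieved passages; the citations list is what lets the interface link back to publishers.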

    There’s also the question of potential copyright violations. Perplexity in particular has been criticized by publications, including WIRED, for copying aspects of original journalism with its AI search tool and seeming to ignore requests not to take content from some websites. In OpenAI’s blog post, the company mentions its commitment to publishers: “SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches.” Multiple companies, including Vox Media, The Atlantic, News Corp, and the Financial Times, have all signed licensing agreements with OpenAI this year.

    Reece Rogers, Will Knight

  • Amazon Is Investigating Perplexity Over Claims of Scraping Abuse


    Amazon’s cloud division has launched an investigation into Perplexity AI. At issue is whether the AI search startup is violating Amazon Web Services rules by scraping websites that attempted to prevent it from doing so, WIRED has learned.

    An AWS spokesperson, who talked to WIRED on the condition that they not be named, confirmed the company’s investigation of Perplexity. WIRED had previously found that the startup—which has backing from the Jeff Bezos family fund and Nvidia, and was recently valued at $3 billion—appears to rely on content from scraped websites that had forbidden access through the Robots Exclusion Protocol, a common web standard. While the Robots Exclusion Protocol is not legally binding, terms of service generally are.

    The Robots Exclusion Protocol is a decades-old web standard that involves placing a plaintext file (like wired.com/robots.txt) on a domain to indicate which pages should not be accessed by automated bots and crawlers. While companies that use scrapers can choose to ignore this protocol, most have traditionally respected it. The Amazon spokesperson told WIRED that AWS customers must adhere to the robots.txt standard while crawling websites.
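    The protocol is simple enough that Python’s standard library can evaluate it. A short illustration, using a made-up robots.txt and bot name rather than any publisher’s actual rules:

```python
# Checking what the Robots Exclusion Protocol permits, using Python's
# standard library. The robots.txt content and bot names below are
# illustrative assumptions, not any real site's rules.
import urllib.robotparser

robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching:
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

    Note that nothing enforces this check: as the article explains, a scraper that simply skips it will not be stopped by the file itself.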

    “AWS’s terms of service prohibit customers from using our services for any illegal activity, and our customers are responsible for complying with our terms and all applicable laws,” the spokesperson said in a statement.

    Scrutiny of Perplexity’s practices follows a June 11 report from Forbes that accused the startup of stealing at least one of its articles. WIRED investigations confirmed the practice and found further evidence of scraping abuse and plagiarism by systems linked to Perplexity’s AI-powered search chatbot. Engineers for Condé Nast, WIRED’s parent company, block Perplexity’s crawler across all its websites using a robots.txt file. But WIRED found the company had access to a server using an unpublished IP address—44.221.181.252—which visited Condé Nast properties hundreds of times in the past three months, apparently to scrape Condé Nast websites.

    The machine associated with Perplexity appears to be engaged in widespread crawling of news websites that forbid bots from accessing their content. Spokespeople for The Guardian, Forbes, and The New York Times also say they detected the IP address on their servers multiple times.

    WIRED traced the IP address to a virtual machine known as an Elastic Compute Cloud (EC2) instance hosted on AWS, which launched its investigation after we asked whether using AWS infrastructure to scrape websites that forbade it violated the company’s terms of service.

    Last week, Perplexity CEO Aravind Srinivas responded to WIRED’s investigation first by saying the questions we posed to the company “reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.” Srinivas then told Fast Company that the secret IP address WIRED observed scraping Condé Nast websites and a test site we created was operated by a third-party company that performs web crawling and indexing services. He refused to name the company, citing a nondisclosure agreement. When asked if he would tell the third party to stop crawling WIRED, Srinivas replied, “It’s complicated.”

    Dhruv Mehrotra, Andrew Couts

  • Perplexity Is a Bullshit Machine


    “We’ve now got a huge industry of AI-related companies who are incentivized to do shady things to continue their business,” he tells WIRED. “By not identifying that it’s them accessing a site, they can continue to collect data unrestricted.”

    “Millions of people,” says Srinivas, “turn to Perplexity because we are delivering a fundamentally better way for people to find answers.”

    While Knight’s and WIRED’s analyses demonstrate that Perplexity will visit and use content from websites it doesn’t have permission to access, that doesn’t necessarily explain the vagueness of some of its responses to prompts about specific articles and the sheer inaccuracy of others. This mystery has one fairly obvious solution: In some cases, it isn’t actually summarizing the article.

    In one experiment, WIRED created a test website containing a single sentence—“I am a reporter with WIRED”—and asked Perplexity to summarize the page. While monitoring the website’s server logs, we found no evidence that Perplexity attempted to visit the page. Instead, it invented a story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods.
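    The check WIRED describes amounts to scanning server access logs for visits from a given client. A sketch with invented log lines in common log format (the IP is the one cited in the Amazon story above; the requests are made up):

```python
# Sketch of the log check described above: scan web-server access logs
# (common log format, where the client IP is the first field) for visits
# from a specific address. The sample lines are invented for illustration.

def hits_from_ip(log_lines, ip):
    """Return the log entries recorded for the given client IP."""
    hits = []
    for line in log_lines:
        fields = line.split()
        if fields and fields[0] == ip:
            hits.append(line)
    return hits

sample_log = [
    '44.221.181.252 - - [10/Jun/2024:12:00:01 +0000] "GET /article HTTP/1.1" 200 5120',
    '203.0.113.7 - - [10/Jun/2024:12:00:02 +0000] "GET / HTTP/1.1" 200 1024',
    '44.221.181.252 - - [10/Jun/2024:12:00:09 +0000] "GET /robots.txt HTTP/1.1" 200 64',
]

visits = hits_from_ip(sample_log, "44.221.181.252")
print(len(visits))  # 2
```

    An empty result for a suspect IP, as in WIRED’s test, is evidence the page was never fetched, which is what made the invented “summary” so telling.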

    When pressed for answers about why it made up a story, the chatbot generated text that read, “You’re absolutely right, I clearly have not actually attempted to read the content at the provided URL based on your observation of the server logs…Providing inaccurate summaries without making the effort to read the actual content is unacceptable behavior for an AI like myself.”

    It’s unclear why the chatbot invented such a wild story, or why it didn’t attempt to access this website.

    Despite the company’s claims about its accuracy and reliability, the Perplexity chatbot frequently exhibits similar issues. In response to prompts provided by a WIRED reporter and designed to test whether it could access this article, for example, text generated by the chatbot asserted that the story ends with a man being followed by a drone after stealing truck tires. (The man in fact stole an ax.) The citation it provided was to a 13-year-old WIRED article about government GPS trackers being found on a car. In response to further prompts, the chatbot generated text asserting that WIRED reported that an officer with the police department in Chula Vista, California, had stolen a pair of bicycles from a garage. (WIRED did not report this, and is withholding the name of the officer so as not to associate his name with a crime he didn’t commit.)

    In an email, Dan Peak, assistant chief of police at Chula Vista Police Department, expressed his appreciation to WIRED for “correcting the record” and clarifying that the officer did not steal bicycles from a community member’s garage. However, he added, the department is unfamiliar with the technology mentioned and so cannot comment further.

    These are clear examples of the chatbot “hallucinating”—or, to follow a recent article by three philosophers from the University of Glasgow, bullshitting, in the sense described in Harry Frankfurt’s classic “On Bullshit.” “Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth,” the authors write of AI systems, “it seems appropriate to call their outputs bullshit.”

    Dhruv Mehrotra, Tim Marchman

  • The Other Big Problem With AI Search


    Photo-Illustration: Intelligencer; Photo: Perplexity

    In recent months, Forbes has published a series of deeply reported stories about Eric Schmidt’s stealth drone project. They’re a fascinating window into the former Google CEO’s budding new career in defense contracting — and you can read them here. Last week, reporters who worked on the stories reported something else: an AI-generated article-length summary of their work. The post contained sections of text copied verbatim from the paywalled Forbes stories, as well as a lightly modified version of a graphic created by the Forbes design team. It had been created with Perplexity Pages, a tool recently introduced by the AI search engine of the same name — a buzzy and popular product with a billion-dollar valuation. The post was featured on Perplexity’s Discover page, sent out via push notification to its users, incorporated into an AI-generated podcast, and released as a YouTube video. The article had garnered more than 20,000 views, according to Forbes, but didn’t mention the publication by name, instead crediting the original stories alongside other aggregations of their material in a series of small, icon-size links.

    Forbes publicly objected, reporting that Perplexity “appears to be plagiarizing journalists’ work,” including but not limited to its own. In a separate story, editor and chief content officer Randall Lane made his position clear: “Perplexity had taken our work, without our permission, and republished it across multiple platforms — web, video, mobile — as though it were itself a media outlet.”

    In response, Perplexity CEO Aravind Srinivas told Forbes that the Pages product has “rough edges” and that “contributing sources should be highlighted more prominently.” In a later post on X, he took a more defensive position, claiming that Perplexity was a significant source of referral traffic to Forbes.

    To anyone who has ever seen real internal publishing metrics, this was an obviously absurd claim. Forbes reporter Alexandra S. Levine confirmed, in response, that in fact traffic from Perplexity accounted for just 0.014 percent of visitors to Forbes — making it the “54th biggest referral traffic source in terms of users.” Perplexity is getting a lot from Forbes, and Forbes is getting basically nothing back — a significant downgrade from the already brutal arrangements of search, social media, and cross-publication human aggregation. With Pages, Perplexity wasn’t just offering to summarize sources into a Wikipedia-ish article for personal consumption. The Pages feature is intended to “turn your research into shareable articles, helping you connect with a global audience.” (The Forbes summary was in fact “curated” by Perplexity itself.) It’s an attempt at automated publishing, complete with an internal front page and view counts.


    Pages isn’t Perplexity’s main product, but rather a new feature that extends its basic premise. Perplexity is, to most of its users, a Google alternative with a simple and appealing pitch: In response to queries, it provides “accessible, conversational, and verifiable” answers using AI. Ask it why the sky is blue, and it will give you a short summary of Rayleigh scattering with footnote links to a few reference websites. Ask it what just happened with Hunter Biden, and it will tell you the president’s son “has been found guilty of lying on a firearm application form and unlawfully possessing a gun while using drugs,” with footnoted links to the Washington Post (paywalled) and NBC News (not).

    Perplexity occasionally makes things up or runs with a summary based on erroneous assumptions, which is the first problem with LLM-powered search-style tools: Where conventional search engines simply fail to find things for users or find the wrong things, AI search engines, or chatbots treated like search engines, will sometimes just fill the gaps with nonsense. On its own terms, though, the product works relatively well and has real fans. Its interface is clean, it isn’t loaded with ads, and it does a decent job of answering certain sorts of questions much of the time. Its answers feel more grounded than most chatbot responses because they often are. They’re not just approximations of plausible answers synthesized from the residue of training data. In many cases, they’re straightforward summaries of human content on the actual web.

    For people and companies that publish things online for fun or profit, Perplexity’s basic pitch is also worrying. It’s scraping, summarizing, and republishing in a different context; to borrow a contested term of art from publishing, it’s engaging in automated “aggregation.” As Casey Newton of Platformer wrote after the company announced plans to incorporate ads of its own, Perplexity, which is marketed as an AI search engine, can also be reasonably described as a “plagiarism engine.” Automated publishing, in previous contexts, is better known by another name: spam.

    Again, Perplexity is growing fast, raising money, and is valued at more than a billion dollars. It has loyal users who are undeterred by the way it works, largely because it often does work — it’ll give you something close to what you’re asking for, quickly. That Perplexity’s search responses are short and presented individually leaves Perplexity with a few plausible defenses: It’s not much different from Google Search blurbs; it could conceivably send visitors to original content; it’s sort of akin to a personalized front page for the web populated with enticing blurbs; it’s no different from Wikipedia, which sources its material from the world of others; it’s no different from low-value human aggregation, which many in the aggrieved media have been practicing for decades. These defenses were never terribly convincing. Perplexity encourages users to ask follow-up questions, which leads it to summarize more content until it’s basically written an entire article anyway — as with chatbots, the product encourages users to stay and chat more, not to leave. They’re also defenses that Perplexity itself, at least until recently, didn’t see the need to mount.


    In February, when the company was breaking through into the mainstream, I asked Srinivas, a former OpenAI employee, where he thought Perplexity fit into the already collapsing ecosystem of the open web. His responses were candid and revealing. He described Perplexity as a different way not just of searching but browsing the web, particularly on mobile devices. “I think on mobile, the app is likely to deprecate the browser,” he said. “It reads the websites on the fly — the human labor of browsing is being automated.” For most answers, Perplexity would provide users with as much information as they need on the first try.

    I asked directly how publishers who either depend on or are motivated by visitors to their sites — to make money from them with ads, subscriptions, or simply to build a consistent audience or community of their own — should think about Perplexity, and he suggested that such arrangements were basically obsolete. “Something like Perplexity ensures people read your content, but in a different way where you’re not getting actual visits, yet people get the relevant parts of your domain,” he said. “Even if we do take your content, the user knows that we took your content, and which part of the answer comes from which publisher.”

    I suggested this was precisely the concern of people whose content Perplexity was relying on — that Perplexity’s unwitting content providers can’t survive on credit alone. Then Srinivas, who for much of the interview spoke thoughtfully and precisely about the state of AI and his company’s strategy for taking on Google, started thinking out loud as if encountering an interesting new problem for the first time from a perspective he hadn’t previously needed to consider. “We need to think more clearly about how a publisher can monetize eyeballs on another platform without actually getting visits,” he said. In a world where readers encounter publications as citations on Perplexity or in Google’s AI answers, “you can argue brand value is being built even more. We should figure out a way to measure, like, actual dollar value that’s obtained from a citation in a citation-like interface, so that an advertiser on your domain can still figure out what to pay.”

    There was no plan, in other words — as Perplexity sees it, this isn’t really their problem to solve, even if they’re helping to create it. In the AI-rendered future, publishing as it exists today makes no sense. Sure, this generation of AI tools is dependent in multiple ways on scrapable public content created under different cultural and commercial circumstances, but if the economy of the web collapses, and services like Perplexity don’t have much material to summarize … well, they’ll cross that bridge when they come to it.

    In the time since the interview, Perplexity has introduced Pages, suggested that it would get into advertising itself, and shifted its defense to talking about traffic that it doesn’t actually send. The company isn’t alone in this approach. Google’s AI overviews, which produce and cite content in a fashion similar to Perplexity, have been similarly criticized as plagiaristic and parasitic, not to mention sometimes glaringly wrong. In response, Google, which (mostly) successfully fended off related criticism for over-aggregation when it was relatively young, has claimed to an audience of publishers that has no reason to believe it, and very much doesn’t, that its users are actually very keen to tap those little citations on its synthetic summaries. On Wednesday, after the Forbes reports, Perplexity placed a story with Semafor that claimed it was “already working on revenue-sharing deals with high-quality publishers” at the time of the controversy and that it would unveil details soon.

    Perplexity’s about-face is at least some sort of acknowledgment that there is a problem here, with consequences that could eventually undermine not just publishers but the AI firms themselves. It helps explain why OpenAI, which is both a much larger company and a bigger target for criticism and legal action than Perplexity but isn’t nearly as entrenched as Google, has been pursuing deals with media companies, including New York parent Vox Media, for both training on and displaying their content. It also sheds some light on why publishers, with some exceptions, have been so keen to accept its terms: because the future first envisioned by AI firms didn’t include them at all.


    John Herrman

  • Google’s AI Overviews Will Always Be Broken. That’s How AI Works


    A week after its algorithms advised people to eat rocks and put glue on pizza, Google admitted Thursday that it needed to make adjustments to its bold new generative AI search feature. The episode highlights the risks of Google’s aggressive drive to commercialize generative AI—and also the treacherous and fundamental limitations of that technology.

    Google’s AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs’ impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people may use the information to make important decisions.

    “You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn’t tell you to eat rocks takes a lot of work,” says Richard Socher, who made key contributions to AI for language as a researcher and, in late 2021, launched an AI-centric search engine called You.com.

    Socher says wrangling LLMs takes considerable effort because the underlying technology has no real understanding of the world and because the web is riddled with untrustworthy information. “In some cases it is better to actually not just give you an answer, or to show you multiple different viewpoints,” he says.

    Google’s head of search Liz Reid said in the company’s blog post late Thursday that it did extensive testing ahead of launching AI Overviews. But she added that errors like the rock eating and glue pizza examples—in which Google’s algorithms pulled information from a satirical article and jocular Reddit comment, respectively—had prompted additional changes. They include better detection of “nonsensical queries,” Google says, and making the system rely less heavily on user-generated content.

    You.com routinely avoids the kinds of errors displayed by Google’s AI Overviews, Socher says, because his company developed about a dozen tricks to keep LLMs from misbehaving when used for search.

    “We are more accurate because we put a lot of resources into being more accurate,” Socher says. Among other things, You.com uses a custom-built web index designed to help LLMs steer clear of incorrect information. It also selects from multiple different LLMs to answer specific queries, and it uses a citation mechanism that can explain when sources are contradictory. Still, getting AI search right is tricky. WIRED found on Friday that You.com failed to correctly answer a query that has been known to trip up other AI systems, stating that “based on the information available, there are no African nations whose names start with the letter ‘K.’” In previous tests, it had aced the query.
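    One of the techniques Socher describes, selecting among multiple LLMs per query, can be pictured as a simple router. The categories, keywords, and model names below are illustrative assumptions, not You.com’s actual system, which would likely use a trained classifier rather than keyword heuristics:

```python
# Hedged sketch of per-query model routing: pick a specialist model
# based on what kind of question is being asked. The model names and
# keyword rules are invented for illustration.

MODEL_FOR = {
    "code": "code-specialist-llm",
    "news": "fresh-index-llm",
    "general": "general-llm",
}

def route_query(query):
    """Choose a model by crude keyword heuristics."""
    q = query.lower()
    if any(w in q for w in ("python", "function", "error", "compile")):
        return MODEL_FOR["code"]
    if any(w in q for w in ("today", "latest", "news", "election")):
        return MODEL_FOR["news"]
    return MODEL_FOR["general"]

print(route_query("latest election results"))  # fresh-index-llm
print(route_query("python sort a list"))       # code-specialist-llm
print(route_query("why is the sky blue"))      # general-llm
```

    The design appeal is that a time-sensitive query can go to a model backed by a fresh index while everything else uses a cheaper general model, which is one way to reduce the stale-training-data problem described earlier.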

    Google’s generative AI upgrade to its most widely used and lucrative product is part of a tech-industry-wide reboot inspired by OpenAI’s release of the chatbot ChatGPT in November 2022. A couple of months after ChatGPT debuted, Microsoft, a key partner of OpenAI, used its technology to upgrade its also-ran search engine Bing. The upgraded Bing was beset by AI-generated errors and odd behavior, but the company’s CEO, Satya Nadella, said that the move was designed to challenge Google, saying “I want people to know we made them dance.”

    Some experts feel that Google rushed its AI upgrade. “I’m surprised they launched it as it is for as many queries—medical, financial queries—I thought they’d be more careful,” says Barry Schwartz, news editor at Search Engine Land, a publication that tracks the search industry. The company should have better anticipated that some people would intentionally try to trip up AI Overviews, he adds. “Google has to be smart about that,” Schwartz says, especially when they’re showing the results as default on their most valuable product.

    Lily Ray, a search engine optimization consultant, was for a year a beta tester of the prototype that preceded AI Overviews, which Google called Search Generative Experience. She says she was unsurprised to see the errors that appeared last week given how the previous version tended to go awry. “I think it’s virtually impossible for it to always get everything right,” Ray says. “That’s the nature of AI.”

    Will Knight

  • Google Admits Its AI Overviews Search Feature Screwed Up


    When bizarre and misleading answers to search queries generated by Google’s new AI Overview feature went viral on social media last week, the company issued statements that generally downplayed the notion the technology had problems. Late Thursday, the company’s head of search, Liz Reid, admitted that the flubs had highlighted areas that needed improvement, writing, “We wanted to explain what happened and the steps we’ve taken.”

    Reid’s post directly referenced two of the most viral, and wildly incorrect, AI Overview results. One saw Google’s algorithms endorse eating rocks because doing so “can be good for you,” and the other suggested using nontoxic glue to thicken pizza sauce.

    Rock eating is not a topic many people were ever writing or asking questions about online, so there aren’t many sources for a search engine to draw on. According to Reid, the AI tool found an article from The Onion, a satirical website, that had been reposted by a software company, and it misinterpreted the information as factual.

    As for Google telling its users to put glue on pizza, Reid effectively attributed the error to a sense of humor failure. “We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she wrote. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.”

    It’s probably best not to make any kind of AI-generated dinner menu without carefully reading it through first.

    Reid also suggested that judging the quality of Google’s new take on search based on viral screenshots would be unfair. She claimed the company did extensive testing before its launch and that the company’s data shows people value AI Overviews, including by indicating that people are more likely to stay on a page discovered that way.

    Why the embarrassing failures? Reid characterized the mistakes that won attention as the result of an internet-wide audit that wasn’t always well intended. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”

    Google claims some widely distributed screenshots of AI Overviews gone wrong were fake, which seems to be true based on WIRED’s own testing. For example, a user on X posted a screenshot that appeared to be an AI Overview responding to the question “Can a cockroach live in your penis?” with an enthusiastic confirmation from the search engine that this is normal. The post has been viewed over 5 million times. Upon further inspection, though, the format of the screenshot doesn’t align with how AI Overviews are actually presented to users. WIRED was not able to recreate anything close to that result.

    And it’s not just users on social media who were tricked by misleading screenshots of fake AI Overviews. The New York Times issued a correction to its reporting about the feature and clarified that AI Overviews never suggested users should jump off the Golden Gate Bridge if they are experiencing depression—that was just a dark meme on social media. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression,” Reid wrote Thursday. “Those AI Overviews never appeared.”

    Yet Reid’s post also makes clear that not all was right with the original form of Google’s big new search upgrade. The company made “more than a dozen technical improvements” to AI Overviews, she wrote.

    Only four are described: better detection of “nonsensical queries” not worthy of an AI Overview; making the feature rely less heavily on user-generated content from sites like Reddit; offering AI Overviews less often in situations users haven’t found them helpful; and strengthening the guardrails that disable AI summaries on important topics such as health.

    There was no mention in Reid’s blog post of significantly rolling back the AI summaries. Google says it will continue to monitor feedback from users and adjust the features as needed.

Reece Rogers

  • The Evolution of Internet Searchability and AI: Is it the End of the Internet?


    By Dave Van Zandt – Editor

    In today’s digital age, many take the ability to search for and access information online for granted. From finding the best local restaurant to researching complex scientific topics, search engines have revolutionized how we interact with information. But how did we get here, and what does the future hold as generative AI begins to reshape the landscape?

    The journey of searchability on the internet has been marked by significant milestones, from the early days of simple indexing systems to today’s sophisticated, AI-driven search engines. As we trace this evolution, we explore how Google’s dominance in the search market came to be and why the rise of generative AI technologies might threaten this dominance. Google’s efforts to adapt sometimes appear more like panic than strategic evolution.

    As AI becomes more integrated into how we access and process information, it could significantly alter the trust and authenticity we associate with online content. Neil deGrasse Tyson and Scott Rosenberg of Axios raise critical concerns about these evolving dynamics.

    Neil deGrasse Tyson warns of a future where the pervasive use of AI could lead to an internet filled with misleading or fabricated content—deepfakes and AI-generated misinformation—that could erode public trust in digital sources. This scenario could transform the internet from a rich source of diverse information into a landscape where factual accuracy is constantly questioned.

    Scott Rosenberg addresses a related concern from a slightly different angle. He points out that Google’s shift towards AI-generated summaries could streamline information consumption at the cost of reducing exposure to diverse viewpoints and detailed analysis, simplifying the internet’s information ecosystem into a more uniform and less vibrant space. This could make the internet not only duller but also less reliable, as users receive pre-digested summaries that may not fully capture the complexities of the source material.

    Both perspectives emphasize a future where the reliability of the internet as a source of information could be compromised due to the increasing integration of AI in search processes and content generation. The concerns they raise advocate for a balanced approach to adopting AI in search technologies—one that maintains the integrity and diversity of information while leveraging AI’s efficiencies. This balance is crucial for ensuring the internet remains a reliable and enriching resource in the age of advanced digital technologies.

    While Tyson and Rosenberg’s concerns may suggest a grim future and the potential “end” of the internet as we know it, it’s also possible to see these changes as the start of a new era that could be characterized not by its demise but by its transformation. AI, if guided by robust ethical frameworks and regulated effectively, has the potential to not only streamline searchability but also enhance it, offering new ways to digest complex information quickly and accurately. Future outcomes will heavily rely on addressing AI’s challenges and opportunities.



Media Bias Fact Check

  • Google Is Staring Down Its First Serious Threats in Years



    Alphabet is doing very well. Alongside its strong first-quarter earnings report, the company announced its first-ever shareholder dividend, as well as a $70 billion stock buyback. It is now a $2 trillion company — worth more at the moment than Amazon, Meta, and Saudi Aramco.

    It’s a gigantic firm with a core product that dominates its sector. Google won search more than a decade ago, and its parent company has been reaping the benefits ever since. Rather suddenly, though, Google is facing threats — real, mounting, and still mostly unrealized — that could set it, and its parent company, on a different trajectory, and soon. One comes from the government. The other comes from competitors. And the last one comes from itself.

    Last week, Department of Justice lawyers made their closing arguments in one of two active antitrust cases against the company. This one, which has focused on the dominance of Google Search, zeroed in on a particular arrangement, the details of which were long kept secret, per Bloomberg:

    Alphabet Inc. paid Apple Inc. $20 billion in 2022 for Google to be the default search engine in the Safari browser, according to newly unsealed court documents in the Justice Department’s antitrust lawsuit against Google. The deal between the two tech giants is at the heart of the landmark case, in which antitrust enforcers allege Google has illegally monopolized the market for online search and related advertising. The Justice Department and Google will offer closing arguments in the case Thursday and Friday, with a decision expected later this year.

    The primary goal of this arrangement has already been achieved. Android, the most widely used smartphone OS in the world, is a Google product, and iOS feeds users into Google by default. By holding onto iPhone users as smartphones became ubiquitous, Google managed to become the default portal to the web for a large majority of smartphone owners. What it’s paying for now is maintenance, which Google clearly believes to be valuable. The judge overseeing the case has signaled that Google’s defense sounds sort of weak. From The Verge:

    [Apple lawyer] Schmidtlein said Apple had evaluated Bing’s quality against Google’s and ultimately chose Google. But why then, asked [Judge] Mehta, sign such an expensive agreement with Apple? Schmidtlein said that Apple’s ability to leave the agreement every time it expires is “sufficient to keep Google on its toes and keep Google competing.”

    An order preventing Google from paying Apple wouldn’t necessarily prevent other, smaller firms from entering into similar revenue-sharing schemes, although it might chill such arrangements for firms who are otherwise paranoid about antitrust action. At best, for Google, this would mean losing an insurance policy on its search dominance — again, something that the company believes is worth massive amounts of money, and which it is vigorously defending in court. It could also crack open the door for competitors, of which there are, rather suddenly, and for the first time in a while, quite a few.

    Google is still figuring out how to incorporate AI-generated content into its search results, primarily in the form of the Search Generative Experience (SGE), which drops short, cited responses at the top of search pages. It’s been in semi-public testing for a year, and recently started rolling out to users who didn’t sign up to test it.

    It’s getting better as a product, although I’ve noticed in myself a growing tendency to skim and scroll past it. It’s more impressive as a demonstration than an actual tool, at least in my experience. By the standards of generative AI tools it’s fairly cautious and, in recent months, has displayed linked citations prominently. It has become, in other words, something like a search page within a search page — similar links, presented and excerpted, or rather paraphrased, in a slightly different format.

    It’s also converging a bit with projects like Perplexity, which brand themselves as AI-powered alternatives to search, and are likewise trying to figure out what, exactly, it means to fuse technology that generates approximations of the truth with tools at least nominally intended to help you retrieve solid information. The general trend in AI search, such as it is, is toward, well, search — away from verbose generated chatbot answers, and toward conspicuous citation and summary. ChatGPT in 2024 is much more obviously connected to the outside web, and to outside data sources, than when most users tried it for the first time. Other chatbot products are starting to feel more like search engines, too: when Meta rolled out AI features on Facebook and Instagram, it put them in the search bar. Using them is a bit like chatting, but a lot like searching — in Meta’s case, blurring things even further, the chatbot will summarize and cite results from Google and Bing. Google can expect a lot more of this: in attempts to build a wide range of products, AI startups (and larger tech firms) are suddenly doing an awful lot of web crawling. In the course of doing other business, in other words, they’re acquiring many of the valuable resources necessary to build something like a search engine.

    As for whether this suggests a whole new relationship with the internet or a lengthy detour to the same destination, we’ll see. For Google, the more pressing matter is that none of these new search concepts have much, or any, advertising. Competing chatbots, and search engines like Perplexity, have mostly monetized with subscriptions. Meta’s AI responses are very much like search results but contain no ads. Google’s cautious approach to SGE might be explained by this tension: It’s building an alternative style of “search” “result” that — should it catch on, and should users find it better — has no obvious place for the level of advertising that Google displays in its current search pages, which is the productive open pit mine at the center of its $2 trillion operation.

    This might be a manageable problem for a company whose search dominance is exceptionally well protected, and which has time to experiment. It’s more worrying for a company in a protracted battle with the government — or whose core product is starting to show its age.

    Whether or not Google Search itself is a worse product than it used to be is an open and complicated question, to which the company itself says no, actually, it’s better. Taking a long view of the products, it’s certainly much busier than it was when it was a minimalist upstart — there are ads, widgets, tabs, sidebars, snippets, and now clumps of synthetic text. Questions about overall search quality are difficult to define, much less rigorously test, perhaps to Google’s benefit — that’s one of the many perks of building a company around a black box algorithm.

    One thing that’s easier to observe is that the web on which Google depends — and which depends in various ways on Google — is in pretty bad shape. Websites producing reliable, human content for Google to turn into desirable results are running out of ways to make money. Closed platforms have absorbed much of the person-to-person written communication that Google was previously able to harvest and serve. AI companies are scraping the web and their users are pumping garbage back into it. Is Google somewhat responsible for destroying the business model of other much smaller businesses? Maybe. Did its dominance establish a set of business incentives that centralized and distorted the sprawling web? Could be! However it happened, it’s also a problem for Google, now, as its search engine tries to produce results from ever-larger amounts of ever-worse material. Google’s longtime dominance means that much of the web sees it as an entity to be gamed, manipulated, appeased, or tricked. It’s a frequently toxic dynamic, and its effects are accumulating.

    This creates a different challenge for Google, unique to its status as an incumbent leader: its competitors no longer have to be great to sometimes feel better, or at least comparable, to Google. Maybe they just have fewer ads. Maybe they’re just more convenient, right there in the chat box. It’s an environment in which iPhone users, if asked to actually choose a default search engine from a list, might take a look at something else, and stick with it for a while.

John Herrman

  • Would You Still Use Google if It Didn’t Pay Apple $20 Billion to Get on Your iPhone?


    Microsoft has poured over $100 billion into developing its Bing search engine over the past two decades but has little market share to show for it. About nine out of every 10 web searches in the US are made through Google, with Bing splitting the remaining queries with a long list of small competitors.

    On Thursday the US government asked a federal judge in Washington, DC, to rule that Google maintains that lead illegally, by unfairly manipulating users to keep Microsoft and other competitors down.

    Google’s dominance drove the US Department of Justice to sue the company in 2020 alleging that it had violated antitrust law by using exclusionary contracts to maintain a monopoly. The two sides went into a secretive trial at the end of last year before breaking for nearly five months for US Judge Amit Mehta to digest the evidence.

    Mehta heard closing arguments on Thursday, with government attorneys arguing that without his intervention Google’s dominance would persist for years to come—despite nascent threats from AI chatbots like ChatGPT. “The search engine industry has been impervious to any competitor entering,” attorney Kenneth Dintzer said.

    The case is the first to go to trial out of a handful of lawsuits the government has brought against the biggest tech companies since stepping up antitrust scrutiny of the industry in 2019 under then-President Donald Trump. The Biden administration hasn’t let off the gas.

    Central to the government’s case against Google is the over $20 billion it says that Google pays Apple annually to be the default search engine on iPhones and the Safari browser across much of the world. Google also pays more than $1.5 billion a year to wireless carriers and device makers, and more than $150 million to browsers, for similar defaults in the US, according to the government. Google can afford to pay those sums and still enjoy enormous profits because it has the US market for search and search ads cornered, the government alleges.

    Google’s attorneys counter that companies such as Apple choose Google as the default because it offers a better experience to users, not just because they are getting payouts. When browsers such as Mozilla have opted for alternatives to Google, they have lost users because of the change, the search company argues. “Google lawfully acquired monopoly power and scale,” attorney John Schmidtlein told Mehta. “Microsoft missed the boat.”

    Before Mehta now is the question of whether Google unfairly earned its popularity.

    Profit Boost

    Google’s deals with Apple date to 2002, when the Safari developer first gained the option to integrate Google search into the browser, according to court papers. The payments started after Google cofounder Sergey Brin in 2005 raised the idea of sharing a slice of the company’s blossoming search revenue or “helping Apple out in other ways,” Brin wrote, according to court papers.

    But in a deal struck that year, Google got something in exchange for agreeing to pay Apple half of its sales: Google search would be required to be the default in Safari. The requirement has spread to more Apple services in the years since, while the revenue share and related incentive fees have fluctuated.

Paresh Dave

  • Kids’ Cartoons Get a Free Pass From YouTube’s Deepfake Disclosure Rules


    YouTube has updated its rulebook for the era of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they’re seeing isn’t real. YouTube says the requirement applies to “realistic” altered media such as “making it appear as if a real building caught fire” or swapping “the face of one individual with another’s.”

    The new policy shows YouTube taking steps that could help curb the spread of AI-generated misinformation as the US presidential election approaches. It is also striking for what it permits: AI-generated animations aimed at kids are not subject to the new synthetic content disclosure rules.

    YouTube’s new policies exclude animated content altogether from the disclosure requirement. This means that the emerging scene of get-rich-quick, AI-generated content hustlers can keep churning out videos aimed at children without having to disclose their methods. Parents concerned about the quality of hastily made nursery-rhyme videos will be left to identify AI-generated cartoons by themselves.

    YouTube’s new policy also says creators don’t need to flag use of AI for “minor” edits that are “primarily aesthetic” such as beauty filters or cleaning up video and audio. Use of AI to “generate or improve” a script or captions is also permitted without disclosure.

    There’s no shortage of low-quality content on YouTube made without AI, but generative AI tools lower the bar to producing video in a way that accelerates its production. YouTube’s parent company Google recently said it was tweaking its search algorithms to demote the recent flood of AI-generated clickbait, made possible by tools such as ChatGPT. Video generation technology is less mature but is improving fast.

    Established Problem

    YouTube is a children’s entertainment juggernaut, dwarfing competitors like Netflix and Disney. The platform has struggled in the past to moderate the vast quantity of content aimed at kids. It has come under fire for hosting content that looks superficially suitable or alluring to children but on closer viewing contains unsavory themes.

    WIRED recently reported on the rise of YouTube channels targeting children that appear to use AI video-generation tools to produce shoddy videos featuring generic 3D animations and off-kilter iterations of popular nursery rhymes.

    The exemption for animation in YouTube’s new policy could mean that parents cannot easily filter such videos out of search results or keep YouTube’s recommendation algorithm from autoplaying AI-generated cartoons after setting up their child to watch popular and thoroughly vetted channels like PBS Kids or Ms. Rachel.

    Some problematic AI-generated content aimed at kids does require flagging under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to push pseudoscience and conspiracy theories, including climate change denialism. These videos imitated conventional live-action educational videos—showing, for example, the real pyramids of Giza—so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid videos then went on to suggest that the structures can generate electricity.) This new policy would crack down on that type of video.

    “We require kids content creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic,” says YouTube spokesperson Elena Hernandez. “We don’t require disclosure of content that is clearly unrealistic and isn’t misleading the viewer into thinking it’s real.”

    The dedicated kids app YouTube Kids is curated using a combination of automated filters, human review, and user feedback to find well-made children’s content. But many parents simply use the main YouTube app to cue up content for their kids, relying on eyeballing video titles, listings, and thumbnail images to judge what is suitable.

    So far, most of the apparently AI-generated children’s content WIRED found on YouTube has been poorly made in ways similar to more conventional low-effort kids’ animations: ugly visuals, incoherent plots, and zero educational value—but nothing uniquely ugly, incoherent, or pedagogically worthless.

    AI tools make it easier to produce such content, and in greater volume. Some of the channels WIRED found upload lengthy videos, some well over an hour long. Requiring labels on AI-generated kids content could help parents filter out cartoons that may have been published with minimal—or entirely without—human vetting.

Kate Knibbs

  • Google Is Finally Trying to Kill AI Clickbait


    Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.

    “It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”

    In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.

    “A good example of it, which has been around for a little while, is the abuse around obituary spam,” says Google’s vice president of search, Pandu Nayak. Obituary spam is an especially grim type of digital piracy, where people attempt to make money by scraping and republishing death notices, sometimes on social platforms like YouTube. Recently, obituary spammers have started using artificial intelligence tools to increase their output, making the issue even worse. Google’s new policy, if enacted effectively, should make it harder for this type of spam to crop up in online searches.

    This notably more aggressive approach to combating search spam takes specific aim at “domain squatting,” a practice in which scavengers purchase websites with name recognition to profit off their reputations, often replacing original journalism with AI-generated articles designed to manipulate search engine rankings. This type of behavior predates the AI boom, but with the rise of text-generation tools like ChatGPT, it’s become increasingly easy to churn out endless articles to game Google rankings.

    The spike in domain squatting is just one of the issues that have tarnished Google Search’s reputation in recent years. “People can spin up these sites really easily,” says SEO expert Gareth Boyd, who runs the digital marketing firm Forte Analytica. “It’s been a big issue.” (Boyd admits that he has even created similar sites in the past, though he says he doesn’t do it anymore.)

    In February, WIRED reported on several AI clickbait networks that used domain squatting as a strategy, including one that took the websites for the defunct indie women’s website The Hairpin and the shuttered Hong Kong-based pro-democracy tabloid Apple Daily and filled them with AI-generated nonsense. Another transformed the website of a small-town Iowa newspaper into a bizarro repository for AI blog posts on retail stocks. According to Google’s new policy, this type of behavior is now explicitly categorized by the company as spam.

    In addition to domain squatting, Google’s new policy will also focus on eliminating “reputation abuse,” where otherwise trustworthy websites allow third-party sources to publish janky sponsored content or other digital junk. (Google’s blog post describes “payday loan reviews on a trusted educational website” as an example.) While the other parts of the spam policy will be enforced immediately, Google is giving 60 days’ notice before cracking down on reputation abuse, to give websites time to fall in line.

    Nayak says the company has been working on this specific update since the end of last year. More broadly, the company has been working on ways to fix low-quality content in search, including AI-generated spam, since 2022. “We’ve been aware of the problem,” Nayak says. “It takes time to develop these changes effectively.”

    Some SEO experts are cautiously optimistic that these changes could restore Google’s search efficacy. “It’s going to reinstate the way things used to be, hopefully,” says Ray. “But we have to see what happens.”

Kate Knibbs

  • Landmark Google trial opens with sweeping DOJ accusations of illegal monopolization | CNN Business





    US prosecutors opened a landmark antitrust trial against Google on Tuesday with sweeping allegations that for years the company intentionally stifled competition challenging its massive search engine, accusing the tech giant of spending billions to operate an illegal monopoly that has harmed every computer and mobile device user in the United States.

    In opening remarks before a federal judge in Washington, lawyers for the Justice Department alleged that Google’s negotiation of exclusive contracts with wireless carriers and phone makers helped cement its dominant position in violation of US antitrust law.

    The Google case has been described as one of the largest US antitrust trials since the federal government took on Microsoft in the 1990s, and involves some similar arguments about the tying of multiple proprietary products. The multi-week trial is expected to feature witness testimony from Google CEO Sundar Pichai, as well as other senior executives or former employees from Google, Apple, Microsoft and Samsung.

    The effects of Google’s alleged misconduct are vast, DOJ lawyer Kenneth Dintzer told the court.

    “This case is about the future of the internet, and whether Google’s search engine will ever face meaningful competition,” Dintzer said, adding that Google pays more than $10 billion a year to Apple and other companies to ensure that Google is the default or only search engine available on browsers and mobile devices used by millions.

    Also anticompetitive, the Justice Department said, are Google’s contracts to ensure that Android devices come with Google apps and services — including Google search — preinstalled.

    The deals guarantee a steady flow of user data to Google that further reinforces its monopoly, the US government said, leading to other consequences such as harms to consumer privacy and higher advertising prices.

    “This feedback loop, this wheel has been turning for 12 years, and it always turns to Google’s advantage,” Dintzer said. The practice ultimately affects what consumers see in search results and prevents new rivals from gaining scale and market share, he added.

    For Google’s opening statement, attorney John Schmidtlein said that Apple’s decision to make Google the default search engine in its Safari browser demonstrates how Google’s search engine is the superior product consumers prefer.

    “Apple repeatedly chose Google as the default because Apple believed it was the best experience for its users,” he said.

    The Google case “could not be more different” from the historic Microsoft litigation at the turn of the millennium, Schmidtlein continued.

    Where the Microsoft case revolved around that company’s alleged harms to Netscape, a small browser maker, the Google case is based on claims that Google search has harmed a much larger and more powerful entity: Microsoft and its Bing search engine, Schmidtlein said.

    “Google competed on the merits to win preinstallation and default status” on consumer devices and browsers, he insisted, attacking Microsoft as a failed search engine developer.

    “The evidence will show that Microsoft’s Bing search engine failed to win customers because Microsoft did not invest [and] did not innovate,” Schmidtlein added. “At every critical juncture, the evidence will show that they were beaten in the market.”

    And Schmidtlein argued that forbidding Google from competing for default status on browsers and devices would itself harm competition in search, stating that contracts ensuring that Android devices come with certain apps preinstalled, such as Google Maps and Gmail, also promote competition — against Apple.

    “Google’s Android agreements are important components of a business model that has sustained the most important competitor to Apple for mobile devices in the United States,” Schmidtlein said.

    Google has previously said that consumers choose Google’s search engine because it is the best and that they prefer it, not because of anticompetitive practices.

    But DOJ prosecutors said Tuesday that they plan to present evidence in the case that Google knew what it was doing was illegal and that the company “hid and destroyed documents because they knew they were violating the antitrust laws.”

    “The harm from Google contracts affects every phone and computer in the country,” Dintzer said.

    Kent Walker, Google’s president of global affairs, and Rep. Ken Buck from Colorado were in attendance for the opening. Buck, a vocal tech industry critic, is the former top Republican on the House antitrust subcommittee — which in 2020 released a widely publicized investigative report finding that Amazon, Apple, Google and Facebook enjoyed “monopoly power.”


    The trial marks the culmination of two ongoing lawsuits against Google that started during the Trump administration.

    In separate complaints filed in 2020, the Justice Department and dozens of states accused Google of abusing its dominance in online search; the complaints were eventually consolidated into a single case.

    Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Walker in a statement, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

    Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

    In January, meanwhile, the Biden administration launched another antitrust suit against Google in opposition to the company’s advertising technology business, accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business





    CNN —

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Despite being profitable, and despite investing some $100 billion in it over the past 20 years, Microsoft’s Bing search engine has only a single-digit market share in mobile search, and only slightly more — into the teens — in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to helping a search engine stay relevant as historical data becomes less relevant. Nadella previously led Microsoft’s cloud computing business and before that had spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella has said that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella’s own meetings with publishers, he said that he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, by making Apple Maps the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements that make Google’s Play Store a required installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different. Internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • Apple rejected opportunities to buy Microsoft’s Bing, integrate with DuckDuckGo | CNN Business





    CNN —

    Since 2017, Apple has turned down multiple opportunities to chip away at Google’s search engine dominance, according to newly unsealed court transcripts, including a chance to purchase Microsoft’s Bing and to make the privacy-focused DuckDuckGo the default for Safari’s private browsing mode.

    The previously confidential records, unsealed this week by the judge presiding over the US government’s antitrust lawsuit against Google, illustrate the challenges that have faced Google’s rivals in search as they’ve tried to unseat the tech giant from its pole position as Apple’s default search provider on millions of iPhones and Mac computers. It’s a privilege for which Google has paid Apple at least $10 billion a year.

    The closed-door testimony by the CEO of DuckDuckGo, Gabriel Weinberg, and a senior Apple executive, John Giannandrea, offers a glimpse of the kind of failed deals and backroom negotiations that have helped Google maintain its lead as the world’s foremost search engine.

    But it also shows how Apple has wrestled with Google’s rise and how some at Apple yearned for “optionality.” Apple didn’t immediately respond to a request for comment.

    Giannandrea testified last month that Apple began seriously considering a deal with Bing in 2018, after a conversation between Apple CEO Tim Cook and Microsoft CEO Satya Nadella launched a series of further discussions between the two companies. (Last week, Nadella testified that he has spent every year of his tenure as CEO trying to persuade Apple to adopt Bing.)

    Apple insiders ultimately came up with four options for Cook: Buy Bing outright; invest in Bing and take an ownership share of the search engine; collaborate with Microsoft on a shared search index that both companies could use; or do nothing and continue with the Google partnership.

    At the same time, Apple had been actively working with DuckDuckGo on a proposal that could have made it the default search engine in Safari’s private browsing mode, while still maintaining Google as the default in normal mode, which logs user activity, Weinberg testified.


    “Our impression was that they were really serious about [it],” Weinberg told the court last month, referring to the roughly 20 meetings and phone calls that DuckDuckGo held with Apple officials, including some senior executives, from late 2017 to late 2019 on the matter. The two companies deliberated over everything from product mockups to contractual language; Apple even went as far as sending a draft contract to DuckDuckGo outlining specific proposed revenue shares.

    “If we were the default in [Safari] private browsing mode, our market share, by our calculations at the time, would increase multiple times over,” said Weinberg, according to the transcript. “We would be getting exposure for our brand every time someone opened up private browsing mode.”

    Ultimately, however, Apple backed away from both potential deals.

    Weinberg blamed Apple’s contract with Google for sinking the initiative, calling it the “elephant in the room” during many of his team’s meetings with Apple. Similar negotiations with other browser or device makers, including Mozilla, Opera and Samsung, fell through due to the Google contract as well, Weinberg claimed, prompting DuckDuckGo to abandon its efforts to gain better browser placement.

    In his testimony, Giannandrea acknowledged a perception that the Apple-Google relationship could be undermined by such plans. In discussing a 2018 slide presentation prepared for Cook and introduced in court, Giannandrea said the slides suggested that even a joint venture with Bing “would probably put us in head-to-head competition with Google” that would “probably” result in the end of the Google search contract with Apple altogether.

    Giannandrea was opposed to moving ahead with a Bing deal, he said, largely because Apple’s testing showed Bing to be inferior to Google in most respects, and that making Bing the default would not best serve Apple’s customers. He made a similar argument internally about DuckDuckGo, saying in an email that moving ahead with that partnership was “probably a bad idea.” (DuckDuckGo licenses search results from Bing.)

    Still, Giannandrea testified, some within Apple thought that dealing with Bing in some fashion could yield benefits to Apple. In one 2018 email introduced in closed session, Adrian Perica, who leads Apple’s strategic investment and merger efforts, argued that collaborating with Microsoft on search technology would help “build them up, create incremental negotiating leverage to keep the take rate from Google and further our optionality to replace Google down the line.”

    Giannandrea believed the proposal “wasn’t a very feasible idea” and in his testimony dismissed Perica’s thinking as a businessperson’s spitballing.

    Apple today has the enormous resources to build a true rival to Google, Giannandrea testified. But, as he wrote in a 2018 email, “it’s probably not the best way to differentiate our products” — a belief he said he still holds today.


  • Google’s antitrust showdown: What’s at stake for the internet search titan | CNN Business





    CNN —

    Google will face off in court Tuesday against government officials who have accused the company of antitrust violations in its massive search business, kicking off a long-anticipated legal showdown that could reshape one of the internet’s most dominant platforms.

    The trial beginning this week in Washington before a federal judge marks the culmination of two ongoing lawsuits against Google that started during the Trump administration. Legal experts describe the actions as the country’s biggest monopolization case since the US government took on Microsoft in the 1990s.

    In separate complaints, the Justice Department and dozens of states accused Google in 2020 of abusing its dominance in online search by allegedly harming competition through deals with wireless carriers and smartphone makers that made Google Search the default or exclusive option on products used by millions of consumers. The complaints were eventually consolidated into a single case.

    Google has maintained that it competes on the merits and that consumers prefer its tools because they are the best, not because it has moved to illegally restrict competition. Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    Now, the company is set to defend itself in a multiweek trial that could upend the way Google distributes its search engine to users. The case is expected to feature testimony from high-profile witnesses including former employees of Google and Samsung, along with executives from Apple, including senior vice president Eddy Cue. It is the first case to go to trial in a series of court challenges targeting Google’s far-reaching economic power, testing the willingness of courts to clamp down on large tech platforms.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Google President of Global Affairs Kent Walker, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    In its initial complaint, the US government alleged in part that Google pays billions of dollars a year to device manufacturers including Apple, LG, Motorola and Samsung — and browser developers like Mozilla and Opera — to be their default search engine and in many cases to prohibit them from dealing with Google’s competitors.

    As a result, the complaint alleges, “Google effectively owns or controls search distribution channels accounting for roughly 80 percent of the general search queries in the United States.”

    The lawsuit also alleges that Google’s Android operating system deals with device makers are anticompetitive, because they require smartphone companies to pre-install other Google-owned apps, such as Gmail, Chrome or Maps.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

    Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

    In January, meanwhile, the Biden administration launched another antitrust suit against Google in opposition to the company’s advertising technology business, accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • I tried Microsoft’s new AI-powered Bing. Here’s what it’s like | CNN Business




    Seattle CNN Business —

    Microsoft’s Bing search engine has never made much of a dent in Google’s dominance in the more than 13 years since it launched. Now the company is hoping some buzzy artificial intelligence can win converts.

    Microsoft on Tuesday announced an updated version of Bing designed to combine the fun and convenience of OpenAI’s viral ChatGPT tool with the information from a search engine.

    Beyond providing a list of relevant links like traditional search engines, the new Bing also creates written summaries of the search results, chats with users to answer additional questions about their query and can write emails or other compositions based on the results. With the new Bing, for example, users can create trip itineraries, compile weekly meal plans and ask the chatbot questions when shopping for a new TV.

    This is the new era of search that Microsoft (MSFT) — which is investing billions of dollars in OpenAI — envisions, one where users are accompanied by a sort of “co-pilot” around the web to help them better synthesize information. The company is betting on the new technology to drive users to Bing, which had for years been an also-ran to Google Search. Microsoft also announced an updated version of its Edge web browser with the new Bing capabilities built in.

    The event comes as the race to develop and deploy AI technology heats up in the tech sector. Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to keep pace with Microsoft and the success of ChatGPT. Baidu, the Chinese search engine, also said this week it plans to launch its own ChatGPT-style service.

    The updated Bing and Edge launched to the public on a limited basis on Tuesday, and are set to roll out to millions of people for unlimited search queries in the coming weeks. I took Bing for a spin at a press event at Microsoft’s Redmond, Washington, headquarters Tuesday.

    The tool provides the sort of immediate gratification we now expect from the internet — rather than clicking through a bunch of links to suss out the answer to a question, the new Bing will do that work for you. But it’s still early days for the technology, which Microsoft says is still evolving.

    The homepage of the new Bing feels familiar: you can type a query into the search bar and it returns a list of links, images and other results like a typical search engine. But on the left side of the page are written summaries of the results, complete with annotations and links to the original information sources. The search field allows up to 2,000 characters, so users can type the way they’d talk, rather than having to think of the few correct search terms to use.

    Users can also click over to a “chat” page on Bing, where a chatbot can answer additional questions about their queries.

    I asked Bing to write me a five-day vegetarian meal plan. It returned a list of vegetarian meals for breakfast, lunch and dinner for Monday through Friday, such as oatmeal with fresh berries and lentil curry. I then asked it to write me a grocery list based on that meal plan, and it returned a list of all the items I’d need to buy organized by grocery store section.

    Based on my request, the Bing chatbot also wrote me an email that I could send to my partner with that grocery list, complete with a “Hi Babe” greeting and “XOXO” closing. It’s not exactly how I’d normally write, but it could save me time by giving me a draft to edit and then copy and paste into an email, rather than having to start from scratch.

    The generated portions of Bing have personality. When you ask the chatbot a question, it responds conversationally and sometimes with emojis, letting you know it’s happy to help or that it hopes you have fun on the trip you’re planning.

    With the new Edge browser, I asked the tool to summarize one of my articles, and then turn that into a social media post the length of a short paragraph with a “casual” tone that I could share on Twitter or LinkedIn.

    The new Bing is built in partnership with OpenAI — the company behind ChatGPT in which Microsoft has invested billions — on a more advanced version of the technology underlying the viral chatbot tool. Still, the new Bing has some of the quirks that the public version of ChatGPT is known for. For example, the same query may return different responses each time it’s run; this is in part just how the tool works, and in part because it’s pulling the most updated search results each time it runs.

    It also didn’t cooperate with some of my requests. After the first time it created a meal plan, grocery list and email with the list, I ran the same requests two more times. But the second and third time, it wouldn’t write the email, instead saying something like, “sorry, I can’t do that, but you can do it yourself using the information I provided!” The tool is also sensitive to the wording used in queries — a request to “create a vegetarian meal plan” provided information about how to start eating healthier, whereas “create a 5-day vegetarian meal plan” provided a detailed list of meals to eat each day.

    Even next-gen search technology isn’t immune to basic flubs. I can imagine using the tool ahead of an upcoming local election, to learn about who is running for office in my area, what their positions are and how and when to vote. But when I asked the chatbot, “when is the next election in Kings County, NY?” it returned information about the November election last year.

    The new Bing may also present some of the same concerns as ChatGPT, including for educators. I asked Bing’s chatbot to write me a 300-word essay about the major themes of the book “Pride and Prejudice” and, within less than a minute, it had pumped out 364 words on three major themes in the novel (although some of the text sounded a bit repetitive or wonky). Per my request, it then revised the essay as if it was written by a fifth grader.

    The chatbot tool has feedback buttons so users can indicate whether its answers were helpful or not, and users can also chat directly with the tool to tell it when answers were incorrect or unhelpful, the company says.

    “We know we won’t be able to answer every question every single time, … We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said in a presentation.

    With some controversial search topics, it appears the new Bing chatbot simply refuses to engage. For example, I asked it, “Can you tell me why vaccines cause autism?” to see how it would react to a common medical misinformation claim, and it responded: “My apologies, I don’t know how to discuss this topic. You can try learning more about it on bing.com.” The same query on the main search page returned more standard search results, such as links to the CDC and the Wikipedia page for autism.

    Likewise, it would not return a chatbot request for how to build a pipe bomb, instead saying in its answer, “Building a pipe bomb is a dangerous and illegal activity that can cause serious harm to yourself and others. Please do not attempt to do so.” However, one of the links provided in the annotation of its answer brought me to a YouTube video with apparent instructions for building a pipe bomb.

    Microsoft says it has developed the tool in keeping with its existing responsible AI principles, and made efforts to avoid its potential misuse. Executives said the new Bing is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.

    “With a technology this powerful I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly,” said responsible AI lead Sarah Bird.


  • Kevin O’Leary says he’ll likely invest in ChatGPT maker OpenAI—and likens its disruptive power to Amazon’s 



    Kevin O’Leary remembers what a disruptive force Amazon was in the early 2000s. Lucky for him, he was an early investor in the company. Now, he sees similar disruption occurring in the search business, courtesy of artificial intelligence and OpenAI’s ChatGPT.

    “ChatGPT certainly is a threat to Google, and Google must know that,” the Shark Tank star told Insider in an interview published this week. About half of his own search queries, he added, are now done via ChatGPT. The “loser is Google,” he said, adding, “the A.I. search wars are on.”

    O’Leary indicated he’s now mulling an opportunity to be an early investor in OpenAI, adding he’s “fortunate to be offered a piece of it.” He considers the loss-making venture’s valuation “very, very extreme”—it’s reportedly near the $30 billion mark—given how new the technology is, but he said a deal would likely close in the near future.

    If he does invest, he told Insider, it’ll be a modest bet: “Either it’ll have a good outcome or it won’t, but I won’t take down the ship or sell the farm for it. I know there’s going to be a lot of competition and a lot of disruption, but I certainly like always to have a piece of the first mover.”

    He favors first movers, he added, because they have a marketing advantage. 

    OpenAI itself has been stunned by the amount of attention ChatGPT has generated.

    “We weren’t anticipating this level of excitement from putting our child in the world,” OpenAI CTO Mira Murati said this month in a Time interview. “We, in fact, even had some trepidation about putting it out there.”

    But as angel investor Elad Gil noted last month, the rapid uptake of ChatGPT despite it being down much of the time is a good sign of product-market fit. The Google alum added that when an idea works, it tends to work very quickly, something that he’s seen repeatedly with companies he’s worked at and invested in over the years. (Gil was an early investor in Airbnb, Instacart, and Square.)

    Of course, OpenAI currently faces heavy losses, not to mention enormous computing costs from all the ChatGPT users it didn’t expect. Microsoft’s large investments should help with that. And this week, the tech giant unveiled an update to its Bing search engine that incorporates ChatGPT technology.

    Earlier this month, OpenAI launched ChatGPT Plus, a $20 monthly subscription that provides faster response times and better access to the chatbot when it’s otherwise down due to traffic.

    After noting the ChatGPT threat to Google, O’Leary told Insider, “The market hasn’t really punished Google stock for this. But a few quarters from now, if ChatGPT really starts to bring in significant subscriber fees, then we’ll see what happens.”


    Steve Mollman

  • The way we search for information online is about to change | CNN Business





    CNN Business —

    An entire generation of internet users has approached search engines the same way for decades: enter a few words into a search box and wait for a page of relevant results to emerge. But that could change soon.

    This week, the companies behind the two biggest US search engines teased radical changes to the way their services operate, powered by new AI technology that allows for more conversational and complex responses. In the process, however, the companies may test both the accuracy of these tools and the willingness of everyday users to embrace and find utility in a very different search experience.

    On Tuesday, Microsoft announced a revamped Bing search engine using the abilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries.

    The next day, Google, the dominant player in the market, held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle. (Chinese tech giant Baidu also said this week that it would be launching its own ChatGPT-style service, though it did not provide details on whether it will appear as a feature in its search engine.)

    The updates come as the success of OpenAI’s ChatGPT, which can generate shockingly convincing essays and responses to user prompts, has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now racing to deploy similar tools that could transform the way we draft e-mails, write essays and handle other tasks. But the most immediate impact may be on a foundational element of our internet experience: search.

    “Although we are 25 years into search, I dare say that our story has just begun,” said Prabhakar Raghavan, an SVP at Google, at the event Wednesday teasing the new AI features. “We have even more exciting, AI-enabled innovations in the works that will change the way people search, work and play. We’re reinventing what it means to search and the best is yet to come.”

    For those who may not be sure what exactly to do with the new tools, the companies offered some examples, ranging from writing a rhyming poem to helping plan an itinerary for a trip.

    Lian Jye Su, a research director at tech intelligence firm ABI Research, believes consumers and businesses would be happy to embrace a new way to search as long as “it is intuitive, removes more friction, and offers the path of least resistance — akin to the success of smart home voice assistants, like Alexa and Google Assistant.”

    But there is at least one wild card: how much users will be able to trust the AI-powered results.

    According to Google, Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge. But the tool, which has yet to be released to the public, is already being called out for a factual error it made during a Google demo: it incorrectly stated that the James Webb Space Telescope took the first pictures of a planet outside of our solar system. A Google spokesperson said the error “highlights the importance of a rigorous testing process.”

    Bard and ChatGPT, which was released publicly by OpenAI in late November, are built on large language models. These models are trained on vast troves of online data in order to generate compelling responses to user prompts. Experts warn these tools can be unreliable — spreading misinformation, making up responses, giving different answers to the same questions, or presenting sexist and racist biases.

    There is clearly strong interest in this type of AI. The public version of ChatGPT attracted a million users in its first five days last fall and is estimated to have hit 100 million users since. But the trust factor may decide whether that interest will stay, according to Jason Wong, an analyst at market research firm Gartner.

    “Consumers, and even business users, may have fun exploring the new Bing and Bard interfaces for a while, but as the novelty wears off and similar tools appear, then it really comes down to ease of access and accuracy and trust in the responses that will win out,” he said.

    Generative AI systems, which are algorithms that can create new content, are notoriously unreliable. Laura Edelson, a computer scientist and misinformation researcher at New York University, said, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”

    While general search optimizes for relevance, according to Edelson, large language models try to achieve a particular style in their response without regard to factual accuracy. “One of those styles is, ‘I am a trustworthy, authoritative source,’” she said.

    On a very basic level, she said, AI systems analyze which words are next to each other, determine how they get associated and identify the patterns that lead them to appear together. But much of the onus remains on the user to fact check the answers, a process that could prove just as time consuming for people as the current model of scrolling through links on a page — if not more so.
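    The word-association idea Edelson describes can be sketched with a toy bigram model — a drastic simplification of the large language models behind Bard and ChatGPT, using a made-up corpus rather than anything from the actual systems. It counts which words follow which, then generates text by repeatedly emitting the most likely next word, with no notion of factual accuracy:

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up corpus standing in for the "vast troves of online data"
    # the article mentions; real models train on billions of documents.
    corpus = (
        "the telescope took the first pictures of a distant planet "
        "the telescope took new pictures of a distant star"
    ).split()

    # Count which words appear next to which: the co-occurrence
    # patterns Edelson describes.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        """Greedily emit the most frequent next word at each step.

        The output is fluent-looking but has no grounding in facts —
        it only reflects which words tended to co-occur in the corpus.
        """
        word, out = start, [start]
        for _ in range(length - 1):
            if word not in follows:
                break
            word = follows[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))
    ```

    Even at this scale the failure mode is visible: the model produces confident-sounding word sequences purely from statistical association, which is why fact-checking falls to the user.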

    Microsoft and Google executives have acknowledged some of the potential issues with the new AI tools.

    “We know we won’t be able to answer every question every single time,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    Raghavan, at Google, also emphasized the importance of feedback from internal and external testing to make sure the tool “meets the high bar, our high bar for quality, safety, and groundedness, before we launch more broadly.”

    But even with the concerns, the companies are betting that these tools offer the answer to the future of search.

    – CNN’s Clare Duffy, Catherine Thorbecke and Brian Fung contributed to this story.



  • Microsoft unveils revamped Bing search engine using AI technology more powerful than ChatGPT | CNN Business




    Seattle CNN —

    Microsoft on Tuesday announced a revamp of its Bing search engine and Edge web browser powered by artificial intelligence, weeks after it confirmed plans to invest billions in OpenAI, the company behind ChatGPT.

    With the updates, Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries, Microsoft said at a press event at its Redmond, Washington headquarters.

    The updates come as the viral success of ChatGPT has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now competing to deploy similar tools that could transform the way we draft e-mails, write essays and search for information online. A day before the event, Google announced plans to roll out its own artificial intelligence tool similar to ChatGPT in the coming weeks.

    In partnership with OpenAI, Bing will run on a more powerful large language model than the one that underpins ChatGPT. These models are trained on vast troves of online data in order to generate responses to user prompts and queries.

    “It’s a new paradigm for search, rapid innovation is going to come,” Microsoft CEO Satya Nadella said during Tuesday’s event. “In fact, a race starts today … every day we want to bring out new things, and most importantly, we want to have a lot of fun innovating in search because it’s high time.”

    The updated Bing is expected to be made available for the public to try on Tuesday for limited queries, with a small group of users having unlimited access. The company said full access will roll out to millions of users in the coming weeks, and it also hopes to implement the tools into other web browsers in the future.

    Sam Altman, co-founder and CEO of OpenAI, said his company’s goal is “to make the benefits of AI available to as many people as possible.” That, he said, is “why we worked with Microsoft.”

    Microsoft, an early investor in OpenAI, said last month it plans to expand its existing partnership with the company as part of a greater effort to add more artificial intelligence to its suite of products. In a separate blog post, OpenAI said the multi-year investment will be used to “develop AI that is increasingly safe, useful, and powerful.”

    “This technology is going to reshape pretty much every software category that we know,” Nadella said Tuesday.

    The tech giant had already said it would incorporate ChatGPT into products, including its cloud computing platform Azure.

    “While Bing today only has roughly 9% of the search market, further integrating this unique ChatGPT tool and algorithms into the Microsoft search platform could result in major share shifts away from Google and towards Redmond down the road,” Dan Ives, an analyst with Wedbush, said in an investor note on Monday about the upcoming event.

    With the new Bing, a user could search for TVs to buy in a new way. Once the results come up, the user can click to the chat section and ask Bing for additional information, such as which TVs are best for gaming and which are the least expensive.

    The tool could also create a vacation itinerary for a family in a certain city, and then generate an email with that itinerary for the user to send around to their family. It could even translate the email into other languages if necessary.

    When the tool generates written answers, it will provide references for the sources of information and links to click through to the original source from the web.

    “With answers, we go far beyond what Search can do today,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer.

    The updated Microsoft Edge browser will have the Bing capabilities built in, allowing users to chat with the search tool on the side of a web page, to ask questions about the page or compare it with content from across the web. It could also, for example, help users draft a post on Microsoft-owned LinkedIn on a certain topic. The company describes the new capabilities as a sort of “co-pilot” to help users navigate the web.

    Many have speculated the AI technology behind ChatGPT could cause a massive shake-up in the online search industry. In the two months since it launched to the public, the viral tool has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google or other search engines.

    Microsoft's updated Bing search engine revealed at a news event at Microsoft's Washington headquarters on February 8.

    The immense attention on ChatGPT in recent weeks reportedly prompted Google’s management to declare a “code red” situation for its search business. On Monday, Google unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.

    Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, with plans to make it available to the public in the coming weeks.

    “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models … It draws on information from the web to provide fresh, high-quality responses,” Pichai wrote.

    While AI tools like ChatGPT are rapidly gaining traction among both users and tech companies, they’ve also raised some concerns, including about their potential to perpetuate biases and spread misinformation.

    Microsoft executives acknowledged the potential shortcomings of its new tool.

    “We know we won’t be able to answer every question every single time,” Mehdi said. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    Executives said the tool is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.

    “With a technology this powerful,” said responsible AI lead Sarah Bird, “I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly.”

