ReportWire

Tag: Wikipedia

  • An Unbothered Jimmy Wales Calls Grokipedia a ‘Cartoon Imitation’ of Wikipedia


    In our increasingly enshittified online experience, the last bastions of the Internet’s initial egalitarian promise shine like diamonds. These holdout Golden Era vestiges somehow remain useful and unadulterated by corporate greed, while under constant siege for their recalcitrance. The crown jewel of these stalwarts is Wikipedia. Sustained by a legion of volunteer editors and beg-a-thon donations since 2001, the humble open-source encyclopedia is generally regarded as our best effort yet to amass the sum of all human knowledge. Free, citation-filled, and perpetually self-auditing, it’s no wonder so many consider the online encyclopedia to be one of the few wonders of the digital world.

    Beyond an incalculable benefit to humans, this font of free information has also made model-training a whole lot easier for AI companies. But once Wikipedia-trained models began spitting out facts that comported with reality’s well-known liberal bias and pierced the industry’s echo chamber bubble, some were displeased. Cognitive dissonance now at the wheel, they declared Wikipedia yet another victim of the “woke mind virus” and set out to build their own Library of Alexandria. Leading the charge in this crusade is Elon Musk, who launched an AI-powered competitor, Grokipedia, last October.

    While speaking at India’s AI Impact Summit in New Delhi this week, Wikipedia co-founder and spokesperson Jimmy Wales was asked about the threat the site faced from Grokipedia and its ilk. Unbothered, he dismissed the xAI project as “a cartoon imitation of an encyclopedia.”

    Wales went on to champion the humans behind Wikipedia—and the mastery and due diligence they provide—as key ingredients to the site’s success.

    “Why do I go to Wikipedia? I go to Wikipedia because it’s human-vetted knowledge,” explained Wales. “We would not consider for a second today letting an AI just write Wikipedia articles because we know how bad they can be.”

Wales described the propensity for AI models to “hallucinate” erroneous, misleading, or tangential information as their primary disqualifying factor. And he’s not wrong. A 2025 OpenAI study showed that even its advanced models were still hallucinating at rates as high as 79% in some tests.

As Wales explained, these sorts of errors become even more common and apparent when AI is asked to delve ever deeper into a subject—one that may already be niche. Where AI models fail here, their human counterparts shine. Wales touted these subject-matter experts—the “obsessives”—as the best guards against inaccuracies and the best providers of a rich knowledge-seeking experience.

    “That sort of full, rich human context of understanding is actually quite important in terms of really understanding both what does the reader want and what does the reader need,” said Wales.

    If anything, Wales did Grokipedia a kindness by keeping the conversation hallucination-focused. Plenty of journalists and critics have already dug into the many controversies arising from Musk’s white nationalist, navel-gazing facsimile.

    Even with Wikipedia still being the universally agreed-upon ark of earthly info, a larger issue remains. We aren’t arguing over a shared reality anymore. With Grokipedia, a distinctly rival one has been created. And the more who use it, the further we get from ever fusing our two worlds back together.


    Justin Caffier


  • At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.


    Turning 25 amid an A.I. boom, Wikipedia is racing to protect traffic, volunteers and revenue without losing its mission. Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images

    Traffic to Wikipedia, the world’s largest online encyclopedia, naturally ebbs and flows with the rhythms of daily life—rising and falling with the school calendar, the news cycle or even the day of the week—making routine fluctuations unremarkable for a site that draws roughly 15 billion page views a month. But sustained declines tell a different story. Last October, the Wikimedia Foundation, the nonprofit that oversees Wikipedia, disclosed that human traffic to the site had fallen 8 percent in recent months as a growing number of users turned to A.I. search engines and chatbots for answers.

    “I don’t think that we’ve seen something like this happen in the last seven to eight years or so,” Marshall Miller, senior director of product at the Wikimedia Foundation, told Observer.

    Launched on Jan. 15, 2001, Wikipedia turns 25 today. This milestone comes at a pivotal point for the online encyclopedia, which is straddling a delicate line between fending off existential risks posed by A.I. and avoiding irrelevance as the technology transforms how people find and consume information.

    “It’s really this question of long-term sustainability,” Lane Becker, senior director of earned revenue at the Wikimedia Foundation, told Observer. “We’d like to make it at least another 25 years—and ideally much longer.”

While it’s difficult to attribute Wikipedia’s recent traffic declines to any single factor, it’s evident that the drop coincides with the emergence of A.I. search features, according to Miller. Chatbots such as ChatGPT and Perplexity often cite and link to Wikipedia, but because the information is already embedded in the A.I.-generated response, users are less likely to click through to the source, depriving the site of page views.

    Yet the spread of A.I.-generated content also underscores Wikipedia’s central role in the online information ecosystem. Wikipedia’s vast archive—more than 65 million articles across over 300 languages—plays a prominent role within A.I. tools, with the site’s data scraped by nearly all large language models (LLMs). “Yes, there is a decline in traffic to our sites, but there may well be more people getting Wikipedia knowledge than ever because of how much it’s being distributed through those platforms that are upstream of us,” said Miller.

    Surviving in the era of A.I.

    Wikipedia must find a way to stay financially and editorially viable as the internet changes. Declining page views not only mean that fewer visitors are likely to donate to the platform, threatening its main source of revenue, but also risk shrinking the community of volunteer editors who sustain it. Fewer contributors would mean slower content growth, ultimately leaving less material for LLMs to draw from.

    Metrics that track volunteer participation have already begun to slip, according to Miller. While noting that “it’s hard to parse out all the different reasons that this happens,” he conceded that the Foundation has “reason to believe that declines in page views will lead to declines in volunteer activity.”

    To maintain a steady pipeline of contributors, users must first become aware of the platform and understand its collaborative model. That makes proper attribution by A.I. tools essential, Miller said. Beyond simply linking to Wikipedia, surfacing metadata—such as when a page was last updated or how many editors contributed—could spur curiosity and encourage users to engage more deeply with the platform.

Tech companies are becoming aware of the value of keeping Wikipedia relevant. Over the past year, Microsoft, Mistral AI, Perplexity AI, Ecosia, Pleias and ProRata have joined Wikimedia Enterprise, a commercial product that allows corporations to pay for large-scale access and distribution of Wikipedia content. Google and Amazon have long been partners of Wikimedia Enterprise, which launched in 2021.

    The basic premise is that Wikimedia Enterprise customers can access content from Wikipedia at a higher volume and speed while helping sustain the platform’s mission. “I think there’s a growing understanding on the part of these A.I. companies about the significance of the Wikipedia dataset, both as it currently exists and also its need to exist in the future,” said Becker.

    Wikipedia is hardly alone in this shift. News organizations, including CNN, the Associated Press and The New York Times, have struck licensing deals with A.I. companies to supply editorial content in exchange for payment, while infrastructure providers like Cloudflare offer tools that allow websites to charge A.I. crawlers for access. Last month, the licensing nonprofit Creative Commons announced its support of a “pay-to-crawl” approach for managing A.I. bots.

    Preparing for an uncertain future

    Wikipedia itself is also adapting to a younger generation of internet users. In an effort to make editing Wikipedia more appealing, the platform is working to enhance its mobile edit features, reflecting the fact that younger audiences are far more likely to engage on smartphones than desktop computers.

    Younger users’ preference for social video platforms such as YouTube and TikTok has also pushed Wikipedia’s Future Audiences team—a division tasked with expanding readership—to experiment with video. The effort has already paid off, producing viral clips on topics ranging from Wikipedia’s most hotly disputed edits to the courtship dance of the black-footed albatross and Sino-Roman relations. The organization is also exploring a deeper presence on gaming platforms, another major draw for younger users.

    Evolving with the times also means integrating A.I. further within the platform. Wikipedia has introduced features such as Edit Check, which offers real-time feedback on whether a proposed edit fits a page, and is developing features like Tone Check to help ensure articles adhere to a neutral point of view.

    A.I.-generated content has also begun to seep onto the platform. As of August 2024, roughly 5 percent of newly created English articles on the site were produced with the help of A.I., according to a Princeton study. Seeing this as a problem, Wikipedia introduced a “speedy deletion” policy that allows editors to quickly remove content that shows clear signs of being A.I.-generated. Still, the community remains divided over whether using A.I. for tasks such as drafting articles is inherently problematic, said Miller. “There’s this active debate.”

    From streamlining editing to distributing its content ever more widely, Wikipedia is betting that A.I. can ultimately be an ally rather than an adversary. If managed carefully, the technology could help accelerate the encyclopedia’s mission over the next 25 years—as long as it doesn’t bring down the encyclopedia first.

    “Our whole thing is knowledge dissemination to anyone that wants it, anywhere that they want it,” said Becker. “If this is how people are going to learn things—and people are learning things and gaining value from the information that our community is able to bring forward—we absolutely want to find a way to be there and support it in ways that align with our values.”



    Alexandra Tremayne-Pengelly


  • Grokipedia copies Wikipedia, but omits references


    In the age of generative artificial intelligence and AI-assisted search engines, Wikipedia remains an information repository authored by humans.

    Elon Musk, billionaire and former advisor to President Donald Trump, sought to create an AI-powered alternative: Grokipedia. 

    “Grokipedia will exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy,” Musk posted on X the day after his site went live Oct. 27. 

Yet PolitiFact found Grokipedia’s articles are often almost entirely lifted from Wikipedia. And when the entries differ, Grokipedia’s information quality and sourcing are problematic and error-prone, making it a less reliable research tool.

    Musk said on an Oct. 31 episode of the “All-In” tech and business podcast that his team instructed his company’s chatbot, Grok, to go through the top 1 million Wikipedia articles and then “add, modify and delete.”

    “So that means research the rest of the internet, whatever is publicly available, and correct the Wikipedia articles, fix mistakes, but also add a lot more context,” he said on the podcast.

    Grokipedia articles often contain the text “Fact-checked by Grok.”

    PolitiFact reviewed Grokipedia articles and found that when they include language that’s different from what appeared on Wikipedia, the new content:

    • Is not supported by citations;

    • Does not provide references; or

    • Introduces misleading or opinionated claims.

    Grokipedia often also removes context from its articles. 

A sample of Grokipedia’s 885,279 articles reveals they are subject to an AI-related phenomenon similar to one we first saw in May, months before the tool’s unveiling, when Health and Human Services Secretary Robert F. Kennedy Jr. released a Make America Healthy Again report that contained several erroneous citations, including credits to sources that did not exist.

    Joseph Reagle, Northeastern University associate professor of communication studies, said Grokipedia misunderstands Wikipedia’s and AI’s strengths.

    “Wikipedia’s merits are that it is the result of a community of thousands of people diligently working to create high-quality content,” Reagle said, while AI is useful when it’s interactive and accepts pushback.

    Hundreds of thousands of volunteers worldwide contribute content to Wikipedia, guided by the platform’s editorial policies and guidelines.

    The Wikimedia Foundation, the nonprofit that operates Wikipedia, is aware of Grokipedia’s copying problem.

“Even Grokipedia needs Wikipedia to exist,” said Selena Deckelmann, chief product and technology officer at the Wikimedia Foundation, in a statement to PolitiFact. “Wikipedia’s content is open source by design; we expect it will be used in good faith to educate. This issue is especially urgent as platforms like Grokipedia increasingly draw on our articles, selectively extracting content — written by thousands of volunteers — and filtering it through opaque and unaccountable algorithms.”

    Entries are nearly identical, except for wrong or missing references

    We looked at Grokipedia articles covering various topics including science, music and economics. In many articles we reviewed, Grokipedia links to Wikipedia articles with this statement: “The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.” 

    That means Wikipedia’s licensing allows Grokipedia to copy, redistribute and adapt the content with an attribution. It also requires Grokipedia to give the same permissions for its adapted content. (There are some articles that don’t copy from Wikipedia and don’t feature this statement, such as the article for Joseph Stalin.)

    Grokipedia’s article structure is similar to Wikipedia’s, which features reference lists at the bottom. But in some instances, Grokipedia copies Wikipedia articles while omitting their citations and reference lists.

    Grokipedia’s article for “Monday,” for example, includes information about the day of the week’s etymology, related religious observances and cultural references. But it contains no citations other than to say it was adapted from Wikipedia.

    The Grokipedia article was a 96% match of Wikipedia’s “Monday” article, according to Copyscape, a plagiarism checker. The Wikipedia article, however, listed 22 references.
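Copyscape’s matching method is proprietary, but an overlap percentage of this kind can be approximated with Python’s standard library: `difflib.SequenceMatcher` measures how much of one text reappears, in order, in another. The snippet below is purely illustrative; the sample sentences are invented stand-ins, not the actual article text.

```python
from difflib import SequenceMatcher

def overlap_ratio(text_a: str, text_b: str) -> float:
    """Return the fraction of text_a that also appears, in order, in text_b."""
    matcher = SequenceMatcher(None, text_a, text_b, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(text_a)

# Hypothetical example: the second text copies the first and appends a clause.
wiki = "Monday is the day of the week between Sunday and Tuesday."
grok = "Monday is the day of the week between Sunday and Tuesday, per ISO 8601."
print(f"{overlap_ratio(wiki, grok):.0%} of the first text matches the second")
# prints: 100% of the first text matches the second
```

A commercial checker would also handle paraphrase and reordering, which this character-level ratio does not.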

    Sometimes Grokipedia botches citations. In the entry for “culminating point,” Grokipedia cited the wrong book chapter in which military theorist Carl von Clausewitz introduced the concept. The rest of the article text is copied from Wikipedia.

    Grokipedia and Wikipedia versions of “culminating point” (screenshots from Grokipedia and Wikipedia)

    One article that differs significantly from its Wikipedia counterpart is the entry for “Hello,” a song by British singer Adele. Multiple items in the Grokipedia reference list are Instagram reels that provide secondhand, unattributed information and commentary. Wikipedia’s standards say such user-generated content is “generally unacceptable as sources.”

    In the entry for the Canadian singer Feist, Grokipedia copied from Wikipedia but added a line saying her father died in May 2021. The citation led to Vice’s 2017 ranking of the 60 best Canadian indie rock songs, an article that doesn’t mention the death of Feist’s father, who was still alive that year. 

    Grokipedia lacks transparency on correcting errors

    PolitiFact found at least one instance when Grokipedia introduced misleading information. The Grokipedia and Wikipedia articles for “Nobel Prize in Physics” are largely the same, but one sentence Grokipedia added said, “Physics is traditionally the first award presented in the Nobel Prize ceremony.” It did not provide a citation, and it appears to be wrong: In at least the past few years, the Nobel Prize for Physiology or Medicine was awarded first.

“Unlike Grokipedia, which relies on rapid AI-generated content with limited transparency and oversight, Wikipedia’s processes are open to public review and rigorously document the sources behind every article,” Deckelmann said.

    Wikipedia allows anyone to contribute and edit articles, and ensures transparency by making the history of an article viewable. Some volunteers have advanced permissions and are equipped to address negative behavior on the platform. 

    On Grokipedia, registered users can suggest edits to published articles. But Grokipedia has no feature allowing readers to view what edits have been made. It is unclear what happens when there are errors — whether a human or Grok corrects them, how those changes are deliberated, and how long it takes to update pages.

    PolitiFact Researcher Caryn Baird contributed to this report.


  • Elon Musk’s Grokipedia Pushes Far-Right Talking Points


    On Monday, Elon Musk’s xAI startup launched Grokipedia, which the billionaire is pitching as an AI-generated alternative to the crowdsourced encyclopedia Wikipedia. Musk first announced the project in late September on his social media platform X, saying it would be “a massive improvement over Wikipedia,” and “a necessary step towards the xAI goal of understanding the Universe.”

    Musk said last week that he had delayed the launch of Grokipedia because his team needed “to do more work to purge out the propaganda.” When Grokipedia eventually dropped on Monday, WIRED was initially unable to access the website and received an automated message that it was blocked.

    When we finally got access to it, WIRED found that the online encyclopedia contained lengthy entries generated by AI. While many of the pages WIRED saw on launch day appeared fairly similar to Wikipedia in terms of tone and content, a number of notable Grokipedia entries denounced the mainstream media, highlighted conservative viewpoints, and sometimes perpetuated historical inaccuracies.

    The Grokipedia entry about the slavery of African Americans in the US includes a section outlining numerous “ideological justifications” made for slavery, including the “Shift from Necessary Evil to Positive Good.” The end of the entry focuses on criticisms of The 1619 Project, which it says incorrectly framed “slavery as the central engine of the nation’s political, economic, and cultural development.”

    Entries for more recent historical events put conservative perspectives at the center. When WIRED searched for “gay marriage” in Grokipedia, no entry popped up, but one of the on-screen suggestions was for “gay pornography” instead. This entry in Grokipedia falsely states that the proliferation of porn exacerbated the HIV/AIDS epidemic in the 1980s.

    “This marked the onset of what would become a devastating crisis disproportionately affecting gay male communities, where behaviors idealized in pornography—such as unprotected receptive anal intercourse and multiple anonymous partners—aligned directly with primary transmission routes, leading to rapid seroconversion rates,” the Grokipedia entry claims.

    xAI did not immediately return a request for comment.

    The Grokipedia entry for “transgender” includes two mentions of “transgenderism,” a term commonly used to denigrate trans people. The entry also refers to trans women as “biological males” who have “generated significant conflicts, primarily centered on risks to women’s safety, privacy, and sex-based protections established to mitigate male-perpetrated violence.” The opening section highlights social media as a potential “contagion” that is increasing the number of trans people.


    Reece Rogers


  • Elon Musk’s Version of Wikipedia Is Live. Here’s What the Difference Is


    Grokipedia, Elon Musk’s alternative to Wikipedia, sparked to life on Monday afternoon. Then it went dark again. Then it sparked to life again Monday evening. As of this writing, it is up. In terms of style, it looks like Wikipedia with its optional dark mode turned on. 

Based on a very brief review, it contains articles on most topics one would expect in an encyclopedia, professing to have just under 900,000 in total. The website Wikicount says there are about 7 million English Wikipedia articles.

There are gaps in Grokipedia, however. For instance, it does not currently appear to have a dedicated article about the Department of Government Efficiency (DOGE).

    Musk announced that his AI company, xAI, was building an online encyclopedia late last month, saying it would be a “massive improvement over Wikipedia.” The idea for Grokipedia was born during an appearance by Musk on the All-In Podcast. Host and fellow high net worth individual David Sacks asked Musk point blank to create it, noting that Wikipedia was “hyper-partisan” and full of “activists.” 

    Here are some comparisons between Wikipedia articles and Grokipedia articles. These are copied verbatim from the intros of articles with the footnotes and links removed for ease of reading: 

    Nick Fuentes

    Wikipedia:

    Nicholas Joseph Fuentes (born August 18, 1998) is an American far-right political commentator, activist and live streamer. He identifies as a Christian nationalist and part of the incel subculture. His views have been described as racist, white supremacist, misogynistic, homophobic, antisemitic, and Islamophobic. Furthermore, he has been described as a neo-Nazi and a Holocaust denier. His supporters are known as Groypers.

    Grokipedia:

    Nicholas Joseph Fuentes (born August 18, 1998) is an American political commentator, live streamer, and activist who hosts the nightly internet program America First and organizes the annual America First Political Action Conference (AFPAC). A self-professed traditional Catholic, Fuentes advocates for policies centered on national sovereignty, including a moratorium on immigration, withdrawal from foreign entanglements, and preservation of America’s historic European-descended majority. His rise began as a teenager supporting Donald Trump’s 2016 presidential campaign, after which he withdrew from Boston University amid threats following his attendance at the 2017 Unite the Right rally in Charlottesville, Virginia.

    Great Replacement

    Wikipedia:

    Title: “Great Replacement conspiracy theory”

    The Great Replacement (French: grand remplacement), also known as replacement theory or great replacement theory, is a debunked white nationalist far-right conspiracy theory coined by French author Renaud Camus. Camus’ theory states that, with the complicity or cooperation of “replacist” elites, the ethnic French and white European populations at large are being demographically and culturally replaced by non-white peoples—especially from Muslim-majority countries—through mass migration, demographic growth and a drop in the birth rate of white Europeans. Since then, similar claims have been advanced in other national contexts, notably in the United States. Mainstream scholars have dismissed these claims of a conspiracy of “replacist” elites as rooted in a misunderstanding of demographic statistics and premised upon an unscientific, racist worldview.

    Grokipedia:

    Title: “Great Replacement theory”

    The Great Replacement is a theory originated by French writer Renaud Camus in his 2011 essay Le Grand Remplacement, which contends that the historic populations of European nations are undergoing demographic substitution through sustained high levels of immigration from Africa and the Middle East, compounded by sub-replacement fertility rates among native Europeans (typically below 1.5 children per woman in most EU countries) and comparatively higher fertility among immigrant groups (often exceeding 2.0). Camus describes this process as an orchestrated “genocide by substitution,” driven by elite policies favoring multiculturalism over national continuity, resulting in the erosion of indigenous cultural dominance without violent conflict. Empirical underpinnings include United Nations projections indicating that net migration will account for nearly all population stability or growth in Europe amid native decline, with foreign-born residents comprising nearly 10% of the EU population by 2024 and contributing disproportionately to births—one-third or more in nations like Austria and Germany

    Greta Thunberg

    Wikipedia:

    Greta Tintin Eleonora Ernman Thunberg (Swedish: [ˈɡreːta ˈtʉːnbærj]; born 3 January 2003) is a Swedish activist best known for pressuring governments to address climate change and social issues. She gained global attention in 2018, at age 15, after starting a solo school strike outside the Swedish parliament, which inspired the worldwide Fridays for Future movement. She has since broadened her focus to include human rights and global justice, voicing support for Ukraine, Palestine, Armenia, and Western Sahara. In 2025, Thunberg twice joined a humanitarian flotilla bound for the Gaza Strip.

    Thunberg has been credited with sparking the “Greta effect”, influencing environmental awareness and youth engagement worldwide. She has received numerous honors, including inclusion in Time’s 100 Most Influential People and being named Person of the Year in 2019.

    Grokipedia:

    Greta Thunberg (born 3 January 2003) is a Swedish activist recognized for launching a solo school strike for climate action on 20 August 2018 outside the Swedish parliament in Stockholm, an action that catalyzed the international Fridays for Future movement involving millions of students protesting government inaction on climate change.

    Thunberg, diagnosed at age 11 or 12 with Asperger syndrome—a form of autism spectrum disorder—along with obsessive-compulsive disorder and selective mutism, has described these conditions as providing her with a focused “superpower” for advocacy. Her high-profile speeches at venues including the United Nations General Assembly and the World Economic Forum in Davos amplified calls for immediate emissions reductions and policy shifts, earning her Time magazine’s Person of the Year designation in 2019 as the youngest recipient. While credited with elevating youth engagement on environmental issues, Thunberg’s promotion of urgent, existential climate threats has drawn scrutiny for diverging from nuanced empirical assessments of climate risks and adaptation capacities, as well as for extending her activism into broader political arenas such as anti-capitalist and geopolitical protests.

    Overall, Grokipedia gives off the impression of a site where topics and people that Elon Musk likes or supports are presented without framings that cast any doubt on their validity, and those he dislikes are presented with criticism front-and-center.

As others have pointed out, some articles are strikingly similar to Wikipedia’s, and contain notes at the bottom saying they were adapted from Wikipedia under a ShareAlike 4.0 license, which would seem to indicate that those particular Grokipedia articles are also available to share freely. However, the URL for Grokipedia is at the .com top-level domain, not the .org domain like Wikipedia’s.

Grokipedia also mostly (or perhaps entirely) lacks photos and illustrations. It’s understandable that biographical articles don’t have portraits, but articles like “Tesseract” would benefit from clarifying illustrations and even animations, like on Wikipedia.

    Some Grokipedia articles are quite long and detailed—long past the point of general interest. For instance, the article for Gizmodo, while seemingly accurate after a brief scan, seems like it would benefit from a human editor. 

    Overall, the project seems very much like what it purports to be: a version of Wikipedia with articles written by Grok, a large language model that favors Elon Musk’s views. 

    Gizmodo reached out to xAI about all of this, asking for comment. That email received an immediate, three-word reply: “Legacy Media Lies.”  


    Mike Pearl


  • Wikipedia says traffic is falling due to AI search summaries and social video | TechCrunch


    Wikipedia is often described as the last good website on an internet increasingly filled with toxic social media and AI slop. But it seems the online encyclopedia is not completely immune to broader trends, with human pageviews falling 8% year-over-year, according to a new blog post from Marshall Miller of the Wikimedia Foundation.

    The foundation works to distinguish between traffic from humans and bots, and Miller writes that the decline “over the past few months” was revealed after an update to Wikipedia’s bot detection systems appeared to show that “much of the unusually high traffic for the period of May and June was coming from bots that were built to evade detection.”
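The reclassification Miller describes can be sketched in miniature: re-label logged requests with an updated bot heuristic, then recompute human pageviews year over year. Everything below (the `looks_like_bot` rule, the toy request logs, the thresholds) is hypothetical; Wikimedia’s actual detection relies on far richer signals than this.

```python
def looks_like_bot(request: dict) -> bool:
    # Stand-in heuristic for illustration only: flag self-identified bots
    # and implausibly fast clients.
    user_agent = request.get("user_agent", "").lower()
    return "bot" in user_agent or request.get("requests_per_minute", 0) > 100

# Toy logs: 2025 has clients that self-report as human browsers but
# request pages far faster than a person could.
log_2024 = [{"user_agent": "Mozilla/5.0", "requests_per_minute": 2}] * 100
log_2025 = (
    [{"user_agent": "Mozilla/5.0", "requests_per_minute": 2}] * 92
    + [{"user_agent": "Mozilla/5.0", "requests_per_minute": 500}] * 20
)

human_2024 = sum(not looks_like_bot(r) for r in log_2024)
human_2025 = sum(not looks_like_bot(r) for r in log_2025)
change = (human_2025 - human_2024) / human_2024
print(f"human pageviews: {human_2024} -> {human_2025} ({change:+.0%})")
# prints: human pageviews: 100 -> 92 (-8%)
```

The point of the sketch is the order of operations: until the heuristic improves, the evasive traffic inflates the apparently human count, and only reclassifying past months reveals the decline.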

    Why is traffic falling? Miller points to “the impact of generative AI and social media on how people seek information,” particularly as “search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours” and as “younger generations are seeking information on social video platforms rather than the open web.” (Google has disputed the claim that AI summaries reduce traffic from search.)

    Miller says the foundation welcomes “new ways for people to gain knowledge” and argues this doesn’t make Wikipedia any less important, since knowledge sourced from the encyclopedia is still reaching people even if they don’t visit the website. Wikipedia even experimented with AI summaries of its own, though it paused the effort after editors complained.

    But this shift does present risks, particularly if people are becoming less aware of where their information actually comes from. As Miller puts it, “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.” (Some of those volunteers are truly remarkable, reportedly disarming a gunman at a Wikipedia editors’ conference on Friday.)

    For that reason, he argues that AI, search, and social companies using content from Wikipedia “must encourage more visitors” to the website itself.

    And he says Wikipedia is taking steps of its own, for example by developing a new framework for attributing content from the encyclopedia. The organization also has two teams tasked with helping Wikipedia reach new readers, and it’s looking for volunteers to help.


    Miller also encourages readers to “support content integrity and content creation” more broadly.

    “When you search for information online, look for citations and click through to the original source material,” he writes. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”

    Anthony Ha

  • Wikipedia Is Getting Pretty Worried About AI

    Over at the official blog of the Wikipedia community, Marshall Miller untangled a recent mystery. “Around May 2025, we began observing unusually high amounts of apparently human traffic,” he wrote. Higher traffic would generally be good news for a volunteer-sourced platform that aspires to reach as many people as possible, but it would also be surprising: The rise of chatbots and the AI-ification of Google Search have left many big websites with fewer visitors. Maybe Wikipedia, like Reddit, is an exception?

    Nope! It was just bots:

    This [rise] led us to investigate and update our bot detection systems. We then used the new logic to reclassify our traffic data for March–August 2025, and found that much of the unusually high traffic for the period of May and June was coming from bots that were built to evade detection … after making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024.

    To be clear, these bots aren’t just vaguely inauthentic users or some incidental side effect of the general spamminess of the internet. In many cases, they’re bots working on behalf of AI firms, going undercover as humans to scrape Wikipedia for training or summarization. Miller got right to the point. “We welcome new ways for people to gain knowledge,” he wrote. “However, LLMs, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia.” Fewer real visits means fewer contributors and donors, and it’s easy to see how such a situation could send one of the great experiments of the web into a death spiral.

    Arguments like this are intuitive and easy to make, and you’ll hear them beyond the ecosystem of the web: AI models ingest a lot of material, often without clear permission, and then offer it back to consumers in a form that’s often directly competitive with the people or companies that provided it in the first place. Wikipedia’s authority here is bolstered by how it isn’t trying to make money — it’s run by a foundation, not an established commercial entity that feels threatened by a new one — but also by its unique position. It was founded as a stand-alone reference resource before settling ambivalently into a new role: A site that people mostly just found through Google but in greater numbers than ever. With the rise of LLMs, Wikipedia became important in a new way as a uniquely large, diverse, well-curated data set about the world; in return, AI platforms are now effectively keeping users away from Wikipedia even as they explicitly use and reference its materials.

    Here’s an example: Let’s say you’re reading this article and become curious about Wikipedia itself — its early history, the wildly divergent opinions of its original founders, its funding, etc. Unless you’ve been paying attention to this stuff for decades, it may feel as if it’s always been there. Surely, there’s more to it than that, right? So you ask Google, perhaps as a shortcut for getting to a Wikipedia page, and Google uses AI to generate a blurb that looks like this:

    This is an AI Overview that summarizes, among other things, Wikipedia. Formally, it’s pretty close to an encyclopedia article. With a few formatting differences — notice the bullet-point AI-ese — it hits a lot of the same points as Wikipedia’s article about itself. It’s a bit shorter than the top section of the official article and contains far fewer details. It’s fine! But it’s a summary of a summary.

    The next option you encounter still isn’t Wikipedia’s article — that shows up further down. It’s a prompt to “Dive deeper in AI Mode.” If you do that, you see this:

    It’s another summary, this time with a bit of commentary. (Also: If Wikipedia is “generally not considered a reliable source itself because it is a tertiary source that synthesizes information from other places,” then what does that make a chatbot?) There are links in the form of footnotes, but as Miller’s post suggests, people aren’t really clicking them.

    Google’s treatment of Wikipedia’s autobiography is about as pure an example as you’ll see of AI companies’ effective relationship to the web (and maybe much of the world) around them as they build strange, complicated, but often compelling products and deploy them to hundreds of millions of people. To these companies, it’s a resource to be consumed, processed, and then turned into a product that attempts to render everything before it obsolete — or at least to bury it under a heaping pile of its own output.

    John Herrman

  • AI Is Killing Wikipedia’s Human Traffic

    The Wikimedia Foundation, the nonprofit that runs Wikipedia, says that shifts in how people search for information online are cutting into its human traffic.

    In a blog post published today, Marshall Miller, the foundation’s senior director of product, said Wikipedia’s human visits are down about 8% over the past few months compared to the same period in 2024.

    The decline was revealed after the Foundation revised how it distinguishes between human and bot traffic, something it does to better understand real readership and enforce limits on how third-party bots scrape its data for commercial search and AI tools. The update came after Wikimedia noticed what looked like a spike in human traffic from Brazil, which turned out to be mostly bots.

    “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” Miller wrote.

    He wrote that the drop wasn’t exactly a surprise. Search engines are increasingly using AI to surface answers directly on results pages instead of linking to external sites like Wikipedia. At the same time, younger users are turning to platforms like YouTube and TikTok for information.

    Unfortunately, these shifts could lead to negative ripple effects for Wikipedia. With fewer visits, Wikipedia’s volunteer base, the community that writes and edits its content, could shrink, Miller warned. And with less traffic, individual donations that keep the nonprofit running could also decline.

    The situation is ironic, Miller noted, because almost all large language models (LLMs) rely on Wikipedia’s datasets for training. Yet in doing so, they may be hurting one of their most trusted sources of reliable information. Because of this, Wikimedia is urging LLMs, AI chatbots, search engines, and social platforms that use Wikipedia content to help drive more traffic back to the site.

    In order to combat the issue, the nonprofit said it’s working to ensure third parties can access and reuse Wikipedia content responsibly and at scale by enforcing its policies and developing clearer attribution standards. It’s also experimenting with new ways to reach younger audiences on platforms like YouTube, TikTok, Roblox, and Instagram, via videos, games, and chatbots.

    Wikimedia itself isn’t anti-AI. Just this month, the Foundation launched the Wikidata Embedding Project, a new resource that converted roughly 120 million open data points in Wikidata into a format that’s easier for large language models to use. The goal is to give AI systems access to free, higher-quality data and improve the accuracy of their answers.

    Bruce Gil

  • Wikimedia says AI bots and summaries are hurting Wikipedia’s traffic

    Wikimedia is sounding the alarm on the impact AI is having on reliable knowledge and information on the internet. In a blog post, Wikimedia’s senior director of product, Marshall Miller, lays out the impact on page views that the foundation attributes to the rise of LLM chatbots and AI-generated summaries in search results.

    “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” said Miller.

    The foundation has increasingly faced bot traffic whose sophistication has made it difficult to parse human visitors from bots. After improving bot detection to yield more accurate metrics, Wikipedia’s data shows an 8 percent drop in page views year over year.

    Miller paints a picture of an existential risk greater than that of a website’s page views. He posits that if Wikipedia’s traffic continues to decline, it could threaten what he calls “the only site of its scale with standards of verifiability, neutrality and transparency powering information all over the internet.” He warns that fewer visits to Wikipedia would lead to fewer volunteers, less funding and ultimately less reliable content.

    The solution he offers is for LLMs and search results to be more intentional in giving users the opportunity to interact directly with the source for the information being presented. “For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources,” Miller writes.

    Earlier this summer, Wikipedia floated the idea of AI-generated summaries that would appear at the top of articles. The project was paused before it began after fierce backlash from the site’s volunteer editors.

    Andre Revilla

  • Why Elon Musk Says He’s Starting ‘Grokipedia’

    Elon Musk says his company xAI is planning an alternative to Wikipedia. He first mentioned the so-called “Grokipedia” after Wikipedia co-founder Larry Sanger, who has been critical of the platform he helped to build, joined former Fox News host and conservative commentator Tucker Carlson on his podcast. 

    “We are building Grokipedia @xAI,” Musk posted Tuesday on social media platform X. “Will be a massive improvement over Wikipedia. Frankly, it is a necessary step towards the xAI goal of understanding the Universe.”

    Carlson posted his podcast episode with Sanger to X on Monday. During the over 90-minute conversation, Sanger inflamed the far right by listing sources that Wikipedia restricts from use as citations, as well as sources that have been “fully greenlit.”

    “The blacklisted sources are Breitbart, Daily Caller, Epoch Times, Fox News, New York Post, The Federalist, so you can’t use those as sources on Wikipedia,” Sanger told Carlson.

    The problem with this, as The Daily Beast noted, is that only Breitbart is officially blacklisted, whereas the others are considered either “generally unreliable” or “deprecated,” meaning they can still be used for some citation purposes, such as uncontroversial self-descriptions. Furthermore, Wikipedia considers Fox News’ reporting on politics and science to be “generally unreliable,” but categorizes its other reporting as “no consensus,” meaning it is marginally reliable. It’s also worth noting that Wikipedia considers some left-leaning media companies, such as the aforementioned Daily Beast and HuffPost Politics, to be only marginally reliable.

    Even so, the episode set off a wave of conservative backlash on X. Musk’s PayPal mafia colleague and current Trump administration AI and crypto czar David Sacks called Wikipedia “hopelessly biased.” Social Capital founder and CEO Chamath Palihapitiya also chimed in on X, describing Wikipedia as a “massive psy-op.”

    For his part, Musk has been critical of Wikipedia for years, accusing it of bias, questioning its leadership, and calling for its defunding. The defund post on X followed shortly after another in which Musk referred to Wikipedia as “an extension of legacy media propaganda,” because the website pointed out that his gestures during Trump’s inauguration had been compared to Nazi salutes. 

    “I think Elon is unhappy that Wikipedia is not for sale,” Wikipedia’s other co-founder Jimmy Wales posted on X at the time. “I hope his campaign to defund us results in lots of donations from people who care about the truth. If Elon wanted to help, he’d be encouraging kind and thoughtful intellectual people he agrees with to engage.”

    Musk didn’t share many details about what a so-called Grokipedia might entail. But he did call on X users to “join @xAI and help build Grokipedia, an open source knowledge repository that is vastly better than Wikipedia,” adding, “This will be available to the public with no limits on use.”

    After Musk announced his intentions to launch his own version of Wikipedia, powered by his controversial chatbot, Grok, Sanger expressed his own reservations about the plan.

    “Let’s hope it won’t be as biased as Grok itself,” he wrote on X.

    Chloe Aiello

  • Elon Musk’s Wikipedia Competitor Is Going to Be a Disaster

    Elon Musk has long complained about Wikipedia, the crowd-sourced encyclopedia that’s considered a crown jewel of the internet. And now it seems like the billionaire is finally going to launch a competitor.

    “We are building Grokipedia @xAI,” Musk tweeted Tuesday. “Will be a massive improvement over Wikipedia. Frankly, it is a necessary step towards the xAI goal of understanding the Universe.”

    The Tesla CEO has previously insisted that Wikipedia is too “woke” and wants to offer an alternative encyclopedia with more right-wing “facts” about the world. Musk has referred to Wikipedia as “Wokipedia” at least half a dozen times in recent years.

    Musk, who spent over $270 million to get President Donald Trump elected, launched his artificial intelligence company xAI in 2023 and touts his AI chatbot Grok as a superior product. But Grok often presents facts about the world that Musk doesn’t agree with. And his attempts to tinker with Grok to become more right-wing have seen mixed success.

    Two high-profile incidents with Grok were embarrassments for Musk: the time Grok randomly responded to queries with unrelated conspiracy theories about white farmers being murdered in South Africa, and, a couple of months later, when Grok went full Nazi, praising Hitler and endorsing the idea of rounding up Jews in concentration camps.

    Musk never took the blame for making Grok so buggy, but both incidents came after the CEO complained on X that his AI chatbot wasn’t responding to factual questions the way he wanted. Grokipedia will almost certainly run into similar limitations.

    Musk and xAI haven’t released any information about how Grokipedia will operate, including whether it will be 100% AI-generated content. It’s also unclear whether Grokipedia will have dedicated URLs for various topics that anyone can visit or if it will be some kind of modified version of Grok that spits out answers to various questions.

    “Join @xAI and help build Grokipedia, an open source knowledge repository that is vastly better than Wikipedia! This will be available to the public with no limits on use,” Musk tweeted.

    If the user interface is just a chatbot like Grok, it’s unclear how that would be different from the Grok that now exists. xAI didn’t respond to questions on Tuesday about how Grokipedia would work, nor when it would launch.

    Musk also quote-tweeted Larry Sanger, a co-founder of Wikipedia who on Tuesday shared nine changes he wanted to see the dominant online encyclopedia adopt:

    1. End decision-making by “consensus”
    2. Enable competing articles
    3. Abolish source blacklists
    4. Revive the original neutrality policy
    5. Repeal “Ignore All Rules”
    6. Reveal who Wikipedia’s leaders are
    7. Let the public rate articles
    8. End permanent blocking
    9. Adopt a legislative process

    Musk called them “good suggestions.” Sanger left Wikipedia back in 2002 and hasn’t had any formal involvement with it since. He’s been a critic of the project for decades and recently appeared on Tucker Carlson’s show to whine about how it’s biased against conservatives.

    The magic of Wikipedia is that anyone can contribute information and cite a source, and the site is policed by contributors and editors who mostly try to keep things as objective and factual as possible while relying on reliable sources. Generative AI tools like Grok create sentences from their training data, and it’s difficult to tinker with the weights to prioritize a right-wing view of the world without going full Nazi. We’ve seen that play out in real time, twice now, at scale, all thanks to Musk.

    There are a lot of big questions that haven’t been answered about what Grokipedia will look like. But this wouldn’t be the first time conservatives have tried to launch their own Wikipedia competitor. Conservapedia was launched in 2006 and is widely regarded as a joke by anyone who tries to wade through its ridiculous articles.

    Even Conservapedia’s right-wing bias doesn’t seem to treat Musk very well, as you can see from this excerpt taken from his page at the site:

    Musk apparently does not hire conservatives in key positions, and is a cheerleader for giving foreigners top jobs in America. He wants expanded use of visas to import foreigners, and most of his Tesla cars are made in China. This is contrary to the America First position of MAGA supporters. Personally, Musk fathers many children without being a daily father to them.

    No wonder Musk is trying to build his own Wikipedia. Even the right-wing copycats don’t cut him much slack. At least Conservapedia did him the favor of not mentioning those two Nazi-style salutes on the day Trump was inaugurated.

    Matt Novak

  • Jonathan Majors’s Interview, Wikipedia Plagiarism, and Apologies

    Van Lathan and Rachel Lindsay start today’s episode by bringing back a previous topic to talk about the disappointing update and regrets (02:48). They give their take on the internet’s reaction to Druski’s Omega Psi Phi–inspired skit (33:25) and Jonathan Majors’s interview (49:35). They are then joined by Molly White—researcher, writer, and Wikipedia editor—to give us more insight into the Neri Oxman plagiarism accusations (01:30:56).

    Hosts: Van Lathan and Rachel Lindsay
    Producers: Donnie Beacham Jr. and Ashleigh Smith
    Additional Production: Aleya Zenieris

    Subscribe: Spotify / Apple Podcasts / Stitcher

    Van Lathan

  • ChatGPT Tops Wikipedia's 25 Most Visited Pages in 2023 | Entrepreneur

    The list of Wikipedia‘s top 25 most-viewed pages in 2023 is out — charting the curiosity of the internet and serving as a barometer of the world’s shared interests and concerns.

    OpenAI‘s ChatGPT took the spotlight with an impressive 49.4 million page views (out of more than 84 billion total views), according to the nonprofit Wikimedia Foundation.

    The wildly popular AI-driven chatbot set a new record for user base growth this year, attracting a whopping 100 million active users in January alone. The triumph of ChatGPT underlines a broader tech revolution, with companies investing billions into AI development and chip-making to power these future innovations.

    Related: Wikipedia Founder Says X Is ‘Overrun By Trolls and Lunatics’ — and Reveals How He Responded When Elon Musk Asked Him a Disturbing Question

    Earlier this year, OpenAI CEO Sam Altman told ABC News that AI might be “the greatest technology humanity has yet developed,” but that it also comes with real dangers. “We’ve got to be careful here,” he said. “I think people should be happy that we are a little bit scared of this.”

    Wikipedia’s list also reflected the substantial cultural and digital impact of Indian audiences, CNN reported. Among the top five were entries related to the 2023 Cricket World Cup and the Indian Premier League, resonating with the sport’s massive fan base in India. Indian cinema made an impact too, with Bollywood action movies Jawan and Pathaan outperforming American blockbusters such as Barbie and Avatar: The Way of Water in Wikipedia page views.

    Not to be overshadowed, subjects on influential personalities continued to captivate readers, from Taylor Swift‘s latest music conquests to Elon Musk‘s headline-making maneuvers, revealing the evergreen interest in celebrities and icons. The list was peppered with sports icons, acclaimed movies and pivotal global events, with the Russian invasion of Ukraine and controversial figure Andrew Tate rounding out the entries.

    Related: The World Is Splitting Between Those Who Use ChatGPT to Get Better, Smarter, Richer — and Everyone Else

    Here’s the list of 25, ranked by number of page views:

    1. ChatGPT, 49,490,406 page views
    2. Deaths in 2023, 42,666,860
    3. 2023 Cricket World Cup, 38,171,653
    4. Indian Premier League, 32,012,810
    5. Oppenheimer (film), 28,348,248
    6. Cricket World Cup, 25,961,417
    7. J. Robert Oppenheimer, 25,672,469
    8. Jawan (film), 21,791,126
    9. 2023 Indian Premier League, 20,694,974
    10. Pathaan (film), 19,932,509
    11. The Last of Us (TV series), 19,791,789
    12. Taylor Swift, 19,418,385
    13. Barbie (film), 18,051,077
    14. Cristiano Ronaldo, 17,492,537
    15. Lionel Messi, 16,623,630
    16. Premier League, 16,604,669
    17. Matthew Perry, 16,454,666
    18. United States, 16,240,461
    19. Elon Musk, 14,370,395
    20. Avatar: The Way of Water, 14,303,116
    21. India, 13,850,178
    22. Lisa Marie Presley, 13,764,007
    23. Guardians of the Galaxy Vol. 3, 13,392,917
    24. Russian invasion of Ukraine, 12,798,866
    25. Andrew Tate, 12,728,616

    Amanda Breen

  • Yoogli Announces Launch of “The World’s Research Library”

    Powered by Advanced Search Technology Used With Google and Other Search Engines to Find More Relevant Search Results

    Press Release

    updated: Apr 11, 2017

    Yoogli today announced the launch of “The World’s Research Library” powered by an advanced search technology that may be used with Google, Bing and Yahoo! to find more relevant search results.

    Joe Kerwin, co-founder and CEO said, “If you like Wikipedia you will love Yoogli. Our patented search technology enables users to discover deeper knowledge than is found in Wikipedia. Yoogli is 1,000 times larger than Wikipedia and includes databases from colleges and universities and the Library of Congress.”

    Dave Taylor, Chief Technology Officer, commented, “Yoogli is a technologically advanced search technology that matches complex queries with more exacting results. It is a ‘research engine’ where Google is a ‘popularity engine.’ Yoogli is able to correctly understand and analyze complete pages of text, documents, and URLs, and deliver more targeted and related results than keyword search. It is able to drill down deeper into a specific result continuously refining the desired result for the user.”

    Yoogli is a FREE research tool for high school and college students as well as research professionals.

    Contact: Rick Farano at rfarano@yoogli.com

    Source: Yoogli, Inc.
