    To The Moon Times
Politics

    Wikipedia Bans Large Language Models From Article Creation

March 26, 2026
    Quick Summary: Wikipedia has banned the use of large language models to write or rewrite articles, citing concerns over verifiability and reliable sourcing.

Wikipedia has updated its editorial guidelines to prohibit the use of large language models (LLMs) for generating or rewriting article content. The change reflects mounting concern within the Wikipedia community that AI-produced text frequently conflicts with the platform’s core standards, particularly verifiability and reliable sourcing. The updated guidelines state directly that text generated by LLMs often violates several of Wikipedia’s foundational content policies, and the prohibition applies broadly to both article creation and revision.

    The new policy does permit a narrow range of AI-assisted tasks. Editors may use AI tools to suggest basic copy edits to their own writing, as long as the system does not introduce new information, though editors are advised to review such suggestions carefully. AI tools may also be used to translate articles from other language editions into English, provided editors verify the accuracy of the source text. These exceptions are limited and clearly defined within the updated guidelines.

    Although the policy does not specify direct penalties for using AI-generated content, Wikipedia’s existing guidelines on disruptive editing still apply. Repeated misuse can constitute a pattern of disruptive editing, potentially resulting in a block or ban. Editors do have a path to reinstate their accounts through an appeal process, which may involve agreement from the blocking administrator, an override by other administrators, or in rare cases, an appeal to the Arbitration Committee.

    Emily M. Bender, a professor of linguistics at the University of Washington, told Decrypt that some AI applications in editing tools can be reasonable, such as spell checkers or grammar checkers. However, she said the line becomes problematic when systems move from correcting text to altering or generating content. She noted that large language models lack the accountability that human contributors bring to collaborative knowledge projects. “Using large language models to produce synthetic text, it is a fundamental property of these systems that there is no accountability, no connection to what someone believes or stands behind,” she said.

    Bender also warned that widespread use of AI-generated edits could damage Wikipedia’s reputation. She argued that editors taking shortcuts to produce content that merely resembles a Wikipedia article degrades the overall value of the site. Her comments underscore a broader concern that AI tools, even when producing plausible-sounding text, do not operate from belief or accountability in the way human editors do.

    Joseph Reagle, associate professor of communication studies at Northeastern University and a researcher of Wikipedia’s culture and governance, said the community’s response reflects longstanding concerns about accuracy. He pointed to AI limitations such as hallucinated claims and fabricated sources as key reasons why Wikipedia remains wary of AI-generated prose. Reagle also noted that many large language models have been trained on Wikipedia content, adding a layer of complexity to the relationship between the platform and AI developers.

In October, the Wikimedia Foundation reported that human visits to Wikipedia fell approximately 8% year over year, as search engines and chatbots increasingly provide answers directly on their own platforms rather than directing users to the site. In January, the Wikimedia Foundation announced licensing agreements with AI companies including Microsoft, Google, Amazon, and Meta, permitting them to use Wikipedia material through its Enterprise product. Reagle noted that despite these agreements, many Wikipedia editors remain uncomfortable with services that draw on community-produced content while, in his view, sending an influx of low-quality AI material back onto the platform.

    The updated policy also cautions editors against relying solely on writing style to identify AI-generated contributions. Instead, editors are directed to assess whether the content complies with Wikipedia’s core policies and to consider the contributor’s recent editing history. The policy acknowledges that some human editors may naturally write in ways that resemble LLM output, and states that stylistic or linguistic signs alone are insufficient to justify sanctions against a contributor.

    Originally reported by Decrypt.

    ai-generated-content artificial-intelligence content-moderation emily-bender joseph-reagle large-language-models wikimedia-foundation wikipedia


    © 2026 To The Moon Times.

