[1]: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
"Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated."
Lots of those points seem to converge on the same idea, which seems like a good balance. It's the language itself that is problematic, not how the text came to be, so it makes sense to target the language of the text entirely.
Hopefully those guidelines make all text on Wikipedia better, not just LLM-produced text, because they seem like generally good guidelines even outside the context of LLMs.
"Peacock example:
Bob Dylan is the defining figure of the 1960s counterculture and a brilliant songwriter.
Just the facts:
Dylan was included in Time's 100: The Most Important People of the Century, in which he was called "master poet, caustic social critic and intrepid, guiding spirit of the counterculture generation". By the mid-1970s, his songs had been covered by hundreds of other artists."
[1]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style
[2]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Word...
From my experience with LLMs, that's a great observation.
Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).
These take in vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces that contain meaningful semantic features.
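For anyone who hasn't seen that setup, here is a minimal PyTorch sketch of the idea; the patch size, band count, layer sizes, and names are made-up assumptions, and real geospatial foundation models are far larger (usually ViT-style masked autoencoders), but the embedding bottleneck is the part the benchmarks probe:

```python
import torch
import torch.nn as nn

class TinySatAutoencoder(nn.Module):
    """Illustrative only: compress multispectral patches into an embedding,
    then reconstruct them. Real geospatial foundation models are much larger
    and typically use transformer-based masked autoencoding."""

    def __init__(self, in_bands: int = 12, embed_dim: int = 128):
        super().__init__()
        # Encoder: 64x64 patch with `in_bands` spectral bands -> embedding vector
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),        # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, embed_dim),
        )
        # Decoder: embedding vector -> reconstructed patch
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),       # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, in_bands, kernel_size=4, stride=2, padding=1), # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)             # the "embedding space" mentioned above
        return self.decoder(z), z

model = TinySatAutoencoder()
patches = torch.randn(8, 12, 64, 64)    # batch of fake 12-band 64x64 patches
recon, embedding = model(patches)
loss = nn.functional.mse_loss(recon, patches)  # reconstruction objective
```

The downstream tasks (segmentation, classification) are then trained on `embedding` rather than on the raw pixels.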
The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.
There's also an observation from one of the authors of Major-TOM, which also provides satellite input data for training models, that the scaling law does not seem to hold for geospatial foundation models: more data does not seem to result in better models.
My (completely unsupported) theory on why that is: unlike writing or coding, with satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and has been proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, and so on. As I see it, the currently used frameworks do not support that very well.
But I'd be curious how others see this, who might be more knowledgeable in the area.
<https://www.newyorker.com/tech/annals-of-technology/chatgpt-...>
I had a bad experience at a shitty airport, went to Google Maps to leave a bad review, and found that its rating was 4.7 from many thousands of people. Knowing that the airport is run by a corrupt government, I started reading those super-positive reviews and the older reviews by the same reviewers. People who could barely manage a few coherent sentences of English are now writing multiple paragraphs about the history and vital importance of that airport in that region.
Reading the first section, "Undue emphasis on significance", those fake reviews are all I can think of.
[0]: https://ammil.industries/signs-of-ai-writing-a-vale-ruleset/
[1]: https://vale.sh/
_sigh_ Is it though, Claude, is it really?
I'm thinking more about startups for fine-tuning.
I can totally see someone taking that page and throwing it into whatever bot and going "Make up a comprehensive style guide that does the opposite of whatever is mentioned here".
Curious, what are the signs that this particular page has been written by an AI?
I’m not saying it wasn’t; I’m probably missing something and wondering what to look for.
https://arxiv.org/abs/2509.23233
I wonder if something more came out of that.
Either way, I think that generation of article text is the least useful and interesting way to use AI on Wikipedia. It's much better to do things like this paper did.
I think the biggest opportunity is building a knowledge graph based on Wikipedia and then checking against the graph when new edits are made. Detect any new assertions in the edit, check for conflicts against the graph, and bring up a warning along with a link to all the pages on Wikipedia that the new edit is contradicting. If the new edit is bad, it shows the editor why with citations, and if the new edit is correcting something that Wikipedia currently gets incorrect, then it shows all the other places that also need to be corrected.
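A toy version of that check could look something like the sketch below; the triple format, the `extract_triples` placeholder, and the sample data are all hypothetical simplifications (real assertion extraction from free-text edits is itself a hard NLP problem):

```python
# Minimal sketch of checking a new edit against an existing fact graph.
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str

# Existing knowledge graph: (subject, predicate) -> (recorded value, pages citing it)
graph = {
    ("Eiffel Tower", "completion_year"): ("1889", ["Eiffel Tower", "Paris", "Gustave Eiffel"]),
}

def extract_triples(edit_text: str) -> list:
    """Placeholder: a real system would use an information-extraction model here."""
    if "completed in 1887" in edit_text:
        return [Triple("Eiffel Tower", "completion_year", "1887")]
    return []

def check_edit(edit_text: str) -> list:
    """Return warnings for any new assertion that conflicts with the graph."""
    warnings = []
    for t in extract_triples(edit_text):
        known = graph.get((t.subject, t.predicate))
        if known and known[0] != t.obj:
            value, pages = known
            warnings.append(
                f"'{t.subject} {t.predicate} {t.obj}' conflicts with recorded value "
                f"'{value}', used on pages: {', '.join(pages)}"
            )
    return warnings

print(check_edit("The Eiffel Tower was completed in 1887."))
```

Whether the warning points at a bad edit or at stale pages elsewhere is then a call for the human editor, as described above.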
https://www.reddit.com/r/LocalLLaMA/comments/1eqohpm/if_some...
Sounds pretty relevant
This works because GPT 5.x actually uses web search properly.
As a technique though, never ask an LLM to find errors. Ask it to either find errors or verify that there are no errors. That way it can more easily answer without hallucinating.
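Roughly, the difference in framing looks like this; the prompt wording and the toy text are just placeholders, not quotes from any particular guide, and you would send the prompt through whatever LLM client you actually use:

```python
# Toy illustration of biased vs. neutral error-checking prompts.
text_to_check = "The Eiffel Tower was completed in 1887 and is 330 metres tall."

# Biased framing: presupposes that errors exist, which nudges the model
# to invent some even when the text is fine.
biased_prompt = f"Find the errors in the following text:\n\n{text_to_check}"

# Neutral framing: gives the model an easy, legitimate way to answer
# "I could not verify any errors" instead of hallucinating problems.
neutral_prompt = (
    "Review the following text. Either list any errors you find, "
    "or state clearly that you could not verify any errors.\n\n"
    f"{text_to_check}"
)

print(biased_prompt)
print(neutral_prompt)
```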
What I do is ask it both to explain why there are no errors at all and why there are tons of errors. Then I use my natural intelligence to reason about the different claims.
A quote for the times.
May be a bit of a Sisyphean task, though...
https://wikimediafoundation.org/news/2026/01/15/wikipedia-ce...
I would like a similar pre-LLM Wikipedia snapshot. Sometimes I would prefer potentially stale or incomplete info rather than have to wade through slop.
I'm not sure if it's real or not, but the Internet Archive has a listing claiming to be the dump from May 2022: https://archive.org/details/wikipedia_en_all_maxi_2022-05
And I say that as a general Wikipedia fan.
I've made a bunch of nontrivial changes (± thousands of characters), and none of them seem to have been reverted. I never asked for permission; I just went ahead and did it. Maybe the topics I care about are so non-controversial that no one has actually seen them?
If you mean the left-leaning tone / bias, that will be a bit more spicy. But general grammar, tone, ambiguity, superlatives – that’s the goal of copy editing.
I copy edit typesetting, for example.
A major flaw of Wikipedia is that much of it is simply poorly written. Repetition and redundancy, ambiguity, illogical ordering of content, rambling sentences, opaque grammar. That should not be surprising. Writing clear prose is a skill that most people do not have, and Wikipedia articles are generally the fruit of collaboration without copy editors.
AI is perfectly suited to fixing this problem. I recently spent several hours rewriting a somewhat important article. I did not add or subtract information from the article; I simply made it clearer and more concise. I came away convinced that AI could have done as good a job - with supervision, of course - in a fraction of the time. AI-assisted copy-editing is not against Wikipedia rules. Yet as things stand, there are no built-in tools to facilitate it, doubtless because of the ambient suspicion of AI as a technology. We need to take a smarter approach.
I'm confused by this. Is this written by an AI?
> Repetition and redundancy, ambiguity, illogical ordering of content, rambling sentences, opaque grammar.
This pile of words is missing a verb.
"You" (whoever that is, human or not) edited a single article, and that experience convinced you that "AI is perfectly suited to fixing this problem"?
Ironically, the lack of evidence to support such a strong assertion is one of the key problems with AI writing in general.
The idea that you could edit an article extensively without adding or subtracting information is facile. I would love to see this edit.
With a good model, fine-tuning, and supervision, AI can produce stellar content.
AI is at least a thousand tools. It’s a mistake to write it off so trivially.
* No new articles from LLM content (WP:NEWLLM)
* "Most images wholly generated by AI should not be used" (WP:AILLM)
* "it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs" (WP:AITALK)
There doesn’t seem to be an outright ban on LLM content as long as it’s high quality.
Just an amateur summary for those less familiar with Wikipedia policy. I encourage people to open an account, edit some pages and engage in the community. It’s the single most influential piece of media that’s syndicated into billions of views daily, often without attribution.
On PickiPedia (a bluegrass wiki - pickipedia.xyz), we've developed a MediaWiki extension / middleware that works as an MCP server and causes all of the contributions from the AI in question to appear partially grayed out, with a "verify" button. A human can then verify and either confirm the provided source or supply their own.
It started as a fork of a MediaWiki MCP server.
It works pretty nicely.
Of course it's only viable in situations where the operator of the LLM is willing to comply / be transparent about that use. So it doesn't address the bulk of the problem on Wikipedia.
But still might be interesting to some:
That's why they're cataloging specific traits that are common in AI-generated text, and only deleting if it either contains very obvious indicators that could never legitimately appear in a real article ("Absolutely! Here is an article written in the style of Wikipedia:") or violates other policies (like missing or incorrect citations).
I'm embarrassed to be associated with US Millennials who are anti-AI.
No one cares if you tie your legs together and finish a marathon in 12 hours. Just finish it in 3. It's more impressive.
EDIT:
I suppose people missed the first sentence:
>Isn't having a source the only thing that should be required.
>Isn't having a source the only thing that should be required.
>Isn't having a source the only thing that should be required.
And AI can still make things up, which might be fine in some random internet comment, or in some article about something irrelevant happening somewhere in the world, but not in a knowledge vault like Wikipedia.
And we are talking here about Wikipedia. They are not just checking for AI; they are checking everything from everyone and have many, many rules to ensure a certain level of quality. They can't check everything at once and catch every problem immediately, but they are working step by step, over time.
> I'm embarrassed to be associated with US Millennials who are anti-AI.
You should be embarrassed for making such a statement.
No, referencing and discussing it properly whilst retaining the tone and inferred meaning are equally important. I can cite anything I want as a source, but if I use it incorrectly, or my analysis misses the point of the source, then the reference itself is pointless.
It's not inherently bad, but if something was written with AI, the chances that it is low-effort crap are much, much higher than if someone actually spent time and effort on it.