Show HN: Hallucinopedia
151 points
9 hours ago
| 44 comments
| halupedia.com
| HN
driggs
8 hours ago
[-]
This is fantastic. I couldn't find any obvious way to search for a new page, but you can simply bang out any arbitrary URL slug and a new article will be hallucinated fresh, e.g.:

https://halupedia.com/shortest-cave-in-the-world

https://halupedia.com/echolocation-ability-in-spiders

reply
bstrama
8 hours ago
[-]
Exactly, but I'm considering adding a fake search that could find you ANY article, including nonexistent ones
reply
lxgr
6 hours ago
[-]
All articles exist, some just haven't been discovered yet ;)
reply
nlehuen
4 hours ago
[-]
This is excellent, congrats!

FYI I manually created this page and some link markup looks malformed: https://halupedia.com/list-of-uninhabited-countries

reply
nlehuen
4 hours ago
[-]
Looks like a single-quote escaping issue? I suspect the first link is meant to be "Archduke Ferdinand VII's Bureau of Non-Demographic Surveys" and the apostrophe breaks the link.
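For illustration, here's one hypothetical way a bug like that happens and gets fixed on the generation side (the function name and markup shape are my guesses, not the site's actual code): if titles are dropped raw into anchor markup, an apostrophe can terminate the string early, so entity-escape before interpolating.

```javascript
// Hypothetical link builder: entity-escape the title so characters
// like the apostrophe in "Ferdinand VII's" can't break the markup.
function linkHtml(title, slug) {
  const safeTitle = title
    .replace(/&/g, '&amp;')   // must come first, or it double-escapes
    .replace(/</g, '&lt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');  // the culprit in the broken link above
  return `<a href="/${slug}">${safeTitle}</a>`;
}
```

With escaping in place the apostrophe survives as `&#39;` and the anchor stays intact.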
reply
mmooss
7 hours ago
[-]
Yes, that would be the perfect touch. This is brilliant satire. We need more satire!
reply
joeross
1 hour ago
[-]
This is wonderful. I just spat out the first phrase that came to my mind and boom:

https://halupedia.com/liminal-darkbeast

reply
anthonycoslett
1 hour ago
[-]
I'm cackling at some of these - what a perfect way to put down the phone and get lost in a world of weird. We are indeed in a simulation LOL
reply
gerdesj
2 hours ago
[-]
Hit the Stumble link at the top right of all pages - it's as good as a search when the whole thing is made up!
reply
nonrecursive
2 hours ago
[-]
reply
nlehuen
3 hours ago
[-]
reply
layer8
2 hours ago
[-]
The model seems to have an unhealthy obsession with fungi: https://halupedia.com/alan-turing

Which I guess makes some sense for a hallucinopedia.

reply
pivot_root
1 hour ago
[-]
I made an SCP foundation inspired page: https://halupedia.com/hard-to-detroy-reptile

My favorite link generated there is the Institute for Unyielding Biology: https://halupedia.com/institute-for-unyielding-biology

reply
petercooper
9 hours ago
[-]
Give it a week and see what Google AI Overview has to say about the Great Pigeon Census of 1887!
reply
aDyslecticCrow
3 hours ago
[-]
Google is already on it when asked about "The Great Pigeon Census of 1887".

Using 1886 or 1888 makes Google correctly identify that no such census exists.

Asking about 1887 specifically makes Google refer to some supposed great effort to track the passenger pigeon population amid the species' decline.

reply
NordStreamYacht
1 hour ago
[-]
By Featherton, no less.
reply
stavros
4 hours ago
[-]
I made the same thing months ago, so you don't need to wait:

https://encyclopedai.stavros.io

reply
cachius
3 hours ago
[-]
There's another one! https://grokipedia.com/
reply
stavros
3 hours ago
[-]
Ah yes, IIRC I got the idea for mine, to make fun of that one, when I heard the name.
reply
gojomo
4 hours ago
[-]
I searched your site for [Great Pigeon Census of 1887] and was only returned articles about other things.
reply
stavros
3 hours ago
[-]
reply
gojomo
3 hours ago
[-]
As it didn't generate that when I typed the title into your search box, was there a bug now fixed? Or did you use some other path, not evident on the page you linked, to generate it?
reply
stavros
3 hours ago
[-]
There was a bug where scanning took too long with the thousands of articles in there, but I just fixed it.

You can also just type a random URL and visit it, it'll generate an article. That's what I did before I fixed the search issue, and I usually just do that to avoid the search route.

reply
Noumenon72
3 hours ago
[-]
So by "I made the same thing months ago" you didn't mean "an article about the great pigeon census" (your link is created May 6) or "an encyclopedia of hallucinations" like the OP, but just "an encyclopedia with some articles AI wrote". What's the point?
reply
stavros
3 hours ago
[-]
What's the difference between an encyclopedia that produces AI articles on demand and an encyclopedia that produces AI articles on demand?
reply
gojomo
3 hours ago
[-]
If you think that's all the Hallucinopedia is, you're misunderstanding it.

One hint – check out its prompt, and how it makes its articles so different from those of your project: https://news.ycombinator.com/edit?id=48042306

reply
HomeDeLaPot
1 hour ago
[-]
reply
diputsmonro
8 hours ago
[-]
It's pretty fun to poke at! Although it's certainly difficult to be exact, it would be neat if generated pages used the context of the pages they were linked from (ideally, all pages that link to it) to guide the direction of the page. From the ones I generated it seemed they were mostly independent.
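To sketch the suggestion (hypothetical names throughout; this is not the site's actual prompt or code): feed excerpts from the pages that link to a new slug into the generation prompt, so the new article stays coherent with the context it was linked from.

```javascript
// Hypothetical prompt builder: `referrerExcerpts` would be short
// snippets from already-generated pages that link to this slug.
function buildPrompt(slug, referrerExcerpts) {
  let prompt = `Write an encyclopedia article for "${slug}".`;
  if (referrerExcerpts.length > 0) {
    prompt +=
      '\n\nPages linking here say:\n' +
      referrerExcerpts.map((e) => `- ${e}`).join('\n') +
      '\nStay consistent with these descriptions.';
  }
  return prompt;
}
```

The context could come from all linking pages, or just the referrer of the current click, depending on how much prompt budget one wants to spend.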
reply
bstrama
4 hours ago
[-]
Update: Implemented it. All new articles work that way
reply
driggs
42 minutes ago
[-]
That really improved things! Now each rabbithole goes deeper and deeper and deeper...
reply
rjmill
2 hours ago
[-]
Very nice! Independently of this thread, I was delighted to discover the cross references between pages. It makes a big difference.
reply
bstrama
8 hours ago
[-]
Yeah, thought about that, maybe will implement it. Will keep in mind! For now, SSR to feed LLMs is the priority
reply
jagged-chisel
1 hour ago
[-]
It’s been defaced. It’s already got sex crimes and antisemitism all over the place.
reply
wavemode
1 hour ago
[-]
The mistake they made was allowing visitors to trigger the generation of articles via visiting any arbitrary URL.

A more resilient concept would have been, have a few "seed" articles in place, and then only allow for the creation of new articles by clicking a link in an existing article.

reply
rootusrootus
1 hour ago
[-]
Just in the comments, right? That is where I see it. If I were the site owner I would just turn comments off. It was a cute idea when someone on HN suggested it, but without moderation open commenting becomes a cesspool in a hurry.
reply
edaemon
41 minutes ago
[-]
Took me two clicks of the "Stumble" functionality to hit unsavory stuff that someone clearly made on purpose.
reply
whycombinetor
57 minutes ago
[-]
Try clicking "Stumble" a few times...
reply
rootusrootus
32 minutes ago
[-]
Yeah I see that now. Also clicking on the all entries list shows pages of garbage. Just takes a few sucky people to ruin things.
reply
driggs
47 minutes ago
[-]
This is why we can't have nice things.

Looks like someone scripted `curl` in a loop and generated thousands of permutations of hate content.

reply
solarkraft
8 hours ago
[-]
Finally a more trustworthy version of Grokipedia!
reply
bstrama
8 hours ago
[-]
It's hilarious, you made my day hahah
reply
LeoPanthera
8 hours ago
[-]
I honestly forgot that Grokipedia existed. Did anyone ever use it?
reply
bstrama
8 hours ago
[-]
Tried it once, but it was useless. Very funny that it had so much text, while Elon is apparently a "huge" fan of short and precise communication...
reply
mmooss
7 hours ago
[-]
Somebody showed me it appearing near the top of some of their DuckDuckGo queries.
reply
bstrama
7 hours ago
[-]
UPDATE: Just now, comment section added. Have a nice time arguing!
reply
dlcarrier
7 hours ago
[-]
You are a wonderful person.

You not only made this excellent source of entertainment, you also helped everyone find their unmatched socks, ensuring that "no individual would ever be forced to wear a mismatched pair". (Source: https://halupedia.com/humanitarian-accomplishments-of-the-on...)

reply
lxgr
6 hours ago
[-]
We should really host another one though; I think I've since lost a few more.
reply
segh
3 hours ago
[-]
I'm curious, what is the LLM cost of the website?
reply
drob518
2 hours ago
[-]
I’m curious, too. But it could probably run locally with a small model, right? The performance is stellar, so that suggests some hardware acceleration is being used, but that could all be a local system.
reply
lxgr
8 hours ago
[-]
Ironically, this seems much faster (for pages already, erm, "researched") than the real one! How?
reply
bstrama
8 hours ago
[-]
It generates articles only once, so once an article is generated, it never perishes. The logic looks like: if the article exists -> show it; if not -> generate and save it.
reply
lxgr
8 hours ago
[-]
I get that, but how does it serve the generated and cached ones seemingly faster than Wikipedia? (My guess is that single-page applications, which this one seems to be, just need fewer round trips between navigations or something?)
reply
bstrama
4 hours ago
[-]
Also, now that I think about it, we store articles in a decentralized Cloudflare KV store and access them from serverless workers also running on their servers.

That could be the thing behind it being so quick.

Cloudflare workers have 1ms cold start.
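Roughly, the serve-from-KV path would look like this (the binding name `ARTICLES` and the routing are my assumptions, not the actual code; KV reads are cached at Cloudflare's edge, which would explain the speed):

```javascript
// Extract the article slug from the request URL; the root path
// falls back to an assumed 'main-page' entry.
function slugFromUrl(url) {
  return new URL(url).pathname.replace(/^\/+/, '') || 'main-page';
}

// Sketch of a module Worker handler; in a real Worker this object
// would be `export default`-ed, and `env.ARTICLES` would be a KV
// namespace binding configured in wrangler.toml.
const worker = {
  async fetch(request, env) {
    const slug = slugFromUrl(request.url);
    const cached = await env.ARTICLES.get(slug); // edge KV read
    if (cached !== null) {
      return new Response(cached, {
        headers: { 'content-type': 'text/html' },
      });
    }
    // The real site would trigger generation here instead of 404ing.
    return new Response('Not yet hallucinated', { status: 404 });
  },
};
```

Since both the worker and the KV replica sit in the same edge location, a cached article never leaves Cloudflare's network before reaching the visitor.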

reply
lxgr
4 hours ago
[-]
Nice job, this is seriously one of the fastest websites I've ever used!

I feel like I have some minimum latency "priced in" to my expectation when I click a link on a static site, so yours feels uncannily like it's somehow able to anticipate my clicks, adding to the surreal atmosphere.

reply
bstrama
8 hours ago
[-]
Yep, just React. Also we use Gemini 2.5 Flash Lite, so it's fast, cheap and dumb.
reply
lxgr
7 hours ago
[-]
Nice, that's what I used for my LLM-backed HTTP server [1] a while ago as well :) It's a shame they got rid of the generous free quota a while ago, which is why I had to shut my public instance down.

[1] https://github.com/lxgr/vibeserver/

reply
JSR_FDED
1 hour ago
[-]
Absolutely perfect. Monty Python on demand.
reply
soupspaces
2 hours ago
[-]
reply
n00bskoolbus
4 hours ago
[-]
One suggestion for improvement: avoid creating self-referential links. For example, https://halupedia.com/chaldic-arithmetic has many reference links to itself.
reply
JohnMakin
8 hours ago
[-]
Funny, but you could argue this is actively harmful to the web.
reply
SwellJoe
4 hours ago
[-]
I wouldn't. And, I'd think less of anyone who does make that argument.

Anyone of reasonable intelligence can easily tell this is a parody of an encyclopedia. Saying this is bad for the web is like saying The Onion is bad for the web.

reply
Eisenstein
4 hours ago
[-]
What would you think of a person who said that they are already convinced that an opposing view could not be correct without even hearing the arguments for it?
reply
janalsncm
2 hours ago
[-]
For the record,

> Funny, but you could argue this is actively harmful to the web.

Was not followed by an actual argument that it is harmful to the web. The comment was an assertion, not an argument.

So we are left in the inconvenient position of rejecting hypothetical arguments, and others defending the philosophical possibility that a valid argument does exist.

reply
Eisenstein
1 hour ago
[-]
Without the argument being explicit then there can be no retort to it, so closing your mind before hearing it demonstrates that the argument itself is irrelevant. One could thus conclude that the existence of a valid argument is not itself a condition for my question.
reply
janalsncm
1 hour ago
[-]
We also shouldn’t close our minds to the possibility of an eigen-retort, one which covers all possible arguments already made or argued in the future regarding the consequences of this website on the health of the Internet.

Someone who is aware of the eigen-retort would therefore not need to hear the argument.

Since I haven’t heard either the hypothetical argument or the hypothetical eigen-retort yet, I’ll withhold my judgement.

reply
Eisenstein
39 minutes ago
[-]
I concede that my question was loaded, but the assumptions behind it are grounded in practical experience. Regardless, I have not committed myself to the existence of an argument either; I just stated that its existence was not a condition for the validity of my question to SwellJoe. The statement which was made can mean a number of possible things, but we cannot know which unless the question is answered. So the existence of the retort is revealed by the question, and until that reveal we are limited to questions or assumptions.
reply
SwellJoe
56 minutes ago
[-]
I'm reasonably confident there is no argument that I would buy.

I hate AI slop more than average, but this is not slop being injected into human places. This is a dedicated dumping ground for slop, paid for by the owner/instigator of said slop. I don't have to go there, and it's not trying to fool anyone and no one will be fooled by it.

AI slop on a forum or social media or on facebook convincing boomers that a black person slapped a cop or whatever racist garbage they're being fed today? Fetch the guillotine.

AI slop as part of a dumb art project on somebody's personal website that isn't trying to manipulate or mislead? Have at it. Go nuts. It's your press, print as many pages of slop as you like.

So, I have exhaustively covered the possible arguments I can come up with for why this could be "actively harmful for the web", and rejected them outright.

reply
Eisenstein
34 minutes ago
[-]
That clarifies things much better than the original statement, but rejecting arguments you have conceived of which fail does not preclude the existence of those that do not, and thus the original question still remains.
reply
anonymousiam
7 hours ago
[-]
It's probably only harmful to the AI scrapers that train from the web. Most people will understand the purpose of this -- to poison LLM training in a humorous way, which is really easy to do. It exemplifies a major weakness in modern day AI.
reply
gojomo
4 hours ago
[-]
This is unlikely to poison any LLMs, and unless the author says so, it is unlikely that their motivation is to poison LLMs, as opposed to providing whimsical entertainment.
reply
bstrama
4 hours ago
[-]
I was just drunk and the idea seemed funny. That's the whole story behind it haha.

But either way, can't wait to see Google AI Overview cite us.

reply
dylan604
3 hours ago
[-]
reply
gojomo
3 hours ago
[-]
Musing about a possibly-funny consequence isn't the same as the motivating reason, which I read as more whimsical from:

https://news.ycombinator.com/item?id=48042594

In particular, someone who was seeking training-set pollution likely wouldn't make the fanciful fabrications so blatant, nor open-source their prompt:

https://news.ycombinator.com/item?id=48038257

reply
dayofthedaleks
8 hours ago
[-]
You could also argue that the web has failed and poisoning it into irrelevance is a vital service, motivating humans to collect knowledge into immutable sources. We‘ll call them ‘libraries.’
reply
r3trohack3r
8 hours ago
[-]
Interesting, but you could argue comments like this are actively harmful to the web.
reply
AlecSchueler
8 hours ago
[-]
But the argument wouldn't be nearly as strong.
reply
dymk
4 hours ago
[-]
Hard to say when nobody is actually offering arguments
reply
isoprophlex
8 hours ago
[-]
The sooner the current web dies, the better. Something better either rises from its ashes, or we lose... something that was already lost.
reply
b00ty4breakfast
8 hours ago
[-]
or something way worse shows up.
reply
JohnMakin
8 hours ago
[-]
Yea, I'm not sure how the "this is really bad so let's make it worse" argument really makes any sense
reply
dylan604
3 hours ago
[-]
When the something worse arrives, the previous thing suddenly becomes much less bad. With "remember when" nostalgia wrapping your memories and making them more palatable, the something worse makes the previous seem better, if not good.
reply
znort_
7 hours ago
[-]
context. sometimes things simply have to be broken to give way for something better. ymmv.
reply
b00ty4breakfast
6 hours ago
[-]
I think there's an unexamined assumption here that "the next thing" is always going to be an improvement, but there is no non-ideological reason to hold to this assumption. Ideally, we would be actively working towards making it so, but what often happens is passively riding the current and calling it "progress".
reply
znort_
1 hour ago
[-]
>unexamined assumption here that "the next thing" is always going to be an improvement but there is no, non-ideological reason to hold to this assumption

i'm not making that assumption at all, so whatever.

context: revolutions? if slop is a problem but is barely enough of a problem to collectively do something about it maybe letting it get out of hand would be a good motivation.

i'm not advocating for this, just providing it as a possible context where the "this is really bad so let's make it worse" argument could "make sense".

progress isn't just a technical issue, it involves people and people need motivation.

reply
lxgr
8 hours ago
[-]
On the other hand, one could argue that anything that can be destroyed by relatively clearly labeled satire, deserves to be.
reply
gojomo
4 hours ago
[-]
A web that is vulnerable to this would already be as good as dead.

As an entertaining way to highlight the importance of upgrading our ways of knowing, playful (& open-source!) projects like this are likely to strengthen the web.

reply
wildzzz
6 hours ago
[-]
Any training data scraper that blindly takes stuff from websites deserves to have their model poisoned by this nonsense.
reply
stronglikedan
8 hours ago
[-]
> you could argue

Could you? I don't see it happening, but I could be wrong.

reply
janalsncm
2 hours ago
[-]
You could, in the sense that it’s not illegal or impossible. I haven’t seen anyone attempt it though.

You could argue that a person could argue any point, but I’d prefer people make the argument rather than argue about arguing it.

reply
parliament32
8 hours ago
[-]
To the web? It's fantastic for the web, these are the kinds of fun projects that make the web a worthwhile place to be. To slop generators? Yes, absolutely harmful, and that's for the best.
reply
slig
8 hours ago
[-]
Grokipedia is already doing that.
reply
Jtarii
8 hours ago
[-]
Pissing on a pile of shit
reply
pluc
57 minutes ago
[-]
Why isn't this .gov
reply
driggs
7 hours ago
[-]
This site is going to be expensive when a web crawler hits it. A honey pot that burns tokens.
reply
janalsncm
2 hours ago
[-]
They’re caching the pages which have already been generated. You could go back and delete all references to pages which don’t exist yet. Basically turn it into a static website.
reply
driggs
1 hour ago
[-]
It seems like the site's algorithm is that every newly-generated page includes multiple links to not-yet-existing pages. So it doesn't matter that existing pages are cached; all the "leaf node" pages link to multiple uncached new pages.
reply
janalsncm
1 hour ago
[-]
I’m suggesting to turn that off and prune the links to pages which weren’t generated yet if cost becomes an issue.
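A rough sketch of that pruning idea (the markup shape and regex are assumed; a real implementation would use an HTML parser rather than a regex): unwrap any internal link whose target article hasn't been generated yet, leaving just the text, so crawlers can't trigger new generations.

```javascript
// Unwrap internal links to not-yet-generated pages, keeping only
// the link text. `existingSlugs` is the set of cached article slugs.
function pruneDanglingLinks(html, existingSlugs) {
  return html.replace(
    /<a href="\/([^"]+)">([^<]*)<\/a>/g,
    (match, slug, text) => (existingSlugs.has(slug) ? match : text)
  );
}
```

Run once over all cached pages and the site effectively becomes static: every remaining link resolves to an already-generated article.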
reply
jakub_g
4 hours ago
[-]
Reminded me of this old, pre-LLM git docs generator:

https://git-man-page-generator.lokaltog.net/

reply
anthk
4 hours ago
[-]
Plan 9/9front's bullshit(1) tool works kinda like these but without requiring a $6k machine.
reply
cachius
3 hours ago
[-]
reply
jdpage
3 hours ago
[-]
Reminds me of a (perhaps) more fanciful risk of fictional encyclopaedias: https://sites.evergreen.edu/politicalshakespeares/wp-content...
reply
bstrama
3 hours ago
[-]
Actually an interesting response. You can also check out the GitHub.
reply
drob518
2 hours ago
[-]
I love it. What’s the rough architecture of the system (using cloud LLM and paying $$$, or local)? The performance for new entries is really good. What is the prompt for each entry and how do you keep the steampunk vibe going?
reply
nlehuen
3 hours ago
[-]
reply
bstrama
8 hours ago
[-]
Can't wait to see the next generation of LLMs after feeding it all of that hahaha
reply
everyos_
8 hours ago
[-]
The page requires JS to load its content - user agents without JS support just get a blank page.

I'm not sure if the bots that scrape data to train LLMs are capable of loading that type of page, or if they only work on pages that have the content inside the HTML itself?

reply
aDyslecticCrow
3 hours ago
[-]
Not using JavaScript would also make the crawler fail on Squarespace and Wix website builders.

The age where the web was usable at all without JavaScript is long gone. No scraper would get much scraping done without JavaScript these days.

reply
replygirl
8 hours ago
[-]
any serious scraping service these days will fail over to a headless browser when it fetches an asset referencing a js bundle that isn't verifiably a vendor script
reply
bstrama
8 hours ago
[-]
I'm aware and will implement SSR soon ;)
reply
m3047
8 hours ago
[-]
It's entirely possible they simply ingest the JS as-is.
reply
nickvec
8 hours ago
[-]
Seeing “Something broke, which is ironic for a made-up encyclopedia: Load failed” when trying to access some of the suggested starting points
reply
bstrama
8 hours ago
[-]
Works on my PC.

Could you gimme the url that's failing?

reply
nickvec
2 hours ago
[-]
It’s working now, not sure what was going on earlier.
reply
winocm
3 hours ago
[-]
reply
janwillemb
8 hours ago
[-]
It's nice, but after a few clicks my LLM content fatigue kicks in.
reply
berellevy
1 hour ago
[-]
Lots of antisemitism on there. Search “Jews”
reply
RIMR
1 hour ago
[-]
The All Entries (https://halupedia.com/all-entries) part of the site is a bit alarming. I think OP might want to do a little bit of basic automoderation here.
reply
rootusrootus
55 minutes ago
[-]
In today's world it does not take long to be reminded that we cannot have nice things. Or maybe the gov't has their own bot army to wreak havoc and convince voters that actually, we really do want privacy-ending ID verification laws after all.
reply
rootusrootus
4 hours ago
[-]
I wonder how long it will be before Canis dementialis becomes a standalone meme.
reply
throw310822
7 hours ago
[-]
Funny. Small improvement suggestion: the entry about "Glorbonian culinary arts" links to "the subterranean nation of Glorbonia". However upon clicking the link to "Glorbonia", an entry is generated claiming that "Glorbonia refers to a peculiar and largely uncatalogued form of sub-auditory resonance". It would be cool if some context were carried over from the referrer page so that there is some coherence between entries (ah, and some existing entries could be taken in account when generating new ones).
reply
notahacker
4 hours ago
[-]
Feels like this will eventually cause collisions, although perhaps nothing that multiple definitions of Glorbonia and multiple biographies of different Mrs Wiggles (perhaps with Wikipedia-style disambiguation) can't solve
reply
throw310822
4 hours ago
[-]
Btw, I've noticed just now that Glorbonia is, in the first entry, a "subterranean nation" and in the second it's a "sub-auditory resonance". So I got curious and I asked Opus what he thinks about the word Glorbonia: "Do you detect in the word a sense of place? North, south, east, west, up, down?". And Opus answers "Down, weirdly. Or maybe low — something subterranean, or at least sunken." Curious.
reply
arduanika
8 hours ago
[-]
Love it! It feels very Borges!

Feature request: also be able to click on the Talk page to see the controversies. I don't always want to trust the article itself as the final word.

Edit: Oh look, there's an article about the YC! https://halupedia.com/y-combinator

reply
bstrama
7 hours ago
[-]
Just added comment section :)
reply
rootusrootus
59 minutes ago
[-]
Which now has ascii penises and other art and ... colorful commentary.
reply
arduanika
1 hour ago
[-]
Cool!

I'm curious about the design. Maybe you have a "how I did it" post coming soon, or something. One question: did you find a way to get some convergence, where a newly generated page will tend to cite pages (or stubs, at least) that already exist in the universe? Seems hard to do with generated text, but not impossible.

reply
bstrama
8 hours ago
[-]
Great suggestion! Will immediately look into that!
reply
mmooss
7 hours ago
[-]
> Edit: Oh look, there's an article about the YC! https://halupedia.com/y-combinator

This should be on YC's About page.

reply
notahacker
4 hours ago
[-]
> Y Combinator might be responsible for the spontaneous generation of minor deities in areas experiencing extreme metaphysical gravity.

This particular piece of slop is a serendipitously brilliant description of the cult of founder worship in the metaphysical gravity of Silicon Valley.

reply
anthk
4 hours ago
[-]
This kind of Absurdist humour reminds me of the Marx Brothers or the Tip y Coll Spaniards.

And the Sokal case with the Humanities branches, for sure.

BTW: https://halupedia.com/postmodernism

This is golden.

https://halupedia.com/paradox

Best entry, hands down. This is a love letter to Pratchett.

reply
arduanika
1 hour ago
[-]
It also feels a bit like Sam Kriss, if you know him.

Some of his writing: https://samkriss.substack.com/p/five-prophets

His biography is quite interesting: https://halupedia.com/sam-kriss

reply
meghneelgore
8 hours ago
[-]
Great idea! I created an adjacent website that gives, shall we say, "alternative facts" about your questions. (don't know if the rules allow me to link the site so I won't).
reply
busymom0
7 hours ago
[-]
Now I want to know the site.
reply
meghneelgore
3 hours ago
[-]
https://amtaitfy.com Still don't know if it's allowed, but taking a chance here.
reply
anthk
4 hours ago
[-]
https://halupedia.com/computer

This is perfect. Very Neal Stephensony.

Also, this, but with no AI: https://ifdb.org/viewgame?id=032krqe6bjn5au78

Just incredible prose and writing (and gameplay), with something you can run with Frotz/NFrotz/LectRote or any ZMachine interpreter (or Glulxe like Gargoyle). A Pentium would run this and marvel you in a similar way.

No need to waste tons of water in datacenters.

reply
sofayam
7 hours ago
[-]
Currently breaks if you try to create a page with a Japanese slug. Multiple languages would make this an even more valuable resource than it already is.
reply
pinkmuffinere
4 hours ago
[-]
I find the handling of NSFW topics (and how it avoids making them nsfw) really interesting. Eg https://halupedia.com/fuck (aside from the title it seems SFW to me)
reply
bstrama
4 hours ago
[-]
Best part - I didn't implement such logic. It just for some reason works that way.
reply
pinkmuffinere
4 hours ago
[-]
Huh that is interesting, I was expecting it to show some sort of error on generation, or something like that
reply
gavmor
7 hours ago
[-]
Hm, the page generated seems inconsistent with the usage of the original link.
reply
anthk
4 hours ago
[-]
This is what every LLM will converge into without curated human input.
reply
JLemay
2 hours ago
[-]
this is excellent haha
reply
dmje
8 hours ago
[-]
I LOVE IT. Superb.
reply
mmooss
4 hours ago
[-]
As I said in another comment, this is brilliant. Suggestion: Remove anything that isn't part of the satire; act always as if it's a 'real' encyclopedia. For example on the front page I would remove,

> Articles are generated on demand and stored permanently upon first request.

Don't dispel the magic; don't pull back the curtain and let people see the mechanics.

EDIT: As you say in your system prompt, "You never wink at the reader. You never acknowledge that anything is funny or fictional. Everything is reported as though it is completely normal and well-documented"

https://news.ycombinator.com/item?id=48042306

reply
Noumenon72
3 hours ago
[-]
This is irresponsible for people who don't get it, takes away confirmation for people who do get it, and makes me block/blacklist any liar who does it.
reply
mmooss
3 hours ago
[-]
It is indeed a problem for people who refuse to use their sense of humor.
reply
FergusArgyll
8 hours ago
[-]
Who says llms can't be funny?!
reply
jijilao
7 hours ago
[-]
wtf, I thought these were just anecdotes until I saw they were actually happening in Astoria. I used to visit in the summers and never heard about any of that! Stop the fake news
reply
tukunjil
7 hours ago
[-]
The whole world is going mad with artificial intelligence and LLMs. Just disgusting!
reply
Falimonda
1 hour ago
[-]
reply
jagged-chisel
1 hour ago
[-]
Allow me.

You can name an article anything you want, and the thing will generate content, though not necessarily relevant to the title you chose.

So some vandal comes along and supplies a hateful title, et voila.

reply
Falimonda
5 minutes ago
[-]
Well then this seems like the dumbest site ever...
reply
jordanpg
4 hours ago
[-]
reply
adolfhitler0
3 hours ago
[-]
hello
reply
ivanvoid
2 hours ago
[-]
kinda cool but kinda lame, no overall consistency over articles
reply