At what level of deep context engineering does AI output become human-crafted?
I’ve been wrestling with a philosophical and ethical question regarding authorship in the age of LLMs, and I’m curious where the HN community draws the line.

Suppose you spend months deeply researching a niche topic. You make your own discoveries, structure your own insights, and feed all of this tightly curated, highly specific context into an LLM. You essentially build a custom knowledge base and prime the model with your exact mental framework.

When you finally want to write a post or comment sharing your findings, you outline your specific thoughts and use that meticulously primed LLM to structure and generate the final prose.
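To make the workflow concrete, here is a minimal sketch of what I mean by "priming" (Python; llm_complete is a stand-in for whatever provider API you use, and the notes directory layout is hypothetical):

    from pathlib import Path

    def llm_complete(prompt: str) -> str:
        # Placeholder: substitute your provider's completion call here.
        raise NotImplementedError

    def build_context(notes_dir: str) -> str:
        # Concatenate months of hand-curated research notes into one block.
        notes = sorted(Path(notes_dir).glob("*.md"))
        return "\n\n".join(p.read_text() for p in notes)

    def draft_post(notes_dir: str, outline: str) -> str:
        # The model renders *my* outline in prose, grounded in *my* notes.
        prompt = (
            "Draft a post strictly from the notes below; do not add "
            "claims that are not in them.\n\n"
            f"--- NOTES ---\n{build_context(notes_dir)}\n\n"
            f"--- OUTLINE ---\n{outline}\n\n"
            "Follow the outline exactly."
        )
        return llm_complete(prompt)

Everything semantic lives in notes_dir and the outline; the model supplies only the surface syntax.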

My questions for you:

1. Is it unethical to post this without an "AI-generated" disclaimer?

2. Whose knowledge is actually being showcased? The LLM is generating the syntax, but the semantics, the insights, and the deep context are 100% human-sourced.

3. Is this fundamentally different from using a ghostwriter, an editor, or a highly advanced compiler? If I am doing the heavy lifting of context engineering and knowledge discovery, it feels restrictive to say I shouldn't use an LLM to structure the final output. Yet the internet still largely views any AI-generated text as inherently "un-human" or low-effort.

4. Where does human insight end and AI generation begin? If the core ideas are yours, is the medium of the text really the message?
