AI's real superpower: consuming, not creating
179 points
12 hours ago
| 40 comments
| msanroman.io
| HN
wyre
2 hours ago
[-]
I think its ability to consume information is one of the scarier aspects of AI. The NSA, other governments, and multinational corporations have years of our individual browsing and consumption patterns. What happens when AI is analyzing all of that information exponentially faster than any human could, and communicating with relevant parties for their own benefit, to predict or manipulate behavior, build psychological profiles, identify vulnerabilities, etc.?

It's incredibly amusing to me to read some people's comments here critical of AI that, if you didn't know any better, might make you think that AI is a worthless technology.

reply
concinds
45 minutes ago
[-]
For decades people tried to correlate gait to personality and behavior. Then, DNA, with IQ and all sorts of things. Now they're trying it with barely-noticeable facial features, again with personality traits. But the research is still crap bordering on woo, and barely predictive at all.

It's at least plausible that we are sufficiently complex that, even with tons of NSA and corporate data and extremely sophisticated models, you still wouldn't be able to predict someone's behavior with much accuracy.

reply
nowittyusername
8 minutes ago
[-]
There doesn't need to be a real correlation between some data and its effects for people to implement some sort of feature. There only needs to be enough stupid people in powerful positions who believe in some correlational trend, AND the data-gathering task has to be cheap enough for them to implement it. And there's no shortage of that going around. That's why these technologies are dangerous: stupid people with powerful, cheap tools to wield. Kind of like what we saw with the first wave of Facebook algorithms being used against its users to maximize attention to the detriment of everything else.
reply
macNchz
2 hours ago
[-]
All hype and thought experiments about superintelligence and open questions about creativity and learning and IP aside, this is the area that gives me the biggest pause.

We've effectively created a panopticon in recent years—there are cameras absolutely everywhere. Despite that, though, the effort to actually do something with all of those feeds has provided a sort of natural barrier to overreach: it'd be effectively impossible to have people constantly watching all of the millions of camera feeds available in a modern city and flagging things, but AI certainly could.

Right now the compute for that is a barrier, but it would surprise me if we don't see cameras (which currently offer a variety of fairly basic computer vision "AI" alerting features for motion and object detection) coming with free-text prompts to trigger alerts. "Alert me if you see a red Nissan drive past the house.", "Alert me if you see a neighbor letting his dog poop in my yard.", "Alert the police if you see crime taking place [default on, opt out required]."
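A sketch of what that alert loop might look like; everything here is hypothetical (ask_vision_model, get_next_frame, and send_alert are stand-ins for whatever API a camera vendor would actually ship):

    # Hypothetical sketch of a free-text camera alert loop; these functions
    # are stand-ins, not any real vendor API.
    ALERT_PROMPTS = [
        "Is a red Nissan driving past the house? Answer only YES or NO.",
        "Is a dog defecating in the yard? Answer only YES or NO.",
    ]

    def watch(camera):
        while True:
            frame = get_next_frame(camera)  # e.g. sample one frame per second
            for prompt in ALERT_PROMPTS:
                answer = ask_vision_model(frame, prompt)
                if answer.strip().upper().startswith("YES"):
                    send_alert(prompt, frame)  # push notification, saved clip, etc.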

reply
nostrademons
1 hour ago
[-]
The prompt becomes the bottleneck, along with the precision of the AI. You can only tell it to do what you know how to express. That makes it useless for preventing new and different types of crimes (or dissidents) but fairly effective for preventing the known types of crimes (or dissent) at scale.
reply
macNchz
20 minutes ago
[-]
I think it's a significantly lower barrier than employing people to watch feeds non-stop or review every instance of motion or person detection. It would be pretty straightforward for the camera maker to test and evaluate a handful of presets that ship with the cameras, and the current state of vision models is already pretty excellent at identifying things in a nuanced and flexible way in images and video.
reply
robotresearcher
1 hour ago
[-]
You can ask it to report anything that looks different recently. That'll catch some new things without needing to understand them in advance.
reply
nostrademons
28 minutes ago
[-]
You'll get inundated with outliers then. Everything looks different if you don't specify the baseline and tolerance.
reply
Liquix
2 hours ago
[-]
> What happens when AI is analyzing all of that information...

They run simulations against N million personality models, accurately predicting the outcome of any news story/event/stimulus. They use this power to shape national and global events to their own ends. This is what privacy and digital sovereignty advocates have been warning the public about for over a decade, to no avail.

reply
seg_lol
1 hour ago
[-]
This is much worse than overt authoritarian control, because the controlled aren't even aware of it.
reply
wordpad
33 minutes ago
[-]
I don't know how viable it is. Even for AI, there are just too many intermingled variables when it comes to human behavior.

All the money in the world has been invested into trying to do it with stock markets, and they still can't do better than average.

reply
jonahrd
1 hour ago
[-]
this became extremely apparent to me watching Adam Curtis's "Russia 1985-1999: TraumaZone" series. The series documents what it was like to live in the USSR during the fall of communism and (cheekily added) democracy. It was released in Oct 2022, meaning it was written and edited just before the AI curve really hit hard.

But so much of the takeaway is that it was "impossible" for a top-down government to actually process all of what was happening within the system it created, and to respond appropriately and in a timely way, thus creating problems like food shortages, corrupt industries, etc. So many of the problems were traced to the monolithic information-processing buildings owned by the state.

But honestly.. with modern LLMs all the way up the chain? I could envision a system like this working much more smoothly (while still being incredibly invasive and eroding most people's fundamental rights). And without massive food and labour shortages, where would the energy for change come from?

reply
delaminator
1 hour ago
[-]
What you're describing is called The Fourth Industrial Revolution in Klaus Schwab's book.

Factory machines transmitting their current rate of production all the way up to an international government which, being all-knowing, can help you regulate your production based on current and forecast worldwide consumption.

And your machines being flexible enough to reconfigure to produce something else.

Stores doing the same on their sales and Central Bank Digital Currency tying it all together.

reply
wongarsu
1 hour ago
[-]
A planned economy is certainly a lot more viable now than it was in 1950, let alone 1920. The Soviet Union was in many ways just a century too early.

But a major failing of the Soviet economic system was that there simply wasn't good data to make decisions with, because at every layer people had the means and incentive to make their data look better than it really was. If you just add AI and modern technology to the system they had, it still wouldn't work, because wrong data leads to wrong conclusions. The real game changer would be industrial IoT, comprehensive tracking with QR codes, etc. And even then you'd have to do a lot of work to make sure factories don't mislabel their goods.

reply
hylaride
1 hour ago
[-]
> A planned economy is certainly a lot more viable now than it was in 1950, let alone 1920. The Soviet Union was in many ways just a century too early.

If the economy were otherwise stagnant, maybe. But top-down planning just cannot take into account all the multitudes of inputs needed to plan at anywhere near the scale that communist countries attempted. Bureaucrats are never going to be incentivized anywhere near the level that private decision-makers can be. Businesses (within a legal/regulatory framework) can "just do" things if they make economic sense via a relatively simple price signal. A top-down planner can never fully take that into account, and governments should only intervene in specific national-interest situations (e.g., in a shortage, legally directing an important precursor ingredient to medicine manufacturers instead of other uses).

The Soviet Union decided that defence was priority number one and shoved an enormous amount of national resources into it. In the west, the US government encouraged development that also spilled over into the civilian sector and vice-versa.

> But a major failing of the Soviet economic system was that there simply wasn't good data to make decisions, because at every layer people had the means and incentive to make their data look better than it really was.

It wasn't just data that was the problem, but also quality control, having to plan far, far ahead due to bureaucracy in the supply chain, not being able to get spare parts because wear and tear wasn't properly planned for, etc. There's an old saying even in private business that if you measure people on a metric, they'll game it or over-concentrate on it. The USSR often pumped out large numbers of various widgets, but quality would often be poor (the stories of submarine and nuclear power plant manufacturers having to repeatedly deal with and replace bad inputs were a massive source of waste).

reply
seg_lol
1 hour ago
[-]
The comedian Jimmy Carr (https://www.youtube.com/watch?v=jaYOskvlq18) thinks that AI's ability to be a surveillance savant is one of the biggest risks that people aren't thinking enough about.
reply
alexgotoi
4 hours ago
[-]
Models are mediocre solo consumers: they skim, paraphrase and confidently miss the one subtle thing that actually matters. Humans are still better at deciding which three paragraphs in a 40‑page spec are load‑bearing. But as soon as you treat the model as a stochastic code monkey with a compiler, test suite, linter and some static tooling strapped to its back, it suddenly looks a lot more like “creation with a very fast feedback loop” than “consumption at scale”.

The interesting leverage isn’t that AI can read more stuff than you; it’s that you can cheaply instrument your system (tests, properties, contracts, little spec fragments) and then let the model grind through iterations until something passes all of that. That just shifts the hard work back where it’s always been: choosing what to assert about the world. The tokens and the code are the easy part now.
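To make "grind through iterations" concrete, here's a minimal sketch, assuming a hypothetical llm() completion call and pytest as the oracle:

    import subprocess

    # Sketch: the tests are the real spec; the model just iterates until
    # they pass. llm() is a hypothetical completion function.
    def grind(task: str, max_iters: int = 10) -> bool:
        feedback = ""
        for _ in range(max_iters):
            code = llm(f"Task: {task}\nLast test output:\n{feedback}\n"
                       "Return the full contents of solution.py.")
            with open("solution.py", "w") as f:
                f.write(code)
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # everything we chose to assert now holds
            feedback = result.stdout + result.stderr  # feed failures back in
        return False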

This might make it into this week's https://hackernewsai.com/ newsletter.

reply
_DeadFred_
3 hours ago
[-]
Don't forget guardrails and other tweaks you don't know are being applied. I was exploring energy usage, and when I reached solar energy the AI somehow decided it was political and switched to useless mode, until I explicitly told it to look back at the conversation and its context, and that I wasn't trying to get it to say solar was good and it was OK if the numbers made solar look good. It was really weird.
reply
mistrial9
2 hours ago
[-]
.. except you are completely wrong in a certain way.. AFAIK Google has been reading and indexing patent applications and maybe SEC filings since before word2vec. Certain niches absolutely are reading documents faster than your attorneys can...
reply
bdbdbdb
8 hours ago
[-]
> No human could read all of this in a lifetime. AI consumes it in seconds.

And therefore it's impossible to test the accuracy if it's consuming your own data. AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data.

In the authors example

> "What patterns emerge from my last 50 one-on-ones?" AI found that performance issues always preceded tool complaints by 2-3 weeks. I'd never connected those dots.

Maybe that's a pattern from 50 one-on-ones. Or maybe it's only in the first two and the last one.

I'd be wary of using AI to summarize like this and expecting accurate insights

reply
gchamonlive
8 hours ago
[-]
> it's been proven that it doesn't summarize, but rather abridges and abbreviates data

Do you have more resources on that? I'd love to read about the methodology.

> And therefore it's impossible to test the accuracy if it's consuming your own data.

Isn't that only the case if it's hard to verify the result? If it's a result that's hard to produce but easy to verify, a class into which many problems fall, you'd just need to look at the synthesized results.

If you ask it "given these arbitrary metrics, what is the best business plan for my company?", it'd be really hard to verify the result. It'd be hard to verify the result from anyone for that matter, even specialists.

So I think it's less about expecting the LLM to do autonomous work and more about using LLMs to more efficiently help you search the latent space for interesting correlations, so that you and not the LLM come up with the insights.

reply
missedthecue
1 hour ago
[-]
"AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data."

Have you ever met a human? I think one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.

reply
bigstrat2003
1 hour ago
[-]
Right now AI is inferior, not superior, to human effort. That's precisely why people are bearish on it.
reply
missedthecue
59 minutes ago
[-]
I don't think that's obvious. In 20 minutes, for example, deep research can write a report on a given topic much better than an analyst can produce in a day or two. It's literally cheaper, better, and faster than human effort.
reply
jrflowers
18 minutes ago
[-]
What do you mean by "better" in this context?
reply
novok
1 hour ago
[-]
AI is a new kind of bulk tool; you need to know how to use it well, and context management is a huge part of that. For that 1-1 example, you would loop over the notes with a fresh context each time, using subagents or a literal for loop, to prevent the 'first two and last one' issue. Then, with those 1-1 summaries, make the determination.
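A sketch of the literal-for-loop version, where llm() is a hypothetical stateless completion call so each note gets a fresh context:

    # Sketch: summarize each 1-1 note in isolation, then aggregate the short
    # summaries, so no note is buried in the middle of one huge context.
    def find_patterns(notes: list[str]) -> str:
        summaries = [llm("Summarize this 1-1 note in 3 bullets, flagging any "
                         f"tool complaints or performance issues:\n{note}")
                     for note in notes]
        return llm("What patterns recur across these summaries?\n\n"
                   + "\n---\n".join(summaries))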

Humanity has gotten amazing results from unreliable stochastic processes, managing humans in organizations is an example of that. It's ok if something new is not completely deterministic to still be incredibly useful.

reply
kenjackson
8 hours ago
[-]
Similar to P/NP, verification can often be faster than solving. For example, you can then ask the AI to give you the list of tool complaints and the performance issues. Then a text search can easily validate the claim.
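A sketch of that check, assuming you've asked the model for verbatim excerpts backing its claim:

    # Sketch: trust-but-verify. Any "quote" the model invented won't appear
    # verbatim in the source notes, so plain substring search catches it.
    def find_unverified(claimed_excerpts: list[str], notes: list[str]) -> list[str]:
        corpus = "\n".join(notes).lower()
        return [q for q in claimed_excerpts if q.lower() not in corpus]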
reply
TimByte
3 hours ago
[-]
I think as long as you keep a skeptical loop and force the model to cite or surface raw notes, it can still be useful without being blindly trusted
reply
potsandpans
4 hours ago
[-]
> ...and it's been proven that it doesn't summarize, but rather abridges and abbreviates data.

I don't really know what this means, or if the distinction is meaningful for the majority of cases.

reply
xtiansimon
7 hours ago
[-]
> “I'd be wary of using AI to summarize like this and expecting accurate insights.”

Sure, but when do you have accurate results when using an iterative process? It can happen at the beginning or at the end when you’re bored, or have exhausted your powers of interrogation. Nevertheless, your reasoning will tell you if the AI result is good, great, acceptable, or trash.

For example, you can ask Chat—Summarize all 50 with names, dates and 2-3 sentence summaries and 2-3 pull quotes. Which can be sufficient to jog your memory, and therefore validate or invalidate the Chat conclusion.

That’s the tool, and its accuracy is still TBD. I for one am not ready to blindly trust our AI overlords, but darn if a talking dog isn’t worth my time if it can make an argument with me.

reply
block_dagger
8 hours ago
[-]
Your colleagues using the tech will be far ahead of you soon, if they aren’t already.
reply
rsynnott
7 hours ago
[-]
... I mean, what tools one is supposed to be using, according to the advocates, seems to completely change every six months (in particular, the go-to excuse when it doesn't work well is "oh, you used foo? You should have used bar, which came out three weeks ago!"), so I'm not sure that _experience_ is particularly valuable even if these things ever turn out to be particularly useful.
reply
iLoveOncall
8 hours ago
[-]
Far ahead in producing bugs, far ahead in losing their skills, far ahead in becoming irrelevant, far ahead in being unable to think critically, that's absolutely right.
reply
pitched
7 hours ago
[-]
The new tools have sets of problems they are very good at, sets they are very bad at, and they are generally mediocre at everything else. Learning those lessons isn't easy, takes time, and will produce bugs. If you aren't making those mistakes now with everyone else, you'll be making them later when you do decide to start catching up, and they will be more noticeable then.
reply
SoftTalker
3 hours ago
[-]
Disagree. For the tools to become really useful (and fulfill the expectations of the people funding them) they will need to produce good results without demanding years of experience understanding their foibles and shortcomings.
reply
_DeadFred_
3 hours ago
[-]
And all of those things (good at, bad at, the lessons learned on current models current implementation) can change arbitrarily with model changes, nudges, guardrails, etc. Not sure that outsourcing your skillset on the current foundation of sand is long term smart, even if it's great for a couple of months.

It may be that those un-learning the previous iteration's interactions once something stable arrives are the ones at a disadvantage?

reply
evilduck
1 hour ago
[-]
Why would the AI skeptics and curmudgeons today not continue to dismiss the "something stable" in the future?
reply
afandian
8 hours ago
[-]
"The market can stay irrational longer than you can stay solvent" feels relevant here.
reply
purplehat_
9 hours ago
[-]
I often see things like this and get a little bit of FOMO because I'd love to see what I can get out of this but I'm just not willing to upload all these private documents of mine to other people's computers where they're likely to be stored for training or advertising purposes.

How are you guys dealing with this risk? I'm sure nobody on this site is naive about the potential harms of tech, but if you're able to articulate how you've figured out that the risk is worth the benefits to you, I'd love to hear it. I don't think I'm being too cynical to wait either for local LLMs to get good or for me to be able to afford expensive GPUs for current local LLMs, but maybe I should be time-discounting a bit harder?

I'm happy to elaborate on why I find it dangerous, too, if this is too vague. Just really would like to have a more nuanced opinion here.

reply
jwr
7 hours ago
[-]
> I'm just not willing to upload all these private documents of mine to other people's computers where they're likely to be stored for training or advertising purposes.

And rightfully so. I've been looking at local LLMs because of that and they are slowly getting there. They will not be as "smart" as the big models, but even a 30B model (which you can easily run on a modern Macbook!) can do some summarization.
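For anyone curious what that looks like, a minimal sketch against a locally running Ollama server (the model name is an assumption; /api/generate is Ollama's documented endpoint):

    import json, urllib.request

    # Sketch: summarize a private document with a local model via Ollama's
    # HTTP API. Nothing leaves the machine.
    def summarize_locally(text: str, model: str = "qwen3:30b") -> str:
        body = json.dumps({"model": model,
                           "prompt": f"Summarize in five bullets:\n{text}",
                           "stream": False}).encode()
        req = urllib.request.Request("http://localhost:11434/api/generate",
                                     data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]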

I just hope software for this will start getting better, because at the moment there is a plethora of apps, none of which are easy to use or even work with a larger number of documents.

reply
GeneralMaximus
2 hours ago
[-]
I've been analyzing my Obsidian vault using local LLMs that I run via Apple's mlx_lm. I'm on an M4 MacBook Pro with 48GB RAM.

The results are ... okay. The biggest problem is that I can't run some of the largest models on my hardware. The ones I'm running (mostly Qwen 3 at different numbers of parameters and quantization levels) often produce hallucinations. Overall, I can't say this is a practical or useful setup, but I'm just playing around so I don't mind.

That said, I doubt SOTA models would be that much better at this task. IMO LLM generated summaries and insights are never very good or useful. They're fine for assessing whether a particular text is worth reading, but they often extract the wrong information, or miss some critical information, or over-focus on one specific part of the text.

reply
ben_w
9 hours ago
[-]
The docs I upload are ones I'd be OK getting leaked. That also includes code. Even more broadly, it also includes whatever pics I put onto social media, including chat groups like Telegram.

This does mean that, useful as e.g. Claude Code is, for any business with NDA-type obligations I don't think I could recommend it over a locally hosted model: even though the machine needed to run a decent local model might cost €10k (with current price increases due to demand exceeding supply), even though that machine is still slower than whatever hosts the hosted models, and even though the rapid rate of improvement means a 3-month delay between SOTA in open weights and private weights is enough to matter*.

But until then? If I'm vibe coding a video game I'd give away for free anyway, or copy-editing a blog post that's public anyway, or using it to help with some short stories that I'd never be able to charge money for, or uploading pictures of the plants in my garden right by the public road… that's fine.

* When the music (money for training) stops, it could be just about any provider whose model is best, whatever that is is likely to still get distilled down fairly cheaply and/or some 3-month-old open-weights model is likely to get fine-tuned for each task fairly cheaply; independently of this, without the hyper-scalers the supply chains may shift back from DCs to PCs and make local models much more affordable.

reply
JonChesterfield
7 hours ago
[-]
> The docs I upload are ones I'd be OK getting leaked. That also includes code.

That's fortunate as uploading them to a LLM was you leaking them.

reply
ben_w
3 hours ago
[-]
"Leaking" is an unauthorised third party getting data; for any cloud data processor, data that is sent to that provider by me (OpenAI, everything stored on Google Docs, all of it), is just a counterparty, not a third party.

And it has to be unauthorised: e.g. the New York Times getting to see my ChatGPT history isn't itself a leak, because that's court-ordered and hence authorised; the >1200 "trusted partners" in GDPR popups are authorised if you give consent; etc.

reply
empiko
8 hours ago
[-]
I don't really buy this post. LLMs are still pretty weak at long contexts and asking them to find some patterns in data usually leads to very superficial results.
reply
embedding-shape
8 hours ago
[-]
No one said you can only run an LLM over the same task once. For my local tooling, I usually use the process of "do X with the previously accumulated results, add new results if they come up, otherwise reply with just Y", and then put that into a loop until the LLM signals it's done. Software-wise, you could make it continue beyond that too, for extra assurance.
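In sketch form (llm() is a hypothetical completion call; the sentinel string is arbitrary):

    # Sketch: run the same task chunk by chunk, carrying accumulated findings
    # forward, until the model signals it has nothing new to add.
    def accumulate(chunks: list[str]) -> str:
        findings = ""
        for chunk in chunks:
            reply = llm(f"Findings so far:\n{findings}\n\n"
                        f"New material:\n{chunk}\n\n"
                        "List any new findings; otherwise reply exactly NOTHING-NEW.")
            if reply.strip() != "NOTHING-NEW":
                findings += "\n" + reply
        return findings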

In general for chat platforms you're right though, uploading/copy-pasting long documents and asking the LLM to find not one, but multiple needles in a haystack tend to give you really poor results. You need a workflow/process for getting accuracy for those sort of tasks.

reply
dns_snek
8 hours ago
[-]
> and then you put that into a loop until LLM signals it's done

And after that? What's next?

reply
jennyholzer2
8 hours ago
[-]
no one said you can't turn on the radio and start listening to static
reply
embedding-shape
8 hours ago
[-]
Sure. Is there a point you're trying to make by saying this? I'm afraid your comment is so succinct it isn't obvious what you are trying to say.
reply
jennyholzer2
8 hours ago
[-]
Ask the LLM if it understands what I'm trying to say
reply
ashirviskas
7 hours ago
[-]
It really depends on how deep you want to go. And this will likely not be useful in any way, other than a new hobby. Me and my friends who do this kind of thing, we do it for fun.

If it was not fun for me, I would not have bought 3 GPUs just to run better local LLMs. Actual time, effort and money spent on my local setup compared to the value I get does not justify it at all. For 99% of the things I do I could have just used an API and paid like $17 in total. Though it would not have been as fun. For the other 1% I could have just rented some machine in cloud and ran LLMs there.

If you don't have private crypto keys worth millions in your notes, but still worry about your privacy, I'd recommend just renting a machine/GPU from a smaller cloud provider (not the big 3 or 5) and doing these kinds of things there.

reply
jennyholzer2
8 hours ago
[-]
OP's post is trying to con you into using an LLM for a task it does not perform well at.

This is specific, but if you start replying to LLM summaries of emails, instead of reading and responding to the content of the email itself, you are quickly going to become a burden socially.

The people you are responding to __will__ be able to tell, and will dislike you for your lack of consideration.

reply
roadside_picnic
1 hour ago
[-]
AI's real super power is telling you what you want to hear (doubly true since RLHF became the standard).

You can really see the limitations of LLMs when you look at how poorly they do at summarization. They most often just extract a few key quotes from the text, and provide an abbreviated version of the original text (often missing key parts!)

Abbreviation is not summarization. To properly summarize, you need to be able to understand the higher-level abstractions implied in the text. At a fundamental level this is not what LLMs are designed to do. They can interpolate and continue existing text in remarkable and powerful ways, but they aren't capable of getting the "big picture". This is likely related to why they frequently ignore very important passages when "summarizing".

> We're still thinking about AI like it's 2023.

Just a reminder that in 2023 we were all told that AI was on a path of exponential progress. Were this true, you wouldn't need to argue that we're using it "wrong": the technology would have improved so much more dramatically than it did from 2021-2023 that even using it "wrong" would still be a massive improvement.

reply
rikroots
1 hour ago
[-]
I'm wary about using AI models to generate stuff for me - I still bristle from the time a model told me that "JS Sets are faster than Arrays" and I believed it, until I discovered that it forgot to add the important piece of information: for Arrays containing tens of thousands of elements. Which made me feel stupid.

Still, I find the models to be excellent synthesisers of vast quantities of data on subjects in which I have minimal prior knowledge. For instance, when I wanted to translate some Lorca and Cavafy poems into English, I discovered that ChatGPT had excellent knowledge of the poems in their native languages, and of the difficulties translators faced when rendering them into English. Once I was able to harness the models to assist me in translating a poem, rather than generate a translation for me (every LLM is convinced it's a Poet), I managed to write some reasonable poems that met my personal requirements.

I wrote about the experience here: https://rikverse2020.rikweb.org.uk/blog/adventures-in-poetry...

reply
skydhash
9 hours ago
[-]
The article is more about offloading your thinking to the machine than about the real use of notes. You may as well make every decision rely on a coin toss.

I take notes for remembrance and relevance (what is interesting to me). But linking concepts is all my thinking. Doing whatever the article is prescribing is like sending someone on a tourist trip to take pictures and then bragging that you visited the country, while knowing that some of the pictures are photoshopped.

reply
bzmrgonz
9 hours ago
[-]
I disagree. What AI brings to the table is instant and total recall of our thoughts/notes/experiences. Deep analysis of that vast data store is only possible via AI, which should trigger the "aha!" moment, or the "you're crazy, AI" moment. Either way, it's very useful. And we haven't even talked about the knowledge we have collecting digital dust in the emails, notes, and reports of past employees.
reply
skydhash
7 hours ago
[-]
There's a lot more dimension to the notes we take than what is actually written down. You can share your notes to other people and their interpretation would be very different to what you intended. It even happens with full books and articles. Even lots of metadata don't really help.

Text is a very linear medium. It's just the spark while our wealth of experiences is the fuel. No amount of wrangling the word "pain" will compare to actually experiencing it.

You'd be better served by just having a spaced repetition system for the notes you've taken. That way, you'll be reminded of the whole experience from when you took the note, instead of reading words that were never written by someone who has lived.

reply
ben_w
7 hours ago
[-]
AI at its best can be an improvement on human recall, but even at its best it is not

> instant and total recall of our thoughts/notes/experiences

Closest is with vector searches & RAG etc., but even that isn't total recall because it will misclassify stuff with current SOTA.

Throwing everything in a pile and hoping an LLM will sort it all out for you, is at present even more limited.

They're good, sure, but you're overstating them.

reply
rsynnott
7 hours ago
[-]
Hrm. Pretty much every LLM summary I've seen of a document I've read, or a meeting I've attended, at absolute best misses important details and overemphasises trivia, and often just flat-out makes stuff up. I'm not sure I want my own _thoughts_ filtered through that.
reply
R_D_Olivaw
7 hours ago
[-]
Have you tried NotebookLM at all? I'm actually pretty astonished by its audio overviews and the insights/summaries it's able to produce. Of course, with the right prompt massaging. But it's actually really, really well done.
reply
coliveira
7 hours ago
[-]
How is that different from selecting a random portion of the archive? If all you want is a sudden reminder of something you may have forgotten, you don't need an AI for that. It's just another completely irrelevant use of AI.
reply
kitd
9 hours ago
[-]
At least half of AI's "superpower" in OP's case is the fact that he has everything in Obsidian already. With all of that background context, any tool becomes super valuable in evaluating & guiding future actions.

Still, all credit to him for creating that asset in the first place.

reply
SoftTalker
3 hours ago
[-]
Pretty much my reaction too. I'd guess maybe 1 in 100 people are that diligent about journaling. Good for him that he is getting a payoff for his effort.
reply
vignesh-prasad
1 hour ago
[-]
This is a really cool insight. I'm going to try this in my Obsidian vault as well. What are some of the highest leverage items to add to your vault to start with?

I think meetings is one thing I'm missing out on. How do you put meeting information into your Obsidian? Is it just transcripts?

reply
acituan
3 hours ago
[-]
We know the power of JOIN from the era of data: bring in two different data sources about a thing and you could produce an insight neither of them could have provided alone.

LLMs can be thought of as one big stochastic JOIN. The new insight capability, thanks to their massive recall, is there. The problem is the stochasticity. They can retrieve stuff from the depths and slap it together, but in these use cases we have no clue how relevant their inner ranking results or intermediary representations were. Even with the best read of user intent, they can only simulate relevance, not really compute it in a grounded and groundable way.

So I take such automatic insight generation tasks with a massive grain of salt. Their simulation is amusing and feels relevant but so does a fortune teller doing a mostly cold read with some facts sprinkled in.

> → I solve problems faster by finding similar past situations → I make better decisions by accessing forgotten context → I see patterns that were invisible when scattered across time

All of which makes me skeptical of this claim. I have no doubt they feel productive, but it might just as well be part of that simulation, with all the biases, blind spots, etc. originating from the machine. Which could be worse than not having used the tool. Not having augmented recall is OK, and forgetting things is OK, because memory is not a passive reservoir of data but an active reranker of relevance.

LLMs can't be the final source of insight and wisdom; they are at best sophists, or, as Terence Tao put it more kindly, a mere source of cleverness. As such, they can just as well augment our capacity for self-deception, maybe even more than they counterbalance it.

Exercise: whatever amusing insight a machine produces for you, ask for a very strong counter to it. You might be equally amused.

reply
ericlamb89
8 hours ago
[-]
Agree with OP that LLMs are a great tool for this use case. It's made possible because OP diligently created useful input data. Unfortunately OP's conclusion goes against the AI hype machine. If "consuming" is the "superpower" of AI, then the current level of investment/attention would not be justified.
reply
impendia
9 hours ago
[-]
I was in a research math lecture the other day, and the speaker used some obscure technical terminology I didn't know. So I dug out my phone and googled it.

The AI summary at the top was surprisingly good! Of course, the AI isn't doing anything original; instead, it created a summary of whatever written material is already out there. Which is exactly what I wanted.

reply
Arisaka1
9 hours ago
[-]
My counterpoint to this is, if someone cannot verify the validity of the summary then is it truly a summary? And what would the end result be if the vast majority of people opted to adopt or deny a position based on the summary written by a third party?

This isn't strictly a case against AI, just a case that we have a contradiction on the definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being fully unable to defend or counterpoint what we just read.

I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". This is due to a lack of assumed background knowledge, which makes it hard to verify a summary on your own.

reply
gosub100
8 hours ago
[-]
At least with pre-AI search, the info is provided with a source. So there is a small level of reputation that can be considered. With AI, it's a black box that someone decides what to train it on, and as someone said elsewhere, there's no way to police its sources. To get the best results, you have to turn it loose on everything.

So someone who wants a war or wants Tweedledum to get more votes than Tweedledee has incentives to poison the well and disseminate fake content that makes it into the training set. Then there's a whole department of "safety" that has to manually untrain it to not be politically incorrect, racist etc. Because the whole thesis is don't think for yourself, let the AI think for you.

reply
lazide
9 hours ago
[-]
The issue is even deeper - the 1 thing in 5 minutes was probably already surface knowledge. We don’t usually really ‘know’ the thing that quickly. But we might have a chance.

The 3 things in 5 minutes is even worse - it’s like taking Google Maps everywhere without even thinking about how to get from point A to point B - the odds of knowing anything at all from that are near zero.

And since it summarizes the original content, it’s an even bigger issue - we never even have contact with the thing we’re putatively learning from, so it’s even harder to tell bullshit from reality.

It’s like we never even drove the directions Google Maps was giving us.

We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s

reply
FridayoLeary
8 hours ago
[-]
I have to agree. People moan that the AI summary is rubbish, but that misses the point. If I need a quick overview of a subject, I don't necessarily need anything more than a low-quality summary. It's easier than wading through a bunch of blogs of unknown quality.
reply
rsynnott
7 hours ago
[-]
> If I need a quick overview of a subject, I don't necessarily need anything more than a low-quality summary

It's true. I previously had no idea of the proper number of rocks to eat, but thanks to a notorious summary (https://www.bbc.com/news/articles/cd11gzejgz4o) I have all the rock-eating knowledge I need.

reply
jennyholzer2
8 hours ago
[-]
In my experience Google's AI summaries are consistently accurate when retrieving technical information. In particular, documentation for long-lived, infrequently changing software packages tends to be accurate.

If you ask Google about news, world history, pop culture, current events, places of interest, etc., it will lie to you frequently and confidently. In these cases, the "low quality summary" is very often a completely idiotic and inane fabrication.

reply
sam_goody
9 hours ago
[-]
I have a counterpoint from yesterday.

I looked up a medical term that is frequently misused (e.g. "retarded") and asked Gemini to compare it with similar conditions.

Because I have enough of a background in the subject matter, I could tell what it had construed by mixing the many incorrect references with the far fewer correct references in the training data.

I asked it for sources, and it failed to provide anything useful. But once I am looking at sources anyway, I would be MUCH better off searching and reading only the sources that might actually be useful.

I was sitting with a medical professional at the time (who is not also a programmer), and he completely swallowed what Gemini was feeding him. He commented that he appreciates that these summaries let him know when he is not up to date with the latest advances, and that he learnt a lot from the response.

As an aside, I am not sure I appreciate that Google's profile would now associate me with that particular condition.

Scary!

reply
kenjackson
8 hours ago
[-]
This is just garbage in, garbage out. Would you be better off if I gave you an incorrect source? What about three incorrect ones? And a search engine would also associate you with this term now. Nothing you describe here seems specific to AI.
reply
apothegm
7 hours ago
[-]
The issue is how terrible the LLM is at determining which sources are relevant. Whereas a somewhat informed human can be excellent at it. And unfortunately, the way search engines work these days, a more specific search query is often unable to filter out the bad results. And it’s worst for terms that have multiple meanings within a single field.
reply
kenjackson
3 hours ago
[-]
That word "somewhat" in "somewhat informed" is doing a lot of lifting here. That said, I do think that having a little curation in the training data probably would help. Get rid of the worst content farms and misinformation sites. But it'll never be perfect, in the same way that getting any content in the world today isn't perfect (and never has been).
reply
solumunus
8 hours ago
[-]
Try the same with Perplexity?
reply
ge96
4 hours ago
[-]
Wrt temperature/randomization, is it not possible for it to create something genuine? Even in life there always seems to be some inspiration for the things being made. How did Tesla go from brushed to brushless AC motors? There was some foundational knowledge of electricity, similar to airplanes: the Wright Brothers' airplane seems backwards (like a canard) but is still a plane and needs wings, not something radical like ion engines for lift (much harder).
reply
neom
9 hours ago
[-]
A guy I work with has been doing this, I watched his tutorial and it was all a bit... overwhelming for me (to think about using such a system), I'm still on pen and paper, heh. Nevertheless - here is his template: https://github.com/kmikeym/obsidian-claude-starter and tutorial: https://www.youtube.com/watch?v=1U32hZYxfcY
reply
jennyholzer2
8 hours ago
[-]
he'd probably get better productivity results if he started smoking crack
reply
kmikeym
2 hours ago
[-]
i happen to know he takes vyvanse, which is kind of the same thing
reply
mettamage
9 hours ago
[-]
Sorry, is this new? Providing the right data to LLMs supercharges them. Yes, I agree. I've been doing this since March 2025 when there was a blog post about using MCP on HN. I'm not the only one who's doing that.

I've written my whole life story, the parts I'm willing to share that is, and posted it in Claude. With that, it helped me way better with all kinds of things. It took me 2 days to write without formatting, pretty much how I write all my HN comments (but then 2 days straight: eat, sleep, write).

I've also exported all my notes, but it's too big for the context. That's why I wrote my life story.

From a practical standpoint I think the focus is on context management. Obsidian can help with this (I haven't used it, so I don't know the details). For code, it means doing things like static and dynamic analysis to see which function calls what, creating a topology of function calls, and sending that as context; then Claude Code can more easily know what to edit, and it doesn't need to read all the code.
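e.g., a rough sketch of extracting such a call topology for Python code with the stdlib ast module (a real project would want cross-file analysis, but the idea is the same):

    import ast

    # Sketch: build a "which function calls what" map to hand to the model
    # as compact context instead of the full source tree.
    def call_topology(source: str) -> dict[str, list[str]]:
        topo = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                calls = [n.func.id for n in ast.walk(node)
                         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
                topo[node.name] = sorted(set(calls))
        return topo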

reply
TN1ck
9 hours ago
[-]
Curious, what did you get out of it? Counseling? Some action plan? A reflection? Seems intriguing to do, but would like to know how it helped you exactly if you don’t mind sharing.
reply
mettamage
8 hours ago
[-]
Career planning at the moment, tailoring resumes. Currently, it's not tailoring it well enough yet because it's hallucinating too much, so I need to write a specific prompt for that. But I know for work, where I do similar things (text generation with a human in the loop), that I can tackle that problem.

So yea, I definitely add to the pile of "AI generated" text, but I read over all the texts, and usually they don't get sent out. Ultimately, it's still a lot quicker to do it this way.

For career planning, so far it hasn't beaten my own insights but it came close. For example, it mentioned that I should actually be a developer advocate instead of a software engineer. 2 to 3 years ago I came to that same thought. I ultimately rejected the idea due to how I am but it is a good one to think about.

What I see now, I think the best job for me would be a tech consultant. Or as I'd also like to call it: a data analyst that spots problems and then uses his software engineering or teaching skills to solve that problem. I don't think that job has a good catch all title as it is a pretty generalist job. I'm currently at a company that allows me to do this but the pay is quite low, so I'm looking for a tech company where I could do something similar. Maybe a product manager role? It really depends on the company culture.

What I also noticed it did better: it doesn't reduce me to data engineering anymore. It understands that I aspire to learn everything and anything I can get my hands on. It's my mode of living and Claude understands that.

So nothing too spectacular yet, but it'll come. It requires more prompt/context engineering and fine tuning of certain things. I didn't get around to it yet.

reply
paddleon
7 hours ago
[-]
> What I also noticed it did better: it doesn't reduce me to data engineering anymore. It understands that I aspire to learn everything and anything I can get my hands on. It's my mode of living and Claude understands that.

I'm really glad you are getting some personal growth out of these tools, but I hesitate to give Claude as much credit as you do. And I'm really cautious about saying Claude "understands", because that word has many meanings and it isn't clear which ones apply here.

What I'm hearing is that you use it like a kind of rubber-duck debugger. Except this is a special rubber duck because it can replay/rephrase what you said.

reply
fsckboy
4 hours ago
[-]
AI's consumption superpower reminds me of birds: flying about eating worms, then flying back to the nest and regurgitating them into baby's mouth, because it's the processed nutrition they provide that's valuable; their own consumption is fractional and temporary.
reply
TimByte
3 hours ago
[-]
I think the key difference (and risk) is that with AI we sometimes forget to check what got digested and what got lost along the way
reply
stuffn
3 hours ago
[-]
"AI" is more like a bird that flies around eating worms, sometimes regurgitates the nutrition, and sometimes regurgitates a pound of bolts. Then it apologizes, flies around again, and regurgitates pebbles. It does it again but this time it regurgitates acid that vaguely appears to be the same nutritious substance. If it flies around too much it will lose it's context and begin to think it's not a bird but rather a donkey.

Most of "AI"s superpower is tricking monkeys into anthropomorphizing it. It's just a giant, complicated, expensive, environmentally destructive math computer with no capability to create novel thought. If it did have one superpower it's gaslighting and manipulation.

reply
Lerc
8 hours ago
[-]
I think this will be a significant thing in the future, but right now I think the reasoning abilities are too limited. It can reasonably approximate a vector database where it can find related things, but I think that success can hide the failure to find important things.

I'd like to be able to point a model at a news story and have it follow every fact and claim back to an origin (or the lack of one). I'm not sure when they will be able to do that; they aren't up to the task yet. Reading the news would be so much different if you could separate the "we report this happened" from the "we report that someone else reported this happened".

reply
nnnnico
9 hours ago
[-]
What is the approach used? Does everything get done in context, with plain text searches by some agent like Claude Code, or is there RAG involved? (Was the article written by AI? It has that LinkedIn groove all over the place.)
reply
adidoit
9 hours ago
[-]
Ironically, this article/blog itself gives off an AI-generated smell, as its tone and cadence seem very similar to LinkedIn posts, or rather to the output of prompts to create LinkedIn posts.
reply
sallveburrpi
9 hours ago
[-]
Does anyone have a simple setup for this with local LLMs like Mistral that they can share?

I would love to try this out but don’t feel comfortable sharing all my personal notes with a third party.

reply
johndhi
7 hours ago
[-]
Can you share some prompt examples you use to try to ensure it doesn't get "lazy" and just cherry pick from here and there?

I have a written novel draft and something like a million words of draft fiction but have struggled with how to get meaningful analytics from it.

reply
tigranbs
9 hours ago
[-]
I would say the AI consumption aspect was a side effect: the primary goal was to "generate" new stuff. So far, to me, the significant boost is the coding aspect. Still, for the rest of the people, I think you are right: 90% of the benefits come from being an interactive, conversational search on top of the available information that AI can read/consume.
reply
zkmon
7 hours ago
[-]
I don't see what's new here. The biggest enterprise usecase for AI is to "consume" the vast amount of internal wiki pages, process documents, policy manuals, code repos, presentations and be able to answer questions.
reply
ratdragon
10 hours ago
[-]
I do use such an approach, and it is actually awesome, though only for data I'm sure I don't mind being sold.
reply
bzmrgonz
9 hours ago
[-]
If we pair this with a wearable AI pendant like Plaud or Limitless, we can increase the amount and frequency of deposits into our knowledge vault. OP, do you type your thoughts and notes or dictate them?
reply
n49o7
4 hours ago
[-]
Compound the gains again by asking AI to write the questions too!
reply
pqs
9 hours ago
[-]
This is the right approach. I exported my 25k Evernote notes to markdown (I'm using Emacs' Howm mode) and I use Codex CLI to ask questions about my notes. It is great and powerful!
reply
monkeydust
9 hours ago
[-]
This is akin to using AI as a 'second brain'. I'm just getting started with Obsidian; my main challenge is loading it up with every communication trace I have... but I haven't given up.
reply
TimByte
3 hours ago
[-]
This approach feels like a much more honest use
reply
ozgung
8 hours ago
[-]
What is a good way of connecting Obsidian vault to AI?
reply
Ove_K
3 hours ago
[-]
Digital Information Consumers or Digital Information Copiers or Digital Information Creators

Either way, they are D.I.C.s

reply
mattsears
3 hours ago
[-]
Yay, another HN post confidently claiming "everyone’s doing X wrong."

"Everyone’s using AI wrong." Oh, we are? Please, enlightened us thought leader, tell us how we’ve all been doing it wrong this entire time.

"Here’s how most people use AI." No, that’s how you use AI. Can we stop projecting our own habits onto the whole world and calling it insight?

reply
tomgag
9 hours ago
[-]
For fuck's sake, isn't anyone here horrified at how much information on yourself you are willingly funneling into Big Tech with this approach?
reply
ramigb
9 hours ago
[-]
It is scary! My coping mechanism, which I admit is stupid, is to believe that no matter what I do, as long as I am online they have my data. But you are right, most people just give away absurd amounts of data willingly.
reply
SoftTalker
3 hours ago
[-]
Not stupid at all. Nothing you do online should be considered private.
reply
sirsau
3 hours ago
[-]
How would this affect my life in any way? Better ads for me? More useful searches? I really don't get this obsession with privacy.
reply
gosub100
8 hours ago
[-]
And the surveillance could be inversely correlated with profitability. If they pour billions into these chatbots and can't monetize them into the revolutionary oracles they touted, one minor consolation is to sell detailed profiles of the people using them. You could probably sort out the less intelligent people based on what they were asking.
reply
exasperaited
2 hours ago
[-]
AI-powered solipsistic reassurance?
reply
G_o_D
8 hours ago
[-]
That's why proofreading jobs still exist.
reply
catigula
3 hours ago
[-]
How many times have the goal-posts shifted now?

Everyone is justifiably afraid of AI because it's pretty obvious that Claude Opus 4.5 level agents replace developers.

reply
mrmansano
2 hours ago
[-]
> it's pretty obvious that Claude Opus 4.5 level agents replace developers.

Is it though? I really don't see it. Replacing developers requires way more than writing the right code. I can agree it can replace junior to mid level engineers at some tasks, specifically in greenfield projects and popular stacks. And, don't get me wrong, it's very helpful even for senior engineers. But to "replace" those it will require some new iterations of "Opus 4.5".

reply
catigula
2 hours ago
[-]
I'm going to be honest because I have nothing to lose; I've never in my entire life met a single senior engineer that can surpass Opus generally. Some can surpass it at problem solving but they aren't common.
reply
emp17344
3 hours ago
[-]
This isn’t goalpost shifting - everyone is trying to figure out what AI is good at. It’s certainly not the panacea many here make it out to be.
reply
catigula
3 hours ago
[-]
We know what it's good at - software engineering.
reply
ramigb
9 hours ago
[-]
I found that out while working with music models like Suno! I love creating music for my own listening experience as a hobbyist, and when I give Suno a prompt, no matter how well crafted it is, the outcome varies from "meh" to "that's good"... while when I upload a semi-finished beat I made and prompt it to cover it, the results consistently leave me speechless! It could be a bias, since the music has a lot of elements I created, but this workflow is similar across other generative models for me.
reply
fuzzfactor
8 hours ago
[-]
>real superpower: consuming, not creating

Well for most humans that's the more super of the powers too ;)

reply
achenet
7 hours ago
[-]
So I decided to give this a try - I have a `writing` directory/git repo where I store most of my writing, which can be anything from notes on technical subjects (list of useful syntax for a given programming language), to letters to friends, to philosophical ramblings.

I opened Claude Code in the repo and asked it to tell me about myself based on my writing.

Claude's answer overestimated my technical skills (I take notes on stuff I don't know, not on things I know, so it assumed that I had deep expertise in things I'm currently learning, and ignored areas where I do have a fair amount of experience), but the personal side really resonated with me.

reply
heliumtera
8 hours ago
[-]
Not really surprising that a tool created for surveillance and mass profiling turned out to be pretty good at surveilling and profiling.
reply
embedding-shape
8 hours ago
[-]
Is this like your belief? That Transformers et al. were invented by researchers with the explicit goal of surveillance and mass profiling? You think maybe that could have been an unintended effect of something/someone else? Or is it all the researchers' fault?
reply