But it's not just AI bots or interfaces. Everything is saved and never deleted.
Remember Facebook? "We will never delete anything": that is their business model.
So anything you put on those "services" is out of your hands. But we still have an option: stop using these ad companies and let them die.
Back to AI: there are loads of offline models we can use, and tools like Ollama will even download them for you. Install Ollama, find a model name on the Ollama site, run "ollama run model-name", and you can use it.
OK, it is not as good as GPT-5, but it can help you so much that you might not even need ChatGPT.
I've had great success running Qwen3-8B-GGUF[1] on my RTX 2070 SUPER (8GB VRAM) using Oobabooga (the project is actually text-generation-webui, but everyone calls it by the author's name; it's much catchier), so this is definitely doable on consumer hardware. Specifically, I run the Q4_K_M quant because Oobabooga loads all of its layers into the GPU by default, making it nice and snappy. (Testing has shown that I can actually go up to the Q6_K quant before some layers have to be offloaded to the CPU, but then I have to manually specify that all layers should go to the GPU, as opposed to leaving it auto-determined.)
It does obviously hallucinate more often than ChatGPT does, so care should be taken. That said, it's really nice to have something local.
There's a subreddit for running text gen models locally that people might be interested in: https://www.reddit.com/r/LocalLlama
If you are here and you require this reminder, I would like to think that you are very lost.
A privilege that is limited to the top 1%. It may come as a surprise, but most people don't have 32GB of VRAM [0]. The rest of us with normal people hardware are stuck with AI cloud providers or good old searching, which is a lot harder now that those same AI providers have ruined search results.
[0] There are some lightweight models you can run on normal people hardware, but they are just too unreliable even for casual usage and are likely to waste more of your time than they save.
You mean “remind”?
What people don't want to do is sign up for yet another subscription. There's immense subscription fatigue among the general population, especially in tough economic times such as now.
I've been writing code for many years, but one of the areas I wanted to improve was debugging. I've always printed variables, but last month I decided to start using a debugger instead of logging to the console. For the past few weeks I've only been using breakpoints and the resume function, because the step-into, step-over, and step-out functions have always confused me. An hour ago I sent Gemini images of my debugger and explained my problem, and it actually told me what to do and explained what each of the step-* functions did, step by step (I sent it a new screenshot after each step and asked it to explain what was going on).
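For anyone else confused by those commands, here's a toy sketch (the function names are made up for illustration) of where each step lands:

```python
def helper(x):
    # "Step into" on the helper(3) call below stops here, inside the callee.
    # "Step out" from here finishes helper() and returns to the caller.
    return x * 2

def main():
    y = helper(3)  # "Step over" runs helper() entirely and stops on the next line.
    z = y + 1
    return z

print(main())  # prints 7
```

Resume, by contrast, just runs until the next breakpoint (or the end of the program).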
I now have a much better understanding of how debuggers work thanks to Gemini.
I’m fine with google getting my data, the value I just got was immense.
It may seem obvious, but Sam Altman also recently emphasized that the information you share with ChatGPT is not confidential, and could potentially be used against you in court.
[1] https://www.pcmag.com/news/altman-your-chatgpt-conversations...
[2] https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no...
It would be weird for him not to be transparent about that
https://privacy.anthropic.com/en/articles/10023555-how-do-yo...
> We do not actively set out to collect personal data to train our models
The 'snarky tech guy' tone of the article is a bit like nails on a chalkboard.
https://news.ycombinator.com/item?id=44778764
Interesting how much traction
"[x] Make this chat discoverable (allows it to be shown in web searches)"
gets in news articles. People don't seem to have the same intuition for the web that they used to!
Just as a PSA: there's nothing unique to AIs here. Whenever you ask a question of anyone, in any way, they then have the memory of you having asked it. A lot of sitcoms and comedic plays have their plot premise built upon a question someone voiced eventually reaching (either accurately or inaccurately) the person it was being hidden from.
And as someone who's into spy stories, I know that a big part of tradecraft is formulating your questions in a way that divulges the least about your actual intentions and current information.
If anything, LLM-driven AIs are the first technology that in principle allows you to ask a complex question that is immediately forgotten. The catch is that you need to be running the AI yourself; if you ask an AI controlled by another entity, then you're trusting that entity with your question, regardless of whether there's an AI along the way.
This isn't airtight, but it's a point of principle for most libraries and librarians, and they've gone to the mat over this. https://www.newtactics.org/tactics/protecting-right-privacy-...
What was curious about this was that, at the time, there were few dangerous books in libraries: The Catcher in the Rye and 1984, and that was about it. You wouldn't find a large-print copy of Che Guevara's Guerrilla Warfare, for instance.
I disagree that libraries minimise the risk of anyone knowing who is reading what. On the web, where so much is tracked by low-intelligence marketing people, there is more data than anyone can deal with. In effect, nobody can follow you that easily, only machines, producing data that humans can't make sense of.
Meanwhile, libraries have had really good IT systems for decades, with everything tracked in a meaningful form with easy lookups. These systems are state-owned, so it is no problem for a three-letter agency to get the information it wants from a library.
This, of course, doesn't mean your information is irretrievable by TLAs. But the premise of "tap every library to bypass the legal protections against data harvesting" is much trickier when applied to libraries than when applied to, say, Google. They also aren't meaningfully "state-owned" any more than the local Elk's Club is state-owned; the vast majority of libraries are, at most, a county organ, and it is the particular and peculiar style of governance in the United States that when the Feds come knocking on a county's door, they can also tell them to come back with a warrant. That's if the library is actually government-affiliated at all; many are in fact private organizations that were created by wealthy donors at some point in the past (New York Public Library and the Carnegie Library System are two such examples).
Many libraries also purposefully discard circulation data so as to minimize the surface area of what can be subpoenaed. New York Public Library, for example, as a matter of policy purges the circulation data tied to a person's account soon after each loaned item is returned (https://www.nypl.org/help/about-nypl/legal-notices/privacy-p...).
I feel like most people don't wait until their friends are in the room to ask them questions or exchange info.
Which makes this article quite misleading.
Oh, nice idea. We should all ask that.
Lemme ask ShatGPT how to do that!
Otherwise, I use a local model for complex or potentially controversial questions.
Worse yet, you might individually make a choice to do that, but others might not care. They might feed your emails/chats with them into a chatbot to parse them or to "make it think like them", and then the chatbot has info about you. So, as much as I understand this sentiment, it seems like a losing battle.
In general, it's wise to assume that all web interactions are a two-way street between the user and the service provider.
I wonder how far back this has been going on. Did ICQ, IRC server operators, or BBSes do similar things?
It wasn’t until around 2014 that I stopped building routes that did:
DELETE FROM <table> WHERE id = ?;
(with ON DELETE CASCADE declared on the child tables' foreign keys, so everything tied to the row went too)
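As an aside, the cascade belongs on the schema rather than on the DELETE statement itself. A minimal sketch with SQLite (table names are illustrative):

```python
import sqlite3

# ON DELETE CASCADE is declared on the child table's foreign key;
# the DELETE statement itself stays plain.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
db.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE
    )
""")
db.execute("INSERT INTO users VALUES (1)")
db.execute("INSERT INTO posts VALUES (10, 1)")
db.execute("DELETE FROM users WHERE id = 1")  # cascades to posts
print(db.execute("SELECT COUNT(*) FROM posts").fetchone()[0])  # prints 0
```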
Just curious what other OS has something similar? MacOS maybe?
The last 10 years of tech "innovation" is basically what the article describes but happening to other tech products[1]. So, why is this fear mongering? It's basically inevitable unless:
a. There's legislation. But I would bet on legislation for the opposite - storing chats forever - instead.
b. AI moves to on-device where users have control of their own data. Also unlikely considering how much tech loves web technologies and recurring revenue streams.
[1] https://www.cam.ac.uk/research/news/menstrual-tracking-app-d...
If I ask search.brave.com to give me a list of Gini coefficients for the top ten countries by GDP, it can't do it. However, if I tell it the data is available in the CIA World Factbook, it can then spit that info out promptly. But if I close the context and ask again, it hasn't learned this information and once again is unable to provide the list.
It didn't datamine me. It had no better idea where to find this information the second time I asked. This is the experience others have stated with other AIs as well. It does not seem special to brave.
I'm not expecting instant; even next week it won't be there. It's like how AI never learned to count how many times the letter r appears in "strawberry". Sure, now if you ask Brave it will tell you three, but only because that question went viral. It didn't "learn" anything; it was just hard-coded for that particular answer. Ask it how many times the letter l appears in "smallville" and it will get it wrong again.
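The irony is that outside the model the count is a one-liner; LLMs stumble here because they see tokens rather than individual characters. A quick check:

```python
# Character counting is trivial in ordinary code -- str.count is stdlib.
print("strawberry".count("r"))  # prints 3
print("smallville".count("l"))  # prints 4
```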
Thanks, that was enlightening.