A quick look at Mythos run on Firefox: too much hype?
46 points
2 hours ago
| 8 comments
| xark.es
| HN
helsinkiandrew
58 minutes ago
[-]
Whatever the capabilities, there’s always a little hype, or at least the risk turns out not to be as great as thought:

> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

That was for GPT-2 https://openai.com/index/better-language-models/

reply
1una
14 minutes ago
[-]
In the same article you linked:

> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code.

7 years later, these concerns seem pretty legit.

reply
dwedge
7 minutes ago
[-]
This article felt really informative at first, but at some point it was like reading an LLM getting stuck in a circle.
reply
goalieca
1 hour ago
[-]
There was a double-fronted marketing push by both organizations. That much is true, and it makes me more skeptical of the message and how exactly it was framed.

If we just stick with C/C++ systems, pretty much every big-enough project has a backlog of thousands of these things: either simple ones, like compiler warnings for uninitialized values, or fancier tool-verified off-by-one write errors that aren’t exploitable in practice. There are many genuinely bad things in there, but they’re hidden in the backlog, waiting for someone to triage them all.

Most orgs just look at that backlog and accept it; it takes a pretty big $$$ investment to clear.

I would like to see someone do a big deep dive in the coming weeks.

reply
bblb
9 minutes ago
[-]
Can IDEs be configured so they won't allow saving file changes if they contain the usual suspects (buffer overflows and whatnot)? An LLM would scan the file and deny the write operation.

Like the Black formatter for Python in VS Code, which runs when you hit Ctrl+S.

reply
Eufrat
1 hour ago
[-]
Probably worth noting that the new-ish Mozilla CEO, Anthony Enzor-DeMeo, is clearly an AI booster, having talked about wanting to make Firefox into a “modern AI browser”. So I don’t doubt that Anthropic and Mozilla saw an opportunity to make a good bit of copy.

I think this has been pushed too hard, and with the general exhaustion at people insisting that AI is eating everything and the moon, these claims are getting kind of farcical.

Are LLMs useful for finding bugs? Maybe. Reading the system card, I guess if you run the source code through the model 10,000 times, some useful stuff falls out. Is that worth it? I have no idea anymore.

reply
MyFirstSass
31 minutes ago
[-]
Hackernews has also been completely co-opted by boosters.

So much so that I don't really visit anymore, after 15 years of use.

It's a bizarre situation: billions in marketing, PR, and astroturfing; torrents of fake news with streams of comments beneath them showing zero skepticism and an almost horrifying worship of these billion-dollar companies.

Something completely flipped here at some point. I don't know if it's because YC is also heavily pro these companies and embedded with them, requiring YC applicants to slop-code their way in, then cheering about it.

Either way, it's incredibly sad and reminds me of the worst of the casino economy (NFTs, crypto, web3), while there's actually an interesting core, regex on steroids with planning aspects, but it's constantly oversold.

I say that as a daily user of Claude Max for over a year.

reply
bawolff
39 minutes ago
[-]
One thing to keep in mind is that Firefox is probably a pretty hard target. Everyone wants to try to hack a web browser; one assumes the low-hanging fruit is mostly gone.

I think the fact this is even a conversation is pretty impressive.

reply
Bishonen88
36 minutes ago
[-]
You're probably right, but given the browser usage distribution, I reckon most attackers wouldn't care about Firefox at this point and would concentrate solely on Chrome. I also reckon Firefox users are, on average, more tech-savvy, and given a hack, would be able to help themselves or find out about it quicker than the average Chrome user.
reply
nazgu1
1 hour ago
[-]
Why do people publish AI-written articles? If I wanted to read AI output, I could just prompt it myself, and when I read something on someone's blog, I expect to read the thoughts of that particular human being...
reply
Bishonen88
39 minutes ago
[-]
While the text seems to be at least AI-supported, I think the research is still interesting. Whether it was done mostly by the author or by an AI doesn't change much, to me at least.

I'd appreciate some sort of disclaimer at the start of each article as to whether it's AI-written/assisted or not. But I guess authors understand that this would diminish the perceived value of their work.

reply
schnitzelstoat
38 minutes ago
[-]
It’s just marketing. Remember when OpenAI said GPT-2 was too dangerous to release?
reply