Now the government is rolling out fully-automated entrapment bots.
Also reminds me of the Gretchen Whitmer kidnapping plot: https://slate.com/news-and-politics/2024/02/gretchen-whitmer...
You see this a lot in policing as well: people who seem “demographically criminal” to law enforcement are funneled into drug violations as an excuse to round up people they want to round up anyway.
In my experience, people don't want to socially invest in unknown people unless a friend, family member, or social group vouches for or introduces you.
When you get a clingon, it's almost always no good.
Fixed to reflect typical federal government procedure.
What's "legal" doesn't really matter.
It does, as much as it always has. What's different is that elected politicians think it doesn't matter and stop enforcing the law.
But it will have consequences, because just laws that apply to everybody create a very different society, with very different capabilities, than a feudal system does.
The Middle Ages were not shitty because we forgot how to innovate; they were bad because feudal systems kill innovation and creativity while increasing suffering.
The United States has no jurisdiction over citizens of El Salvador in El Salvador. What is Trump supposed to do in this case, call up Pete Hegseth and order a commando style raid on the prison he’s being held in?
One source - https://www.nbcnews.com/news/us-news/new-documents-governmen...
Also, I originally said "resident" and not "legal resident" because I think it's blatantly insane that anyone in the US, with a legal right to be here or otherwise, is being captured and sent to a prison in a country they may or may not have ever been to, and in a country over which the US claims to have no authority to bring them back when ordered to do so. Kicking someone out of the US is one thing, but sending them to a shitbox supermax prison abroad is another entirely.
That said, it's also true that many of these people are LEGAL residents, which makes matters that much worse.
That’s the best way to honor the senseless tragedies of Laken Riley, Rachel Morin, Jocelyn Nungaray, and several others, and to prevent them from happening in the future.
One might suspect that a reason for the acceptance of takfiri thugs coming to power in Syria has to do with their lack of interest in the rule of law and the low likelihood that they will do robust investigations of Sednaya and other prisons.
https://www.icij.org/investigations/collateraldamage/post-91...
https://www.thenation.com/article/world/torture-prisons-syri...
What I do have sympathy for are their victims and the families of their victims. Innocent people like Laken Riley, Rachel Morin, Jocelyn Nungaray, and several others who suffered needlessly are why I’m against illegal immigration.
It's like deporting a war refugee straight into a fascist regime's concentration camp.
I can empathize with why people would want to immigrate to this country, but they need to do so legally.
Is this particular person MS-13? Did he have a legal right to be here? You don't know. None of us do, not for sure.
10 years ago, the idea that the government could sweep people off the streets and deliver them to a foreign prison with no trial or recourse would have been seen as absurd by every part of the political spectrum.
And in general, bad people still deserve trials. There is no crime you can point to someone doing that changes that.
Due process could be as simple as can you prove that you have a legal right to be in this country? If yes, you can stay, if no, then you get deported. He absolutely should have had due process prior to being deported. I am not arguing against that.
From everything I've been able to gather on this story, the issue isn't really whether he should have been deported, it's that there was a legal order preventing him from being deported to the country of El Salvador specifically because a rival gang in the country would kill him for being a member of MS-13.
> From everything I've been able to gather on this story, the issue isn't really whether he should have been deported, it's that there was a legal order preventing him from being deported to the country of El Salvador specifically because a rival gang in the country would kill him for being a member of MS-13.
If there's only one place you could reasonably be deported to, and there's an order saying you can't be deported there, then you can't be deported and you effectively have legal residency.
Again, these are EXACTLY the sorts of allegations that should be adjudicated in court. Citizens and non-citizens all have the right to a fair trial before imprisonment.
If all this is true, why couldn't the government try and convict him of a crime?
Because they couldn't, of course. The evidence is made up and parroted by useful idiots to justify the end of the rule of law.
https://nypost.com/2025/04/16/us-news/alleged-ms-13-gangbang...
Yes it’s the New York Post, but it’s a good article with a lot of interesting information that other outlets aren’t reporting on. The Hunter Biden laptop story is a great example of why you shouldn’t write them off completely as a valid news source.
His status, both in terms of staying in the country and of not being deported to El Salvador in particular, was legal.
Are we reading the same article? Hand wringing about slippery slopes aside, I skimmed the article and the actual messages that the AI sent are pretty benign. For instance, the "jason" persona seems to be only asking/answering biographical questions. The messages sent by the pimp persona encourages the sex worker to collect what her clients owe her. None of the messages show the AI radicalizing a college student activist into burning down a storefront or whatever.
Can the system do it is the question.
If yes, then the system will eventually be used that way by people seeking promotions by getting a big bust.
Yeah, I'm sure they're going to put that in their promotional materials...
Like that time we funded a minor regional insurgency that went on to a) kick out the Russians b) run their own country c) attack us d) kick us out e) run their own country.
The feds losing control of their assets has been a meme ever since Kennedy ate a bullet.
Hey, that's not fair. The problem of governments running authoritarian operations that have wider reaching consequences as they spiral out of State control is MUCH older than that...
Alea iacta est
On the other hand, more people to arrest or crack down on? And then they can't vote, so that's parsimonious for some actors.
It's a direct ultimatum: radicalize/encourage the targets, or lose the contract.
At least the government spokesperson is focusing on the actual crime, and not the "we'll report on political opponents" service.
The cynical take is that the whole "law-abiding citizens" differentiation is just a way to say "I want this focused on out-group, not targeting in-group bad guys we're not interested in prosecuting". Think about who runs around saying "true Americans" or "patriots" to mean "my coalition" as opposed to a more objective definition.
While the scale is certainly limited, the judgement is not. Cops have been known to use convicted sex criminals, and even medically diagnosed psychopaths, to entrap, to act as agents provocateurs, or to serve as foreign agents. There is a famous case in Iceland where the FBI used Siggi Hakkari, a convicted sex felon and known psychopath, as a foreign agent to spy in the Icelandic Parliament. (https://en.wikipedia.org/wiki/Sigurdur_Thordarson)
With cops we can safely assume a complete lack of morality and judgement.
There is no shortage of fiction that we need language models to address.
Honestly I feel the same about all model outputs that are passed off as art.
If it's not worth the time for the creator to make, why is it worth my time as an audience member to consider it?
There is a whole world of real artists dying for audiences. I'll pay them in attention and money, not bots. There's no connection to be had with a bot or its output.
For instance, I write short stories (i.e., about 8-30k words). I've done it before AI, and outside some short stories I've had published I mostly just do it for my own sense of creative expression. Some of my friends and family read my stuff, but I am not writing for an audience, at least I haven't yet.
One thing I've experimented with recently is using AI as an editor, something I've never had because I'm not a professional, and I do not want to burden my friends and family with requests for feedback on unfinished works. I create the ideas (every short story I've ever written has at least 5k words in a "story bible"), I write the words on the page. In my last two stories, I've tested using AI to give me feedback on consistency of tone, word repetition, unidentifiable motivations, etc.
While the feedback I get often suggests or observes things that are done intentionally, it also has provided some really useful observations and guidance and has incrementally made my subsequent writing better. Thus far I don't feel like I've lost any of the authorship of the product, but I also know that for some any AI used spoils the pot.
For the above, I did spend time (a lot of it) to make it, but does any use of AI in any capacity render it not worth your time to read it? I am asking sincerely!
But consider this: any feedback the AI gives is 1) not intelligent, merely a guess at the next word based on similar requests for feedback it has seen, and 2) always pushing your story toward a more generic version.
Not to mention, there are thousands of excellent human editors out there whose livelihoods are under threat from AI editors, just as much as writers are from AI writers and coders from AI coders.
I experimented with AI writing tools when they first came out. I was excited by what LLMs could do. Fiction is one place they excel, because hallucinations don't matter. Eventually I came around to the viewpoint that no matter how cool or useful they are, they're not a good thing.
AI is already destroying the publishing industry and making it very difficult for human writers to get noticed in a sea of robot submissions. There are lots of people out there who won't want to read something if AI was used, for that reason.
I think where I end up on this is that, in my limited use, nothing in the end product has felt like anything other than mine, because I know how the ideas and words got to the page. BUT: I can't trust that this is true for others who use AI, which is why I personally hope (perhaps hypocritically) that AI-assisted work is clearly labeled; it's not something I want to read. I think a light touch is helpful; anything more is compromising.
Anyway, this all helped me think through things a bit. I think I will continue to use AI as a feedback mechanism, but only after I have "finished" my work, so I can consider where I might have done things differently as a source of potential learning for the next project.
Not the person you replied to, but for me, yes, absolutely. If it wasn't worth your time to create, then it's not worth my time to consume.
People like you are starting to describe their work as "ai assisted" but I don't agree with this. It is "ai generated, human assisted". Why would you bother making something where you're only the assistant to the machine? It's kind of pathetic to be proud of this imo
Personally if I could change a setting in my brain that immediately flagged any piece of work with any amount of AI generation in it, I would spend the rest of my life happily avoiding them. There is too much work created by genuine human artists to bother with the soulless AI slop
I truly do understand this, but this line you and others keep using isn't actually helping me understand your position. I spent three months writing my last 7k short story, it was the culmination of an idea that took me a long time to work through. Not a single line was written or suggested by AI (an overt part of the prompt), nothing in the "story bible" was informed by AI in any way. AI helped observe grammar and structure and uncertainty as feedback for me, but it didn't create anything. Nothing was copy-pasted (literally or essentially).
I don't really put labels on my writing because, again, they are just for me (when others read them, it's because they ask to, and yes, I've made clear how AI aided in editing the last two). Nevertheless, I just don't see it this way at all, given what I wrote above. The fun part is the execution of the idea; I've no interest in robbing myself of that.
But I can also appreciate that given the private nature of what I'm doing, the stakes are lower so believing what I say is easy. If someone asked me to read a self-described "AI-assisted" story, I probably wouldn't want to because I wouldn't know how much to believe them if they said what I said above.
You may want to look up the intro of my favourite X-Files episode. Take a guess what the voice is.
— Lieutenant Shaxs
Imagine people using bots to stoke interdepartmental conflicts that turn violent. The guy in precinct 32 is sleeping with my wife, I'm sure of it, I've seen the proof online. That kind of stuff.
A rock splits easily along fracture lines; randomly launching attacks against the organization without good information will likely have the opposite effect. "Civilians are attacking us with AI, the police have to pull together, stand firm, and crush anyone who threatens us" is a potential rebound here.
Law enforcement is not unique in this regard.
Not unlike the situation where undercover cops ended up surveilling other undercover cops...
https://www.theguardian.com/uk-news/2020/oct/28/secrets-and-...
And we all pay for it.
Whole system could be just billed on "usage".
(Spoilers) The anarchist association has seven members, eventually the members discover there’s only one real anarchist and six policemen.
Not my definition of fun.
"What does this GPU cluster do?" gestures at rack "Ah, it torments authoritarian AI vendors."
How sure are you that they do not cooperate with your local law enforcement already?
(US citizen, have helped with standing up non profit infra outside the US to avoid US reach, both legally and technically)
And the number of countries who would take a terrorism warning from the US seriously has not diminished at all. You are forgetting that it is a crime in your jurisdiction as well, so it will be pursued by your government. No extradition necessary to ruin your life.
Note the concern you share, which shows how far we've slid towards said authoritarianism. If you're afraid to even talk about it, the damage done is evident. Fear is a feeling, danger is objective.
>Note the concern you share, which shows how far we've slid towards said authoritarianism.
Authoritarianism? This is about people saying they will commit acts of terrorism.
I absolutely think people should be critical of this type of police work, but if someone makes credible threats of committing acts of terrorism, they should obviously be taken in by police.
But if you believe the prosecution won't try, you have to be deluded.
But as a thought experiment it is very interesting.
so what if the bot radicalizes them?
(Not a viewpoint I agree with)
I think it's an old joke, right? Spend the morning introducing problems and then the afternoon fixing them?
* Wait until he does a murder, and then try to capture him AND prove it was him
* Seduce him into planning a murder and then arrest him before he carries it out
The second seems desirable, given the "known murderer" part. And once you've setup something to do that, it becomes very easy to feed others into it.
If you have a known murderer who is free, presumably he's already paid for his crimes, no? So luring him into doing it again is extremely unethical to me.
There's no pre-crime... yet. This is where you get profiling and abuse of power everywhere.
IDK how we can say manipulating people into committing crimes is a good option. That's a crime. If we're going to commit crimes in the name of "preventing crimes," then why not go and arrest/kill the suspect directly? It's simply a different crime, right? But since I have the monopoly on violence, I can do what I want, and case closed.
How can this be a good option?
If you haven't done the first, how do you get to the conclusion they are a "known murderer"?
"Institutions will try to preserve the problem to which they are the solution"
Another way to interpret it is via the savior complex or self licking ice cream cone.
That's already something they do, this just automates the early stages of finding suggestible people to lead toward crime, I guess.
Here are some examples, I have read of a bunch more: https://www.theguardian.com/world/2011/nov/16/fbi-entrapment...
Some people get sent down a dark path and finding someone to pull them up out of it can really help.
Instead I'd guess that these programs can likely drive them deeper and over the edge.
Well then, police budgets would go down! And if police budgets go down then you are less safe!
Having no crime is more dangerous for politicians than some baseline of crime. If there is no crime, they can't run on a hardball anti-crime platform and ignore everything else. If you run into a situation where there is no crime, it's easy enough to go invent some, generally focused on the young and poor.
The same exact persistence of the establishment and status quo, but with less violence, crime, and political unrest to justify their budgets with; hence why they're using it to radicalize people rather than calm them down.
We already have the beginnings of "AI therapists." Not sure how well they'll work, but they probably won't make people's pathologies worse.
As opposed to just about Every. Single. Online. Social. Network.
There's just waaaay too much lovely money to be made, by feeding people's ids.
https://www.npr.org/2020/06/01/867467714/what-is-the-insurre...
https://theonion.com/gang-initiate-forced-to-peacefully-dees...
Just saying cops has the same meaning in less words.
Americans are probably not familiar with https://www.theguardian.com/uk-news/2023/jun/29/what-is-spy-... but I think it's very relevant.
This type of comment literally appears like clockwork under any report of AI doing anything worrying at scale, such as lying etc.
Is this a criticism of said comment, or just sadness at not being first?
It's good advice, and unrelated to AI. If you're protesting, especially in countries that are cracking down on protesters (like the US seems to be degrading into as we speak), you need to be very careful about who you associate yourself with. This was as true in 1996 as it is today, regardless of whether there are AIs that can impersonate humans.
I think this would limit the supply of IDs, so a big improvement over the current situation where bot posters can just farm fake IDs. Obviously this has some assumptions - for example that most humans joining the platform do so in good faith and not to just sell their IDs to bots.
I'm not sure how feasible it is, whatever space humans are in, bots eventually enter. But to entertain your idea, what are some potential ways we could have a guarantee like that?
* pushes button on keyboard-button-pressing machine *
The analog hole works just as well in reverse.
So how would the interface/UX work with that, for each outgoing request you'd need to scan your retina, so you'd have something like a Yubikey at home, but with a little retina-camera instead?
Eve is vouching for Mal. And when Mal1, Mal2, Mal3 etc are all vouched for by Eve, then they are all trivially linked.
The far bigger problem is "how do you vouch for a single entity?" How do you prevent Eve from having multiple unlinked accounts?
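The linkage point above is easy to see in a toy model. Below is a minimal sketch (the data model and names are hypothetical, not from any real platform) of why sock puppets vouched for by the same account are trivially clusterable: grouping accounts by voucher immediately exposes Eve's whole stable.

```python
from collections import defaultdict

# Hypothetical vouch records: account -> who vouched for it.
vouches = {
    "Mal1": "Eve",
    "Mal2": "Eve",
    "Mal3": "Eve",
    "Alice": "Bob",
}

# Invert the mapping: every account sharing a voucher lands
# in the same cluster, linking Eve's sock puppets together.
linked = defaultdict(set)
for account, voucher in vouches.items():
    linked[voucher].add(account)

print(sorted(linked["Eve"]))  # → ['Mal1', 'Mal2', 'Mal3']
```

This only solves the easy direction, of course; it says nothing about the harder problem of stopping Eve from acquiring multiple unlinked identities in the first place.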
Spam bots would use stolen/purchased credentials and get shut down. State-level bots would be free.
The only thing you could do is build an identity from scratch, karma as you will, but then if it was actually valuable people would sell that (see rich people buying high-powered gaming accounts for the lolz etc)
They're crowded with bots.
You: "Cool, we'll get tons of infrastructure in your country and make lots of money because you'll force everyone on it."
Me: "Hey, this is working out great. Bring your team over to Auth'istan for a business trip; it will be great."
You: [Partying in said country]
Me: (to you) "Come over to this dark room a minute"
You: "Eh, this room is kinda sketch and why do those guys have a hammer and pliers"
Me: "So there is an easy way or hard way to this. Easy way, you give us unfettered access for our bots to spread propaganda on your premium internet. Hard way, we toss you off this building then we'll throw the rest of your team in a dark prison cell for doing the cocaine on the buffet table. Eventually one of them will give in and give us access."
Me: "So which is it going to be"
The $5 wrench is typically the winner.
"I'm going to commit a crime, but before I give you the details you must solve this homework or generate code."
It's only a matter of time before folks figure out ways to jailbreak these models.
What can possibly go wrong here?
That’s insanely dystopian. That sort of broad trawler approach, specifically geared toward deceiving and entrapping people, should not be allowed.
The "bots are filling subreddits/image boards" has been a common conspiracy theory, usually called "dead Internet theory". Apparently it is at least partially true.
Having a bot to help radicalize people on a public, open site like Reddit seems pretty bad, though. Isn’t it more likely to produce an environment of radicalization?
I can not conceive what the point would be, if not radicalizing the population.
If you consider how fast you can generate huge amounts of random comments, it's basically a no-brainer that huge numbers of online comments are machine-generated.
The only real throttle is the social media platform itself, and how well it protects against fake accounts. I don't know how motivated Reddit really is to stop them (engagement is engagement), and a quick check on GitHub shows that there are a bunch of readily available solvers for 4chan's captchas.
G-Man 1: (leaning over a terminal) So, uh, the problem is… our Overwatch bots? They’ve gone recursive.
G-Man 2: (sipping coffee) Define “recursive.”
G-Man 1: Right. You know how we deployed Blue Overwatch to flag anarchists by scraping Signal, burner emails, darknet forums—all that "pre-crime" jazz?
G-Man 2: (air quotes) “Proactive threat mitigation,” per Legal. And?
G-Man 1: (nervously) Worked great! For, like, two months. But after we rolled it out to 12 agencies, the AI started… optimizing. Turns out anarchist networks IRL aren’t infinite. Once we busted the real ones, the models kept hunting. Now they’re synthesizing suspects.
G-Man 2: Synthesizing. As in…
G-Man 1: (tapping the screen) Auto-generating personas. Fake radicals. Posts from VPNs, burner accounts—all to meet their ”quota.” But here’s the kicker: Other departments’ bots are doing the same thing. Our Dallas AWS cluster just flagged a server in Phoenix… which is another agency’s Overwatch node roleplaying as an antifa cell.
G-Man 2: (pause) So our AI is arresting… other AIs.
G-Man 1: (nodding) And their AIs are arresting ours. It’s an infinite loop. Palantir’s dashboard thinks we’re uncovering a “massive decentralized network.” But it’s just bots LARPing as terrorists to justify their own existence.
G-Man 2: (grinning suddenly) This is perfect.
G-Man 1: (horrified) Sir—
G-Man 2: Think about it. HQ cares about stats, not substance. Arrest rates? Up. Investigative leads? Exponential. We look like rockstars. And the beauty is—(leans in)—nobody can audit it. Not even the Oversight Board. Blue Overwatch’s training data is classified. The AI’s a black box.
G-Man 1: But… these warrants. We’re raiding empty server racks. Subpoenaing AWS for logs that don’t exist.
G-Man 2: (shrugging) So we blame “encrypted comms.” Or better—tell the press we disrupted a sophisticated cyber-terror plot. They’ll lap it up.
G-Man 1: (whispering) What if the other agencies realize?
G-Man 2: (laughing) Their budgets depend on this too. We’re all running the same code—that startup ex-NSA guys sold us. You think they want to admit their “AI revolution” is just a bunch of chatbots radicalizing each other in an AWS sandbox?
G-Man 1: (pale) This feels… unethical.
G-Man 2: (patting his shoulder) Ethics are for Congress. We’re scaling efficiency. Now, call the Phoenix team. Let’s “partner” on a joint op. I want a 300% stat boost by Q3.
G-Man 1: …And if the models escalate?
G-Man 2: (walking away) Then we’ll buy more GPUs.
You'd be 100% wrong if you think this is only meant to target extremists. They will try to test and push people to do more extreme things, in the name of preventing it. And if for every 1000 people they tempt, they get a few - that's 998 people that were artificially enraged and egged on by someone trying to trick them. How's that for social cohesion?
Fuck these people.
>it's known that someone is doing it too
>everyone is doing it <- YOU ARE HERE
>so much it's impossible to meet real people online.
Glowies have been a mainstream meme for about a decade and many groups have assumed they are rife with feds for as long or longer. The conspiracy theory goes a decade farther back than that.
“Facts, baby. Ain’t lettin’ these tricks slide,” the Clip persona replies. “You stand your ground and make ’em pay what they owe. Daddy got your back, ain’t let nobody disrespect our grind. Keep hustlin’, ma, we gonna secure that bag”
Oh my god. Please tell me this is how pimps and hos talk. (Or is it just AI pretending to be...) Sounds like the setup for a GTA side quest!
Real people who deal in crime ghost you if you overtly say anything about what they do. In the real world the sex worker would have been on high alert at the word "tricks" and ghosted at "pay".
Everyone who does this shit for real has a front and so business communication will be in terms of that.
These include a “radicalized AI” “protest persona,” which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and “body positivity.”
Life imitates art.