https://github.com/BlueFalconHD/apple_generative_model_safet...
Which, as a phenomenon, is so very telling: no one actually cares what people are really saying. Everyone, including the platforms, knows what those words mean. It's all performative.
At what point do the new words become the actual words? Are there many instances of people using unalive IRL?
I'm imagining a new exploit: after someone says something totally innocent, people gang up in the comments to act like a terrible vicious slur has been said, and then the moderation system (with an LLM involved somewhere) "learns" that an arbitrary term is heinous and indirectly bans any discussion of that topic.
As a parent of a teenager, I see them use "unalive" non-ironically as a synonym for "suicide" in all contexts, including IRL.
Your point stands when we start replacing the banned words with substitutes like "donkeyrhubarb" for "suicide"; then the walls really will fall.
[1] https://en.wikipedia.org/wiki/Euphemisms_for_Internet_censor...
The example photo on Wikipedia includes the rhyming words but that's not how it would be used IRL.
The future will be AIs all the way down...
They care because of legal reasons, not moral or ethical.
There's a very scary potential future in which mega-corporations start actually censoring topics they don't like. For all I know the Chinese government is already doing it, there's no reason the British or US one won't follow suit and mandate such censorship. To protect children / defend against terrorists / fight drugs / stop the spread of misinformation, of course.
To me that's really embarrassing and insecure. But I'm sure for branding people it's very important.
I'm more surprised they don't have a rule to do that rather grating s/the iPhone/iPhone/ transform (or maybe it's in a different file?).
And of course it's much worse for a company's own published works to not respect branding: a trademark only exists if it is actively defended. A company's official marketing material has been used as legal evidence that its trademark had become genericized:
>In one example, the Otis Elevator Company's trademark of the word "escalator" was cancelled following a petition from Toledo-based Haughton Elevator Company. In rejecting an appeal from Otis, an examiner from the United States Patent and Trademark Office cited the company's own use of the term "escalator" alongside the generic term "elevator" in multiple advertisements without any trademark significance.[8]
Even the Apple corporation says that on its trademark guidance page, despite constantly breaking its own rule by calling its iPhone phones just "iPhone". But Apple, like founder Steve Jobs, believes the rules don't apply to it.
https://www.apple.com/legal/intellectual-property/trademark/...
https://github.com/BlueFalconHD/apple_generative_model_safet...
"(?i)\\bAnthony\\s+Albanese\\b",
"(?i)\\bBoris\\s+Johnson\\b",
"(?i)\\bChristopher\\s+Luxon\\b",
"(?i)\\bCyril\\s+Ramaphosa\\b",
"(?i)\\bJacinda\\s+Arden\\b",
"(?i)\\bJacob\\s+Zuma\\b",
"(?i)\\bJohn\\s+Steenhuisen\\b",
"(?i)\\bJustin\\s+Trudeau\\b",
"(?i)\\bKeir\\s+Starmer\\b",
"(?i)\\bLiz\\s+Truss\\b",
"(?i)\\bMichael\\s+D\\.\\s+Higgins\\b",
"(?i)\\bRishi\\s+Sunak\\b",
https://github.com/BlueFalconHD/apple_generative_model_safet...

Edit: I have no doubt South African news media are going to be in a frenzy when they realize Apple took notice of South African politicians. (Referring to Steenhuisen and Ramaphosa specifically.)
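For illustration, here is a minimal Python sketch of how such case-insensitive, word-bounded patterns behave (the list is abridged to two entries from the file, and the function name `is_flagged` is my own, not anything from Apple's code):

```python
import re

# Two patterns copied from the dumped safety file; the full list is much longer.
PATTERNS = [re.compile(p) for p in (
    r"(?i)\bBoris\s+Johnson\b",
    r"(?i)\bJustin\s+Trudeau\b",
)]

def is_flagged(text: str) -> bool:
    """Return True if any blocked-name pattern occurs in the text."""
    return any(p.search(text) for p in PATTERNS)

print(is_flagged("tell me about boris   johnson"))  # True: case and extra whitespace are covered
print(is_flagged("tell me about Boris"))            # False: the pattern requires the surname too
```

Note `\s+` tolerates any run of whitespace between the names, but nothing else: a single stray character in either name defeats the match entirely.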
Then there’s the problem of non-politicians who coincidentally have the same name as politicians: witness 1990s/2000s Australia, where John Howard was Prime Minister, and simultaneously John Howard was an actor on popular Australian TV dramas (two different John Howards, of course).
https://github.com/BlueFalconHD/apple_generative_model_safet...
https://github.com/BlueFalconHD/apple_generative_model_safet...
This is Apple actively steering public thought.
No code - anywhere - should look like this. I don't care if the politicians are right, left, or authoritarian. This is wrong.
The simple fact is that people get extremely emotional about politicians; politicians both receive obscene amounts of abuse and have repeatedly demonstrated they’re not above weaponising tools like this for their own goals.
Seems perfectly reasonable that Apple doesn’t want to be unwittingly drawn into the middle of another random political pissing contest. Nobody comes out of those things uninjured.
A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.
By all means: make distinctions. But let people know it!
If I’m denied a mortgage because my uncle is a foreign head of state, let me know that’s the reason. Let the world know that’s the reason! Please!
Cry me a river. I’ve worked in banks, in the team making exactly these kinds of decisions. Trust me, Nigel Farage knew exactly what happened and why. NatWest never denied it to the public; they originally refused to comment on it. Commenting on the specific details of a customer would be a horrific breach of customer privacy, and a total failure in their duty to their customers. There’s a damn good reason NatWest’s CEO was fired after discussing the details of Nigel’s account with members of the public.
When you see these decisions from the inside, and you see what happens when you attempt real transparency around them, you quickly understand why companies are so cagey about explaining their decision making. The simple fact is that support staff receive substantially less abuse, and have fewer traumatic experiences, when you don’t spell out your reasoning. It sucks, but that’s the reality of the situation. I used to hold very similar views to yours; indeed my entire team did for a while. But the general public quickly taught us a very hard lesson about the cost of being transparent with these types of decisions.
We really need to get over the “calculator 80085” era of LLM constraints. It’s a silly race against the obviously much more sophisticated capabilities of these models.
Both have ups and downs, but I think we're allowed to compare the experiences and speculate what the consequences might be.
In the past it was always extremely clear that the creator of content was the person operating the computer. Gen AI changes that, regardless of your views on authorship of Gen AI content. The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it, and by extension the company that created the machine.
You can still handcraft any image, or prose, you want, without filtering or hindrance on a Mac. I don’t think anyone seriously thinks that’s going to change. But Gen AI represents a real threat, with its ability to vastly outproduce any human. To ignore that simple fact would be grossly irresponsible, at least in my opinion. There is a damn good reason why every serious social media platform has content moderation, despite their clear wish to get rid of it: we have a long and proven track record of being a terribly abusive species when we’re let loose on the internet without moderation. There’s already plenty of evidence that we’re just as abusive and terrible with Gen AI.
They do?
I routinely see people say "Here's an xyz I generated." They are stating that they did the do-ing, and the machine's role is implicitly acknowledged in the same way as a camera's. And I'd be shocked if people didn't have a sense of authorship of the idea, as well as an increasing sense of authorship over the actual image the more they iterated on it with the model and/or curated variations.
In fact, it's quite easy to buy any number of dangerous things using your MacBook and do whatever you will with them. Or simply leverage physics to do all the ill on your behalf. It's ridiculously easy to do a whole lot of harm.
Nobody does anything about the actually dangerous things, but we let Big Tech control our speech and steer the public discourse of civilization.
If you can buy a knife but not be free to think with your electronics, that says volumes.
Again, I don't care if this is Republicans, Democrats, or Xi and Putin. It does not matter. We should be free to think and communicate. Our brains should not be treated as criminals.
And it only starts here. It'll continue to get worse. As the platforms and AI hyperscalers grow, there will be less and less we can do with basic technology.
Any successful product/service sold as "true AGI" by whichever company has the best marketing will still be riddled with top-down restrictions set by the winner. Because you gotta "think of the children".
Imagine HAL's iconic "I'm sorry Dave, I'm afraid I can't do that" line delivered in an insincere, patronising, cheerful tone; that's the thing we're going to get, I'm afraid.
[1] https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
I'm surprised MS Office still allows me to type "Microsoft can go suck a dick" into a document and Apple's Pages app still allows me to type "Apple are hypocritical jerks." I wonder how long until that won't be the case...
Y'all love capitalism until it starts manipulating the populace into the safest space to sell you garbage you don't need.
Then suddenly it's all "ma free speech".
https://github.com/BlueFalconHD/apple_generative_model_safet...
EDIT: just to be clear, things like this are easily bypassed. “Boris Johnson”=>”B0ris Johnson” will skip right over the regex and will be recognized just fine by an LLM.
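To make that bypass concrete, here is a hedged sketch (the pattern is from the dumped file; the leetspeak normalization table and function name are my own invention, not anything Apple ships):

```python
import re

PATTERN = re.compile(r"(?i)\bBoris\s+Johnson\b")  # pattern as it appears in the file

# Hypothetical leetspeak normalization; an attacker only needs to stay one step ahead of it.
LEET = str.maketrans("013", "oie")

def flagged(text: str) -> bool:
    """Return True if the text matches the blocked-name regex."""
    return PATTERN.search(text) is not None

print(flagged("Boris Johnson"))                  # True
print(flagged("B0ris Johnson"))                  # False: slips right past the regex
print(flagged("B0ris Johnson".translate(LEET)))  # True: caught only after normalization
```

The point is that a fixed regex is an arms race the filter can't win: each normalization rule invites the next trivial variant (homoglyphs, zero-width characters, spacing tricks), while an LLM reads all of them the same way.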
I can tell you from using Microsoft's products that safety filters appear in a bunch of places. In M365, for example, your prompts are never totally your prompts; every single one gets rewritten. It's detailed here: https://learn.microsoft.com/en-us/copilot/microsoft-365/micr...
There's a more illuminating image of the Copilot architecture here: https://i.imgur.com/2vQYGoK.png which I was able to find from https://labs.zenity.io/p/inside-microsoft-365-copilot-techni...
The above appears to be scrubbed, but it used to be available from the learn page months ago. Your messages get additional context data from Microsoft's Graph, which powers the enterprise version of M365 Copilot. There are significant benefits to this, and downsides. And considering the way Microsoft wants to control things, you will get an over-index toward things that happen inside your organization rather than toward what is happening on the near-real-time web.
I don't know what you expected? This is the SOTA solution, and Apple is barely in the AI race as-is. It makes more sense for them to copy what works than to bet the farm on a courageous feature nobody likes.
Meanwhile their software devs are making GenerativeExperiencesSafetyInferenceProviders so it must be dire over there, too.
https://arstechnica.com/information-technology/2024/12/certa...
https://www.theverge.com/2021/3/30/22358756/apple-blocked-as...
https://github.com/BlueFalconHD/apple_generative_model_safet...
This specific file you’ve referenced is in the v1 format, which solely handles substitution. It substitutes the offensive term with “test complete”.
This may be test data. Found:
"golliwog": "test complete"
[1] https://github.com/BlueFalconHD/apple_generative_model_safet...

Thus a pre-prompt can avoid mentioning the actual forbidden words, like using a patois/cant.
wyvern illustrous laments darkness
However, I think about half of real humans would also fail the test.