I discovered I wasn't alone, as many other Linux users with Radeon GPUs and 16GB+ VRAM were experiencing similar problems. We created a GitHub issue to track the problem and try to find a solution: https://github.com/ValveSoftware/csgo-osx-linux/issues/2630
After some investigation, we found that Valve was punishing Linux users with certain hardware configurations (Radeon cards with >=16GB of VRAM, which were quite new at the time).
Eventually, after a user reached out to gaben directly, the issue was fixed: https://github.com/ValveSoftware/csgo-osx-linux/issues/2630#...
I suspect this was because Valve was preparing to launch the Steam Deck, and gaben wanted to ensure that Linux users had a better experience with the device (just a guess).
Valve makes a significant amount of money from in-game transactions, and some of their practices around this are shady. Issues like kids using their parents' credit cards, the gambling industry built around in-game items, and the potentially addictive nature of colorful virtual items marketed towards kids are valid concerns.
So, while gaben might be nice, it's unlikely that this gets in the way of Valve's drive to maximize profits in every way they can legally get away with.
Last year Valve updated their code of conduct and effectively banned gambling. They've also been known to send cease-and-desist orders to various CS:GO gambling sites.
So I wouldn't say that they support it, though for a long time they weren't actively combating it either.
However, it is indeed the case that Valve has introduced greater and greater restrictions on inventory handling. The measures obviously go far beyond just counteracting possible scammers and phishing. Still, I am inclined to believe they could've implemented all these features many years ago, if only they had wanted to. I highly recommend the videos. You can maybe skip the first one; it's mostly about casino owners' drama.
[1]: https://www.youtube.com/watch?v=q58dLWjRTBE
https://www.seattletimes.com/business/bellevue-game-maker-va...
Not really. Back when this was a big story (around 2016-2017) they sent out some cease and desists to a number of the big CS:GO gambling websites but many did not comply and there was no follow-up. To this day many of those original sites are still around and have since grown. Essentially Valve (and the skin market as a whole) benefit so greatly from this grey-market that there is no incentive for them to stop it. This is covered in part 2 of Coffeezilla's latest series investigating CS:GO gambling [1]
Wait, how does punishing Linux users ensure Linux users have a better experience?
Interesting though.
How do you know what your trustfactor is? Or were you just speculating because the quality of games was lower? As far as I understand TF is hidden specifically so it can't be gamed.
EDIT: formatting x 2
Online multiplayer games must (yes must) take place on servers with human admins. Admins should be present for a majority of the time any players are playing.
Ideally with admins the players recognize. Bonus points if players themselves can perform some moderation when no admin is present (votekick, voteban etc). There is no difference between kicking cheaters and kicking people who are abusing chat etc. Obviously this means that "private" or "community" servers are the only viable types of server for online multiplayer games.
This process of policing cheaters and other abuse can not be something that is done via a reporting system and handled asynchronously. Kicking/banning must be done by the admins of the game, and it must be handled quickly.
If you are considering buying/playing an online multiplayer game and it doesn't have this functionality (e.g. the only way to play online is via matchmaking on servers set up by the publisher, and the only way cheaters and chat abusers are policed is via some web form) then please, avoid that game. Vote with your wallet.
The sheer scale of this arbitrary requirement is hilarious.
I have no idea why this changed in more recent games. While every other online thing moved to have users create content and self-moderate, games for some reason moved in the other direction.
Trying to also account for total players in every other competitive game seems like an impossible ask.
I'm not sure why this seems impossible to you? As the number of players increases, one would expect the number of players willing to act as an admin/moderator to increase linearly.
Typically admins are players also - that's why they choose to host a server.
I am still playing Quake Live, and it's all user-run servers. Hacks and cheats can be a problem, but users get banned via their Steam account, and there's a real cost (to buy the game) if you want to come back.
My point was really just that 10% is 10%, regardless of scale.
I thought the reasons were basically:
(a) accessibility - running a game server requires some technical knowledge, and if you're doing it from home, possibly changes to your network (and home connections likely won't have as good of routing)
(b) cheat detection - since the server is run by the game developers, it's easier to find misbehaving clients and ban them across all servers.
(c) DRM - it's harder to crack a game that has to sign-in to cloud servers.
You either have a server and they come to you, or you don't and you message people. If they/you feel like the other side is hacking, go next. There were tons of servers where you had admins all the time.
Human admins still can only see the obvious spin/aimbots.
Companies took this from us as hosting your own servers is rarely an option these days and you rely on the company never shutting them down.
This here is why I find matchmaking is such a frustrating experience at high ELO compared to the old times. With an IRC scrim you aren't held hostage by blatant cheaters, you just leave - but on matchmaking, you cannot choose to forfeit and have to waste 30 minutes or be penalised.
I only play with a 5 stack so us choosing to leave doesn't ruin anyone's experience. I kept two CS accounts (same rank) purely so that we could skip the cooldown and requeue if the opponent had blatant cheaters/spinbots.
> Ideally with admins the players recognize.
Let's just make each game have a referee that is visible to everyone, and then after each infraction, the play can be reviewed by a video assistant. They can even have a group that does nothing but moderate the referees.
Or, we could just have games
If it's a server with admins, I can contact them on Discord and get them banned pretty quickly. As a system it worked pretty well; there were some badmins, but there were plenty of servers so you could just join another. Though it's not really compatible with the matchmaking-style games we have today.
1. How many active Apex/whatever games there are at any one time
2. How many users will just report anyone they die to as a cheater
Although I recommend you watch the Valve presentation on their AI anti-cheat if you haven't already. Their work is quite interesting, and they claim they catch 99% of cheaters.
Although obviously there are also very subtle ways to cheat, too.
But that's easy. The tricky part is catching the cheaters _without_ also catching non-cheaters.
To be fair though, real life property is only slightly less ephemeral.
You will almost certainly regret it badly on that proverbial death bed, and most probably well before that; life goes darn fast, and the feeling of losing out on the most important aspect of our existence, how well we live our lives, is soul crushing. It's not that gaming hard is bad per se (apart from addictions and the abysmal effect on health), but you are losing out on much better aspects of life which are just out there for the taking.
Or don't take my word for it, just check what old people regret in their lives. Sure, gaming is not there yet, but it will find its place firmly among too much work and not spending enough time on family and relationships, which are consistently at the top.
Is it? Can you share peer reviewed sources? In my experience, it's been quite the opposite.
Unlike the blogpost, I just decided I would just never spend any money on an Activision product ever again. It's what everybody should do.
Unfortunately, aim assist devices for consoles are very widespread now and a big problem for competitive gaming.
>>I had never even had a warning or complaint for any behavior whatsoever
That's the gold standard in the industry though: you don't warn (suspected) cheaters, so as not to give them the opportunity to adjust their tactics. Sorry you got caught by this unfairly.
Is this supposed to do any good? The actual cheater is still getting a signal that they've been detected, because they get banned. Then they figure out how, make a new account and go back to cheating.
Meanwhile the normal user is both confused and significantly more inconvenienced, because their rank etc. on the account you falsely banned was earned legitimately through hard work instead of low-effort cheating.
So....yes. But there are mitigating tactics around this, I really recommend looking into it because it's a fascinating topic. As the simplest thing - you don't ban cheaters the moment they are detected to not give off how you detected them. That's why Activision bans people in waves and all at once, even though they know some people are cheating and still active. Unfortunately a lot of people are paying for cheats nowadays, and the cheat makers usually have some kind of refund policy where if you get detected you get your money back - games companies want to inconvenience those buyers as much as possible, so you can't claim your refund straight away because hey, the game worked for a good while even while you were cheating, must have been something else :P
>>Meanwhile the normal user is both confused and significantly more inconvenienced
Yes, which is why the aim is to have 0 legitimate players getting caught by this, obviously.
One thing this is missing is that forcing addicted players to buy the game again helps bring in cash flow. So what if a few legit people got wrapped up; as long as enough buy back in, the equation works out, and the shadier game companies (usually the big ones) will go ahead and never rescind a ban.
>> So what if a few legit people got wrapped up; as long as enough buy back in, the equation works out
I've never seen any data that would support this. It just doesn't happen - if you accidentally ban a legit player they just get really pissed off and there's about 0% chance they will give you money again. Which is why you try extremely hard to not do that.
I left the industry because of that, and the other things like loot boxes, matchmaking tuned for profit, and the push for microtransactions. It's a terrible place.
>>It's a terrible place.
Some companies sure.
And yes sorry I realize I said "no one does this" - let me correct myself to say that in my experience at a couple big publishers this isn't a strategy anyone pursues because it's not worth the losses to your legit playerbase and reputation. But there might be companies that do this, I concede.
You can't just say that though, you have to actually do that, which is apparently not what's happening.
Does that mean the system is foolproof? No, of course not. But banning honest paying users is a huge risk to any business - so obviously no one wants to do that, every system like this errs on the side of caution by default for that reason alone.
And obvious disclaimer - I can only comment on my own experiences, I have no idea what every company out there is doing.
It's mostly not about the appeals process. You want to avoid the false positive accusations to begin with.
> and then we pull up the ban report for his account and we clearly see a screenshot from his machine where he's running cheat engine with cheats for our game enabled.
Hypothetically things like this can happen where someone is reusing passwords that end up in a data breach and then some script kiddie gets their hands on it and wants to dip their toes into some cheating without risking their own account. Then you have the original account holder screaming at you because they know they didn't cheat.
Or they could just be cheaters who doth protest too much.
But there are ways you can at least try to distinguish these things, e.g. did the cheating happen on the same PC or IP address the account normally uses?
> Does that mean the system is foolproof? No, of course not. But banning honest paying users is a huge risk to any business - so obviously no one wants to do that, every system like this errs on the side of caution by default for that reason alone.
It's apparently failing enough that this thread has multiple people saying they've experienced false positives, and it doesn't seem like they're interested in getting their accounts back.
https://battlelog.co/forums/topic/12037-sorry-frost-you-can-...
(Frost is one of the owners of the site that sells cheats - he offers refunds and compensation whenever anyone has issues with their cheats)
An additional benefit is that this can include multiple cheat programs and versions in one ban wave, so it may be harder to narrow down exactly what the flaw was. That's the why for no warnings (or explanations) - false positives and recourse if mistakenly flagged is another matter entirely.
That seems like it could go the other way. There are five cheat programs that each have a dozen versions and now you know that everybody using program A and D got banned, the people using program C and E didn't, and the people using program B got banned but only if they were using version 1.2 or lower and not exclusively version 1.3 where they added a new anti-detection method that A and D don't use and C and E do. Now they know what to do.
Whereas if you ban them as soon as you can detect them, the people using program B get banned before version 1.3 is even out, they have to issue all of those refunds immediately and stop getting sales because their cheat stops working now instead of months from now, and then version 1.3 may not ever get released. Now all they know is that C and E are doing something the others weren't, but that could have been any of a dozen things so A and D don't know what to change.
Doing it that way also has another major problem: Suppose you do the ban wave. Do the people using the existing known detectable cheats now get to make new accounts and keep cheating? If you ban them again right away then the cheat makers get to keep making variants until that stops happening, but if you don't then the game is back to being full of cheaters the next day and the cheat makers are still making money selling the old detectable cheats to fund the development of undetectable ones.
I think ultimately it's to avoid devoting too many resources to the arms race by breaking it up into sprints. Mass ban waves also make community impact and news, and in some cases for the regular players it refreshes the scene just for a bit by clearing the muck. They can time it to coincide with in-game events or updates too then (which often break cheats), giving a window for non-cheaters to enjoy.
Using Activision as the example, when they do a mass ban after you've been cheating for 4 months straight how exactly are you going to figure out how it happened?
Isn't the whole point of the ban that it's not as simple as just "make a new account?" Isn't it tied to the PS+ / XBox Gold membership, or even the physical hardware?
How are you going to figure out how it happened if it happens after one day? There are different methods of cheating and the cheaters start favoring the ones that didn't get banned over the ones that did. The cheat makers who got banned snoop the telemetry the game is sending to detect cheating to determine if there is any detectable difference between what the game sends when their cheat is installed and when it isn't etc.
> Isn't the whole point of the ban that it's not as simple as just "make a new account?" Isn't it tied to the PS+ / XBox Gold membership, or even the physical hardware?
Tying it to a membership means they just create a new membership, which isn't a deterrent to anyone who is either only playing your game (so can cancel the old one) or likes cheating enough to pay for a separate membership in order to cheat. It might deter the people who can't afford to do that and are also using their subscription for other games, but banning them immediately instead of in waves would do the same thing.
Tying it to the physical hardware seems kind of pointless. They'd just buy a new device using the money they got from selling the old one to someone who probably won't realize it's banned from that game until after the return period expires. Also, then you've banned the innocent second hand purchaser of the device instead of the actual cheater.
I wonder if that label could be considered libel. Probably harder in the US, but from what I understand, in the UK (or just England?) the defendant must prove that it's true.
This is about to change though, since the national postal service got a whole bunch of people convicted of fraud based on a system they knew was buggy.
Maybe he was banned because as a developer, he had development tools installed on his machine, which increased the odds of him being labeled as a potential cheater.
Sometimes I even wonder whether other hackers could hack the machines of other players to install software that triggers the anti-cheat system: it then becomes difficult to lift the ban.
This appears to be the case in Apex Legends: https://old.reddit.com/r/CompetitiveApex/comments/1bhicc6/cl...
Also I wish more "good" hackers were in games, like the guy in GTA Online I ran into once who was shooting me with a money machine gun because Rockstar are greedy assholes.
Eh? Rockstar doesn't force you to buy Shark Cards, and everyone has gotten 11 years worth of DLCs for free. Making in-game money IS an essential part of the game. You also don't have to purchase every single vehicle or other item the game offers.
During my years of playing, I've met only a few cheaters who weren't complete douchebags (though some of them did act that way towards other players). I consider the "good" cheater to be a myth.
Unfortunately, a quick search didn't yield anybody doing the math like for the Star Wars: Battlefront (new) debacle.[1][2]
PS: The non-microtransactional design goal in multiplayer games will optimize for more play time.[3] How convenient to offer purchasable shortcuts.
[1]: https://www.reddit.com/r/StarWarsBattlefront/comments/7c6bjm...
[2]: https://www.reddit.com/r/StarWarsBattlefront/comments/7dmvdv...
[3]: https://www.reddit.com/r/gtaonline/comments/1i2qtay/comment/...
I get that I might be the one accused of cheating next time. But if that risk is tiny and the cost when it happens is $50 or $100 it sounds a lot more attractive than the alternative.
Also (obviously) I don't care about the account itself. I wouldn't play a game where I aggregate long term stats/items/status/whatever.
In a perfect world you just have private servers where you can have 90% effective anticheat and have humans sort out the rest.
If you use statistics, you will sometimes get it wrong, but in the other cases the cheaters are completely out of luck. You could offer the source code to your game willingly and it wouldn't help them very much.
If the cost of a false positive is $50 for the gamer and the chance of it happening is rare, I think many would quickly understand the value proposition from a game experience perspective.
Assuming your false negative rate is low (i.e., you have high classification margins), you can make it extremely undesirable for players to engage in unfair play. Even soft cheating like aiding teammates with streaming and Discord side channels could get picked up by these techniques.
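To make that value proposition concrete, here's a back-of-envelope sketch; the false-positive rate and account value are assumptions for illustration, not real numbers:

    // Hypothetical expected yearly cost to a legitimate player from false bans.
    // Both inputs are made-up illustrative values.
    double false_positive_rate = 0.0001;  // 1 in 10,000 legit players flagged per year
    double account_value       = 50.0;    // cost of re-buying the game
    double expected_loss = false_positive_rate * account_value;  // $0.005 per year

Weighed against the cost of routinely playing against cheaters, that trade can look pretty reasonable, which is the point being made above.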
I think thousands of innocent teenagers without credit cards will be furious. Not to mention anyone that takes a game semi-seriously and cares about their reputation after getting banned. Also, with real-dollar values tied to skins, you’re not just nuking someone’s $50 account — accounts and their associated items can be worth a lot of money.
Anti-cheats should need to be certain. They should also, however, ban the hardware ID, which lots of games companies choose not to do (because they’d lose money).
It would be even worse than the bans some developers hand out now, because their inherent randomness would be essentially just that. Not acceptable for any form of service.
When I play basketball I keep getting stuck playing against 7'6" guys with an 83% free throw percentage which is statistically very unlikely.
Alas my arguments they should be banned on statistical grounds have fallen on deaf ears :)
a) Are unconditional jumps common enough that they couldn't be filtered out with some set of pre-conditions?
b) It seems like finding the end of a function would be easy, because there's a return. Is there some way to analyze the stack so that you know where a function is returning to, then look for a call immediately preceding the return address?
Apologies if I'm wrong about how this works, I haven't done much x86 assembly programming.

- https://calwa.re/reversing/obfuscation/binary-deobfuscation-...
- https://www.nccgroup.com/us/research-blog/a-look-at-some-rea...
top part
jxx slow
fast middle part
end:
bottom part
ret
slow:
slow middle part
jmp end
There may be more than one slow part, the slow parts might actually be exiled from inside a loop and not a simple linear code path and can themselves contain loops, etc. Play with __builtin_expect and objdump --visualize-jumps a bit and you'll encounter many variations. Another common shape is a function that returns from the middle of a loop:

loop:
do stuff
if some condition: return
do more stuff
goto loop
Alternatively, the function might end with a tail-call to another function, written as an unconditional branch.

I'll do some reading on the latter part of your post, thank you!
i'm not sure how commonly tail calls are eliminated in other forthlikes at the ~runtime level since you can just do it at call time when you really need it by dropping from the return stack, but i find it nice to be able to not just pop the stack doing things naively. basically since exit is itself a threaded word you can simply¹ check if the current instruction precedes a call to exit and drop a return address
in case it's helpful this is the relevant bit from mine (which started off as a toy 64-bit port of jonesforth):
.macro STEP
lodsq
jmp *(%rax)
.endm
INTERPRET:
mov (%rsi), %rcx
mov $EXIT, %rdx
lea 8(%rbp), %rbx
cmp %rcx, %rdx # tail call?
cmovz (%rbp), %rsi # if so, we
cmovz %rbx, %rbp # can reuse
RPUSH %rsi # ret stack
add $8, %rax
mov %rax, %rsi
STEP
¹ provided you're willing to point the footguns over at the return stack manipulation side of things instead

1. Some jumps will be fake.
2. Some jumps will be inside an instruction. Decompilers can't handle two instructions at the same location (like jmp 0x1234, where you skip the jmp op and assume 0x1234 is a valid instruction).
3. The stack will be fucked up in a branch, but that is intentional, to cause an exception. So you can nop out an instruction like lea RAX, [rsp + 0x99999999999] to fix decompilation, but then you may miss an intentional exception.
IDA doesn't handle stuff like this well, so I have a Binary Ninja license, and you can easily make a script that inlines functions for their decompiler. IDA can't really handle it since a thunk (chunk of code between jmps) can only belong to one function, and the jmps will reuse chunks of code between each other. I think most people don't use it since there was a bug with Binary Ninja in Blizzard games, but they fixed it in a bug report a year or so ago.
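For anyone wondering what "jumps inside an instruction" looks like in practice, here's a minimal, made-up byte sequence (purely illustrative, not taken from any real game):

    // A linear-sweep disassembler decodes the 0xE8 as the start of a 5-byte CALL and
    // mis-decodes the real bytes that follow; at runtime the short JMP skips that byte.
    unsigned char trap[] = {
        0xEB, 0x01,  // jmp +1 (skip the next byte)
        0xE8,        // junk byte: looks like the opcode of "call rel32"
        0x90, 0x90   // real code continues here (NOPs as placeholders)
    };

Recursive-descent disassemblers that follow the jump target fare better here, though layered variants of the same trick can still trip them up.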
Most obfuscations are only trying to annoy people just enough that they move on to other projects.
I evaluated entering the space by building something AI-native; however, the business case just didn't make sense.
People play multiplayer games to have fun and interact with others. If you behave badly, be it cheating or otherwise, you should be banned from using the multiplayer service because your behavior impacts other people.
What if you behaved great but some guy fresh out of code boot camp's algorithm bans you?
"I'm locked out of my account because of a buggy algorithm and there is no recourse" is a recurring thing here.
Be a nuisance to society -> get fucked. That's a pretty universal principle
For "get fucked" measures you need pretty low rate of false convictions
But also, cheaters suck, and whoever's running the server should be allowed to kick you out.
And it's just a game that's not playable anymore, not the whole Steam account, isn't it?
Some random commercial third party can make an accusation and damage the value of thousands of games on a lark.
Meanwhile, any determined cheater just bought another copy of the game on an account dedicated solely to that task. This person suffers no extended consequence.
Other players paid too.
Personally, I opted out of these games, F2P already perverts most game design away from fun IMO. And despite all this crap it seems like people are complaining about cheaters more than ever, but maybe I'm just old now!
- Play a limited demo of a full game.
- Buy a full offline game for your console or PC.
- Play a F2P MMORPG (no anti-cheat software to speak of).
- Pay for an MMORPG subscription (also no anti-cheat software to speak of).
Cheats were less developed and so were anti-cheats. The F2P model was not as widespread either. The mobile app market didn't exist.
This is not the reality we live in anymore.
I've decided to not waste as much time as I used to on this stuff, because as I got older I learned more about how valuable time actually is.
Wow, I live in a first world country and that would still ban like half the adults I know (Mostly because our bill pay phone plans are terrible value), along with basically every teenager (which for COD, you would think is the core target market).
It's probably the only game I know of where the ranked version is more broken than the casual version...
One of my biggest takeaways was learning about "crackmes" - which are small challenge binaries designed to be reverse engineered in order to learn the craft. They're kinda like practice locks in the lockpicking community. The book comes with a bunch on a CD-ROM from memory - but there's plenty more online if you go looking. Actually doing exercises like this is the way to learn.
You don't start trying to reverse engineer COD. You build up to it.
greetz to readers of Unknowncheats, cs.rin.ru, etc.
Milworm (milw0rm?) also got me started back in the day.
UnknownCheats is also absolutely amazing for cheat development. Back when I was writing undetected kernel cheats for my own experimentation purposes, I learned so much there.
UnknownCheats was (still is?) good for getting information on undocumented APIs when game modding (for a good while the Half-Life SDK was incomplete).
It's a hard first step, but I highly suggest you take the time to analyze a small binary, starting with understanding the registers for the architecture, understanding the different function calls, and then looking at the ELF file and analyzing every section, how statically linked libraries work, and how dynamic linking works with the PLT/GOT. GPT models are REALLY good at helping you understand this, and you can also use Ghidra for decompilation. Do everything on Linux btw, as the tools are very easy to use and much less cumbersome than on Windows.
Once you understand all of that, tracing assembly is pretty easy: it's either register move operations, math operations, compare operations, jumps, or function calls and returns (which are basically just shortcuts for handling stack frames), with a few special instructions here and there which are usually just optimizations you can look up ad hoc. Once you get handy with Ghidra, you can look at decompiled C code and start replacing variable names to make the code readable, and then you generally get a good idea of the overall flow.
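As a tiny illustration of that taxonomy, here's a trivial function with (in comments) the rough x86-64 it typically compiles to; the exact output of course varies by compiler and flags:

    // Returns x+1 if x is positive, otherwise 0.
    int clamp_increment(int x) {
        if (x > 0)          // test edi, edi ; jle .else     (compare + conditional jump)
            return x + 1;   // lea eax, [rdi+1] ; ret        (math + return)
        return 0;           // .else: xor eax, eax ; ret     (register zeroing + return)
    }

Map each line of decompiled output back to one of those buckets and the "scary" disassembly view stops being scary pretty quickly.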
It's like the most addicting part of reverse engineering to me. Building signature lists, and then writing bindings to scripting languages to call those function pointers.
It's also the foundation of how many third-party mod platforms work, because you need to build a meaningful API to modders that isn't exposed by the first-party.
https://www.unknowncheats.me/forum/general-programming-and-r...
Here's an example from some shellcode loader I wrote: https://github.com/exploits-forsale/solstice/blob/c3fc9a55c6...
To manually construct a signature, you basically just take what the existing instructions encode to, and wildcard out the bits which are likely to change between builds. Then you'll see if it's still a unique match, and if not add a few more instructions on. This will be things like absolute addresses, larger pointer offsets, the length of relative jumps, and sometimes even what registers the instructions operate on. Here's an example of mine that needed all of those:
"48 8B ?? ????????", // mov rcx, [rdi+000001D0]
"48 85 C9", // test rcx, rcx
"74 ??", // je Talos2-Win64-Shipping.exe+25EE729
"E8 ????????", // call Talos2-Win64-Shipping.exe+25E45F0
"48 63 ?? ????????", // movsxd rax, dword ptr [rbx+000005D0]
"8D 70 FF" // lea esi, [rax-01]
Now since making a signature is essentially just finding a unique substring, with a handful of extra rules for wildcards, you can also automate it. Here's a Ghidra script (not my own) which I've found quite handy: https://github.com/nosoop/ghidra_scripts/blob/master/makesig...
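To make the matching side concrete, here's a minimal, hypothetical wildcard pattern scanner in C++; the structure and names are my own sketch, not taken from any particular tool:

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // One pattern byte: either a concrete value or a "??" wildcard.
    struct PatternByte {
        std::uint8_t value = 0;
        bool wildcard = false;
    };

    // Returns the first offset where every non-wildcard byte matches, if any.
    std::optional<std::size_t> find_signature(const std::vector<std::uint8_t>& haystack,
                                              const std::vector<PatternByte>& pattern) {
        if (pattern.empty() || haystack.size() < pattern.size()) return std::nullopt;
        for (std::size_t i = 0; i + pattern.size() <= haystack.size(); ++i) {
            bool match = true;
            for (std::size_t j = 0; j < pattern.size(); ++j) {
                if (!pattern[j].wildcard && haystack[i + j] != pattern[j].value) {
                    match = false;
                    break;
                }
            }
            if (match) return i;
        }
        return std::nullopt;
    }

Real scanners add tricks like skipping ahead on mismatches or only scanning executable sections, but the core idea is just this substring search with holes in it.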
A binary, like the underlying code, has commonly used code split into functions that may get called in multiple places. These calls can be analyzed either through static analyzers or by a human, who may analyze context of the callsite to guess what each Arg is supposed to do/be.
For modding, e.g. in a single-player game, one might want to find out where the engine adjusts the health points of a player or updates progress.
Sure is - I believe a few Source engine plugins do this when required (though mostly I think they use offsets into vtable pointers).
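For illustration, the vtable-offset approach mentioned above boils down to something like this sketch; the slot index, type, and function name are made up, and the vtable layout is an ABI assumption (MSVC/Itanium), not something guaranteed by standard C++:

    // Call an unexported virtual method through its vtable slot on an object
    // we only know by pointer. Slot 12 and the signature are illustrative only.
    using TakeDamageFn = void (*)(void* self, float amount);

    void call_take_damage(void* entity, float amount) {
        // Under the common ABIs, the first pointer-sized field of a polymorphic
        // object is the vtable pointer.
        void** vtable = *reinterpret_cast<void***>(entity);
        auto fn = reinterpret_cast<TakeDamageFn>(vtable[12]);
        fn(entity, amount);
    }

The appeal is that vtable indices tend to survive recompiles better than raw addresses, which is why plugins lean on them where they can.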
I am starting to think that cheats are just too hard to fight against. I am making a small, cheap online FPS, and I would let users trust each other instead, and hunt cheaters themselves, or maybe use AI like Valve is doing. I would not bother with anti-cheat software.
Also, players would have to manage and administer their servers themselves.
Players would be required to have a cellphone number attached, have a reputation score given by other players, and maybe provide an ID or some other strong auth method, with manual verification via a photograph, like it's done for some dating apps. Players would have to play like 10 hours before they could play competitive.
I am confident hardcore players would be motivated to do all those things to make sure there are fewer cheaters.
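A rough sketch of what that gating could look like server-side; every threshold and field name here is an assumption for illustration, not a real design:

    // Hypothetical competitive-queue eligibility check mirroring the ideas above.
    struct Player {
        bool phone_verified = false;
        bool id_verified    = false;
        double reputation   = 0.0;  // 0..1, aggregated from other players' ratings
        double hours_played = 0.0;
    };

    bool can_queue_competitive(const Player& p) {
        return p.phone_verified
            && p.id_verified
            && p.reputation   >= 0.6
            && p.hours_played >= 10.0;
    }

The point isn't the exact numbers, it's that every extra hoop raises the cost of a throwaway cheating account.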
If you've ever played a decent amount of basically any online game you'd know that players make cheating accusations CONSTANTLY based on very little evidence. And then there's also the social aspect of just reporting players you don't like to get them banned.
In such a system you'd get way more false positives than any kind of anti-cheat
and that was before DRM and anti-cheat rootkits.
Imagine having to upgrade my PC just to run memory-obfuscation SHA-256. The whole industry is like the '80s processed food era: just advertise, it doesn't even matter what you're selling.
I am a long time CS player, but I did briefly play one of the new CoD games, before they went crazy with Nicki Minaj skins and bong-guns.
A person was so convinced I was cheating, they started doing OSINT on me while still in a match, and they found my old UnKnOwNcHeAtS account as some kind of proof that I am cheating (that account was 12 years old by that point).
I abhor cheating, and I have a lot of interest in computer science, so of course I wanted to see how all of it works and did my research during my youth, taking care to never compromise the competitive integrity of the games I played, but if you look around, there is not a single game that I can recommend to people anymore.
Games like Escape From Tarkov are so busted, cheaters are stealing the barrels off people's guns and crashing their game/PC on command.
My beloved counter-strike's premier competitive game mode has a global leaderboard that acts as a cheat advertisement section within the game.
Games like Valorant are a cut above the rest on account of their massively invasive anti-cheat, but are nowhere near as clean as most fans claim, I mean, you could write a cheat for the game using nothing but AHK and reading the color of a pixel.
There is a whole industry of private matchmaking for counter-strike, built solely on the back of their anti-cheat and promises of pro-level play to the top players.
EDIT: I found the screenshot, it was MPGH not UnknownCheats, but yeah, they also had a game ban on their account.
Second, their code for networking was complete BS, they didn't even sanity-check player movement/location server-side and many more things. Ridiculous.
I still recommend writing an HvH cheat to anyone that wants to get into proggin' -- you get a taste of both static and dynamic RE, memory-level programming, UI development, bare dxsdk (usually), a skid-saturated environment, sysadmin (if you try to set yourself up an uber1337 cheat page), and a bunch of other little things, all in an environment where you're quite directly competing with others in the same situation.
though personally I can't be that mad if you wrote cheats yourself, I will be a bit angry but impressed too ;)
i did start writing code in middle school, though. php, mostly :)
php had also been a thing of mine, I spent many months in DALnet and EFnet #php. Primarily around the time of v3 prior to v4's big launch...
*with some exceptions.
I’ve been on CS since 1.3, and I think their system is pretty good. Sure you get cheaters sometimes, but it’s not that bad; maybe I’ve been pretty lucky.
Unless you use multiple users on Windows, a user-space anticheat (or anything else you run) can already read all your files and even the memory of other processes (Windows provides an API for this); putting it in the kernel adds the ability to do so for the other users. Invasiveness isn't really that good of an argument, as normal software can already do so much.
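For context, reading another process from plain user space really is just a couple of documented Win32 calls; a minimal sketch (error handling trimmed, and the PID and address would come from elsewhere):

    #include <windows.h>

    // Read `size` bytes at `address` in process `pid` into `out`.
    // Needs only PROCESS_VM_READ; no kernel driver involved.
    bool read_remote(DWORD pid, const void* address, void* out, SIZE_T size) {
        HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
        if (!process) return false;
        SIZE_T bytes_read = 0;
        BOOL ok = ReadProcessMemory(process, address, out, size, &bytes_read);
        CloseHandle(process);
        return ok && bytes_read == size;
    }

Which is exactly why both cheats and user-mode anti-cheats can do so much without ever touching the kernel.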
Is it because normal people are out of public competitive multiplayer so you're left with the cheaters and toxic hypercompetitives?
Personally I've quit when Starcraft 2 was new. Got tired of being called a stupid noob ... when I won.
I can't remember a single multiplayer game that didn't have cheaters of some form or another. None. Zilch. Zero. It's kind of why I never grew beyond playing MMORPGs, and even that passion ultimately died out.
You added a security processor to your hardware at ring -2, but hardware vendors are notoriously bad at software so it has an exploit that the device owner can use to get code running at ring -2. Congrats, your ring 0 anti-cheat kernel module has just been defeated by the attacker's code running on your "trusted" hardware.
But in the meantime you've now exposed the normal user who isn't trying to cheat to the possibility of ring -2 malware, which is why all of that nonsense needs to be destroyed with fire.
There is no reason for a GPU or network driver, or anything to have arbitrary physical memory access.
If a GPU needs space for a draw-calls, allocate it in the kernel and explicitly give permission to the GPU to access it.
An even better example might be virtual memory. Some memory page gets swapped out or back in, so the storage controller is going to do DMA to that page. This could be basically any memory page on the machine. And that's just the super common one.
We already have enterprise GPUs with CPU cores attached to them. This is currently using custom interconnects, but as that comes down to consumer systems it's plausibly going to be something like a PCIe GPU with a medium core count CPU on it with unified access to the GPU's VRAM. Meanwhile the system still has the normal CPU with its normal memory, so you now have a NUMA system where one of the nodes goes over the PCIe bus and they both need full access to the other's memory because any given process could be scheduled on either processor.
We haven't even gotten into exotic hardware that wants to do some kind of shared memory clustering between machines, or cache cards (something like Optane) which are PCIe cards that can be used as system memory via DMA, or dedicated security processors intended to scan memory for malware etc.
There are lots of reasons for PCIe devices to have arbitrary physical memory access.
Does a GPU need access to the memory of a usermode application for some reason? Okay, the GPU driver should orchestrate that.
> We haven't even gotten into exotic hardware that wants to do some kind of shared memory clustering between machines, or cache cards (something like Optane) which are PCIe cards that can be used as system memory via DMA, or dedicated security processors intended to scan memory for malware etc.
Again, opt-in. The driver should specify explicit ranges when initializing the device.
Several of those cases do indeed need arbitrary access.
> The moment a driver needs to be used to say allow an IOMMU range for a given device, the target computer has been tainted and you lose much of the benefit of DMA in the first place.
The premise there being that the device is doing something suspicious rather than the same thing that device would ordinarily do if it was present in the machine for innocuous reasons.
> Does a GPU need access to the memory of a usermode application for some reason? Okay, the GPU driver should orchestrate that.
Okay, so the GPU has some CPU cores on it and if the usermode application is scheduled on any of those cores -- or could be scheduled on any of them -- then it will need access to that application's entire address space. Which is what happens by default, since they're ordinary CPU cores that just happen to be on the other side of a PCIe bus.
> Again, opt-in. The driver should specify explicit ranges when initializing the device.
What ranges? The security processor is intended to scan every last memory page. The cache card is storing arbitrary memory pages on itself and would need access to arbitrary others because any given page could be transferred to or from the cache at any time. The cluster card is presenting the entire cluster's combined memory as a single address space to every node and managing which pages are stored on which node.
And just to reiterate, it doesn't have to be anything exotic. The storage controller in a common machine is going to do DMA to arbitrary memory pages for swap.
> And just to reiterate, it doesn't have to be anything exotic. The storage controller in a common machine is going to do DMA to arbitrary memory pages for swap.
I'd like a source for that if you have one. I'd be very surprised if modern IOMMU implementations with paging need arbitrary access. The CPU / OS could presumably modify the IOMMU entries prior to the DMA swap. The OS is still the one initiating a DMA transaction.
If the "put some CPU cores on the GPU" thing becomes popular, probably a lot.
> I'd like a source for that if you have one. I'd be very surprised if modern IOMMU implementations with paging need arbitrary access. The CPU / OS could presumably modify the IOMMU entries prior to the DMA swap. The OS is still the one initiating a DMA transaction.
Traditional paging implementations didn't use IOMMU at all -- a lot of machines don't even physically have one, and even the ones that have one, that doesn't mean the OS is using it for that. It might end up going through it if you have something like the storage controller is mapped as a device to a VM guest and then the host uses the IOMMU to map the storage controller's DMA to the memory pages corresponding to what the guest perceives as its physical memory, or things along those lines.
But remapping the pages for each access, even if theoretically possible, would be pretty expensive. Page table operations aren't cheap and have significant synchronization overhead, and to swap a page that way would require you to both map the page and then almost immediately do another operation to unmap it again. For each 4kB page, since they're unlikely to be contiguous. You can do the math on how many page table operations that would add if you were swapping in, say, 500MiB, which a modern SSD could otherwise do in tens of milliseconds. Notice in particular that this would make operating systems that do this get lower scores in benchmarks. And that this applies not just to swap as a result of being out of memory, but ordinary file accesses which are really a swap to the page cache.
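Back-of-envelope version of that math (4 KiB pages assumed, one IOMMU map plus one unmap per page):

    #include <cstddef>

    // Swapping in 500 MiB as individual 4 KiB pages:
    constexpr std::size_t bytes     = 500ull * 1024 * 1024;  // 524,288,000 bytes
    constexpr std::size_t page_size = 4096;
    constexpr std::size_t pages     = bytes / page_size;     // 128,000 pages
    constexpr std::size_t iommu_ops = pages * 2;             // ~256,000 map/unmap ops

Even at an assumed microsecond per operation that's roughly a quarter second of pure bookkeeping, on top of an I/O that the SSD would otherwise finish in tens of milliseconds.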
You could also run into trouble if you tried to do that because the IOMMU may only support a finite number of mappings, or have performance issues if you create too many. Then you get a slow device with too many pending I/O operations and the whole system locks up.
And even if you paid the cost, what have you bought? The OS could still give a device access to any given memory page for legitimate reasons and you have no way to know if the reason was the legitimate one or the user arranged for those circumstances to exist so they could access the page.
If I put a hardware connection to the memory (basically WIRES to my memory bus) then yes, it's very hard to detect. But that's also very hard and expensive to do...