The main solutions we have today are IP bans plus VPN blocking (maintaining a database of known VPN subnets and adding them all to the firewall), and a related fingerprinting technique that scans the structure of certain system folders.
https://redman.xyz/doku.php/schachtmeister2 was made specifically against people using VPNs.
It was made for Tremulous (an ioquake3 fork) where people kept evading IP bans, but it can be used for any other game.
It is not my project, but I know the author, and I could personally fork it and make it suitable for specific (or any) games if there is demand for it.
You can also use heuristics in schachtmeister2:
whois -10 "Hosting"
whois -10 "hosting"
whois -7 "Server"
whois -4 "server"
whois -10 "VPS"
whois -13 "VPN"
whois -3 "Private Network"
whois +7 "residential"
whois +7 "Residential"
whois -20 "Dedicated Server"
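Those rules read as keyword/weight pairs applied to whois output. Here's a minimal Python sketch of that kind of scoring; the exact schachtmeister2 semantics are my assumption (negative totals suggesting hosting/VPN, positive suggesting residential):

```python
# Hypothetical re-implementation of the keyword heuristics above:
# each rule adds (positive) or subtracts (negative) points when its
# keyword appears in the whois output. A strongly negative total marks
# the IP as likely hosting/VPN; a positive total as likely residential.
RULES = [
    (-10, "Hosting"), (-10, "hosting"),
    (-7, "Server"), (-4, "server"),
    (-10, "VPS"), (-13, "VPN"),
    (-3, "Private Network"),
    (+7, "residential"), (+7, "Residential"),
    (-20, "Dedicated Server"),
]

def score_whois(text: str) -> int:
    """Sum the weights of every rule whose keyword occurs in `text`."""
    return sum(weight for weight, keyword in RULES if keyword in text)
```

A typical data-center whois record then scores well below zero, while a residential ISP's record scores positive.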
Edit: I noticed that the git repository returns 502; I've contacted the maintainer.

Even without this, IP bans only go so far, as they're both easily swapped (VPN offers, renting a VPS to forward traffic, or even by design with an ISP handing out dynamic IPs on router reboot) AND overreaching:
- NAT: ban household / campus
- CGNAT: ban whole neighbourhood
- IPv6: ban whole /64 => whole household (because of SLAAC + random privacy addresses)
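The /64 point can be made concrete with the standard library: SLAAC privacy extensions rotate only the low 64 bits, so any effective ban has to cover the whole prefix. A sketch, not taken from any particular game's code:

```python
import ipaddress

def ban_prefix(addr: str) -> ipaddress.IPv6Network:
    """Collapse an IPv6 address to its /64, the smallest unit a ban
    can realistically target: privacy extensions rotate the interface
    identifier (low 64 bits), so banning a single /128 does nothing."""
    return ipaddress.ip_network(addr + "/64", strict=False)

# Two different privacy addresses from the same household:
a = ban_prefix("2001:db8:1:2:a1b2:c3d4:e5f6:1")
b = ban_prefix("2001:db8:1:2:ffff:eeee:dddd:2")
# Both collapse to the same /64, so the whole household is hit.
```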
You are very welcome on their Discord server, or Der Bunker's[2] (Zittrig) Discord server, too!
We may know each other. :)
[1] https://unvanquished.net/ (https://unvanquished.net/chat/)
I don't play online anymore because I get destroyed but it's still fun to pop in for a quick match against AI when I have 30 minutes to kill.
Since the server owner insists on allowing non-Steam accounts (pirated copies) to connect, we can't rely on SteamID bans, similarly to GUIDs in Unreal. It's a bit trickier to change the spoofed ID, as I assume it's buried somewhere obscure deep in the game install, but still possible. It's actually a very popular game in northern Africa, the Baltic states and surrounding areas, as well as north and west Asia: without these players the server would be a ghost town.
Anyway, our approach is twofold, carrot-and-stick style. Steam players get near-instant reloads and immunity to some of the more "enthusiastic" auto-modding/kick features: so for the price of a handful of VPN keys you get a legitimate, allowed advantage over most of the server population, as well as a reserved username and a "VIP" tag, plus you now own the game. It seems a great way to do it, as it's available to anyone instantly for that one-time fee (which goes directly to the game dev), or for free by playing at least one game a week for five weeks and then contacting the mod team on social media.
The other side to that (the stick), is that rather than simply kick/ban the player we usually take some time to have fun annoying them, to show them they're really not welcome, and make them actively not want to come back.
Disarming them then giving them F-tier weapons, a few random teleports out of bounds or stuck in the floor, repeated amx_rocket to turn them into a firework, amx_drug to max out FOV and add a "drunk" effect, and ofc a bit of teasing about what a low-skill loser you must be to have fun while AI plays the game for you.
There's also "illegal" amx plugins and commands, which are generally frowned upon and extremely abusable, but quite useful in these situations. My favorite (which most of the "illegal plugins" are based around) is amx_exec which essentially gives admins direct access to any client's in-game console, to run any command or set any setting!
It's actually kind of terrifying that this exists. For example, this set of commands sets the network rate to 1000 (that'll be fun for the cheater until they notice), changes their name, wipes all keybinds, then binds the default chat key to close the game, while setting max FPS low enough to be bothersome without being obvious! There are pre-built macros that do far worse to your settings too: although easily fixable by deleting your config to restore defaults, it would be very frustrating if you hadn't backed up your config files.
amx_exec cheatername "rate 1000"
amx_exec cheatername "name iCaNtAiM"
amx_exec cheatername "unbind all"
amx_exec cheatername "bind y quit"
amx_exec cheatername "fps_max 50"
On an intriguing side note: many servers charge for VIP advantages, to the tune of up to $20/month! At first I thought this pretty shocking, until I found out that there's some kind of shady clique: to be listed in a reasonable spot on 3rd-party server browsers, a hefty fee is required, and a significant proportion of this income gets spent on "boosts".
When our server owner stopped paying for "boost" for two months, mean player count dropped from 14/32 to 3/32, and max players from a regular 28/32 on weekends, to 12/32 on a Friday night if lucky. The player count rocketed as soon as the owner started paying again... but the crazy thing is it's $180/month!
Before getting involved with moderating, I thought running a fun, well-moderated, low-ping, high-performance deathmatch server dedicated to remakes/remixes of the 2nd most popular map in the game would be enough to be popular/busy. But no, apparently you have to pay extortionate fees to incumbent gatekeepers if you want your server to be visible to the majority of the playerbase!
Yes, we have something similar for UT2004, but only a handful of people are even aware it exists. It's too powerful and too easily abused. I have yet to share it, even with other admins.
I used to administrate CS 1.6 servers until a few years ago, and I have a question concerning amx_exec: I thought cl_filterstuffcmd had basically killed any admin slowhacking?
Or is it that most non-Steam CS 1.6 clients have it set to 0?
So a GUID accumulates reputation after some amount of play on the provisional server. If you get enough reputation by not cheating, the GUID gets whitelisted for the "good" server. You can have multiple tiers, so the really good/fun people get to the third or fourth tier of demonstrated non-cheating.
If they cheat and get banned, they need to climb the tiers with a new GUID again. Cheaters will want to cheat; they won't want to pay the dues. Legit players will happily try to get to the second and third tiers, so you could probably just require one hour of not-cheating for the first tier of server, and then maybe eight hours to get to the third tier.
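A minimal sketch of that tier logic; the 1-hour and 8-hour thresholds come from the suggestion above, while the intermediate 4-hour threshold is a made-up placeholder:

```python
# Hypothetical sketch of tiered GUID reputation: a GUID earns clean
# playtime, and crossing a threshold unlocks the next server tier.
# Getting banned resets reputation, so the cheater climbs again.
TIER_HOURS = [0, 1, 4, 8]  # clean hours needed for tiers 0-3

def tier_for(clean_hours: float) -> int:
    """Highest tier this GUID has earned so far."""
    tier = 0
    for t, needed in enumerate(TIER_HOURS):
        if clean_hours >= needed:
            tier = t
    return tier

def on_cheat_detected(reputation: dict, guid: str) -> None:
    """A ban wipes the GUID's reputation: back to the provisional server."""
    reputation[guid] = 0.0
```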
You could shadowban/honeypot after the first tier: shunt all the cheaters you detect to their own cheater server.
Is latency going to be good enough on mobile data (especially if they're also using proxies) for a FPS, though? Sure, they're using cheating software, but I wouldn't be surprised if the software gets the information it needs to cheat too late often enough for it to be useful.
Sophisticated cheats in games like CSGO (and other competitive shooters) are usually very subtle, such as displaying enemies on the mini-map when they shouldn't be visible. That provides a major advantage without requiring superhuman input, and the added latency is often negligible, especially when the info can be relayed to teammates: now you essentially have the entire team cheating with only one player suffering a bit of increased latency.
And I wouldn't say this is an edge case either: in my experience, the majority of cheaters I've encountered are individuals who play on an alt account and offer a service guaranteeing wins in ranked games.
Even for non-obvious use-cases, it's hard to beat the advantage provided by knowing the position of players.
On my own hotspot, I have less than 30ms of latency.
I got to Supreme (2nd highest rank) with 150 ms ping. The people I queued with hit Global.
It's possible to play legitimately with very high ping. The higher ping put us at a disadvantage, but the skill gap between regions made it worth it to arbitrage.
In practice this means at lower ranks, it was not at all uncommon to be matched with players with similar rank but vastly better skills.
It's basically impossible to keep one's rank at Supreme if you only play against Gold Nova or so due to the way the rating system works.
Regular IPs can post freely
VPN or mobile IPs (blacklisted) must pay for a key ($20/year) that allows posting from blacklisted IPs. Key is good for posting from one blacklisted IP, locked for 30 minutes, so users cannot share keys. That way, you can ban the user by their key, if their IP is public.
It's not a perfect solution but it seems to be the best they've found for such a situation so far.
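The key scheme is easy to sketch. This is a hypothetical reconstruction of the described behavior (one blacklisted IP per key, with a 30-minute lock on switching IPs), not the site's actual code:

```python
import time

class PostingKey:
    """A paid key allows posting from one blacklisted IP at a time.
    Switching to a different IP is locked out for 30 minutes, so a key
    can't usefully be shared among several people."""
    LOCK_SECONDS = 30 * 60

    def __init__(self):
        self.ip = None
        self.locked_until = 0.0

    def try_post(self, ip, now=None):
        now = time.time() if now is None else now
        if self.ip == ip:
            return True                  # same IP as before: always fine
        if now < self.locked_until:
            return False                 # still locked to the old IP
        self.ip = ip                     # rebind the key to the new IP
        self.locked_until = now + self.LOCK_SECONDS
        return True
```

Banning then happens at the key level rather than the IP level, which is the whole point of the scheme.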
EDIT: Well, I guess the tribe has spoken. Pretty surprising. I think y'all are just assuming you'll always be the ones with the "good" IPs...
What do they do in such cases?
Assuming they get the report after the fact and assuming their "no logging" promises are true, can they even do anything? They're not even supposed to know which customer did it, after all.
If their promises are false, wouldn't they reveal their hand if they handed logs over willy nilly?
On some Japanese BBSes, spammers tend to use non-Japanese IPs or data center IPs. A good chunk of the spam goes away by blocking non-Japan IPs (easy to do with BGP data) and disallowing data center IPs (these often host VPNs, scrapers, etc.) from posting.
Posting from overseas thus costs money or is not possible. The trade-off is losing a handful (1-100) of extra users versus significantly reduced spam for little effort. It's not surprising that most website operators choose the latter.
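A sketch of the prefix-based gate: in practice the allowed prefixes would be built from BGP/registry data for the country, while the ones below are documentation ranges used purely for illustration:

```python
import ipaddress

# Illustrative allowlist: permit posting only from prefixes announced
# for the allowed country. These are RFC 5737 documentation ranges,
# NOT real Japanese allocations.
ALLOWED_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def may_post(ip: str) -> bool:
    """True if the client IP falls inside any allowed prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED_PREFIXES)
```

The same shape works as a blocklist for data-center prefixes; only the membership test is inverted.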
I also know of a file uploader that recently had to block overseas IPs due to such IPs repeatedly uploading illegal content. This is an example of a few bad actors ruining things for everyone.
Blocking IP ranges by country or ISP is pretty much always going to have to exist as long as certain countries and ISPs turn a blind eye to abuse.
Even with as poor a solution as IP blocks are, it's the best we have and alternatives seem worse.
Anyway, it's a tradeoff between dealing with bad actors effectively and not impacting common users. There are a lot more bad actors than common users running into those sorts of IP bans, though.
Cheaters, which is why they’re getting banned in the first place
IIRC you also need one when playing from some countries, whether due to legal reasons or server restrictions.
I was going to mention how much I loved that game, until I realized I played UT99. Time sure does fly...
Game companies invade our privacy and destroy our computer freedom with ineffective malware tier rootkit solutions only to fail to solve the problem in the end. Their business model depends on enabling people to play with any random from anywhere in the world. They are forced to trust untrustworthy clients. The truth is people should not allow their computers to talk to strangers.
People should be able to play with whomever they wish.
Without a whitelist, it's only a matter of time before an actual cheater joins their server and ruins their fun.
Enumerating badness just doesn't work.
No. VPN blocking is useless for stopping malicious actors: most residential connections use DHCP, and VPN subnets are added and removed often enough that it's not that hard to find an "undocumented" one. It also completely excludes anyone using a VPN for non-malicious purposes.
Scanning files and folders is just ridiculous: not only an incredible invasion of privacy, but also trivial to work around.
Yes it doesn't "solve" the problem, and yes it removes some legitimate users, but it's by no means useless. Given the tradeoffs involved I'm not at all surprised it's so common.
If you have a solution that's less invasive (e.g., some businesses can get away with not providing anything expensive till after a payment has cleared the normal fraud window, and many businesses don't have obscene levels of malicious traffic; in those cases you can just let bad traffic run rampant and ignore it till it's a problem) then that's probably better, but blocking VPNs or whole countries or whatever can be the difference between a successful business and bankruptcy.
The cheats are software, and software has certain quirks, like the way it aims or the way it tracks. I'm willing to bet it has enough distinctiveness from human aiming to be classified. Couldn't a classifier work on the behavior of the cheating software itself, rather than relying on IP bans?
Which might be something you could guarantee, if the game were locked to wimpy console hardware; or if the game had minimal CPU physics such that it was effectively never running CPU-bottlenecked and there were massive gaps in frame-time where even the client CPUs are sitting idle, that a server running in lockstep could cram that kind of analysis into.
But gaming is a race-to-the-top, hardware-wise. The CPU in a gaming rig might not have as many cores as your average server CPU, but it's almost certainly going to have higher single-core perf.
And part of the reason for that, is that games really do try to use your whole CPU (and GPU), with AAA studios especially being factories for constant innovation in new ways to make even the minimum requirements just to run a game's physics, higher and higher every year.
And if the server can't do "faster than realtime" analysis of the streams of inputs of the players, then by queuing theory, it'll inevitably get infinitely backlogged — the server will keep receiving new analysis work to do every timestep, and will fall further and further behind, never catching up until new work stops being generated — i.e. until the match is over. And then it'll have to probably sit there for five more minutes thinking really hard before spitting out a "hey, wait just a minute..." about any given match.
Which is fine if there's a big central lobby server that the game is forced to connect to, and your goal is to ensure that some central statistic that that central server relies upon (e.g. match-rank ELO) gets calculated correctly, such that cheaters are prevented from climbing the leaderboards / winning their way into high-ranked play. (And that's exactly the situation the big eSports games companies are in.)
But in the context of older games that use arbitrary hosted servers and random-pairing (or manual lobby-based match selection) — or in modern, but "dead", games, that only persist due to being modded to accept private servers — this "after-the-fact" punishment is useless, as most servers have no incentive to do this analysis, especially when cheaters can just hop around between servers. So there's nothing preventing people from being matched with cheaters, sometimes over and over again, if the cheaters can just tell their clients to roll up with a new key+IP for every match.
...and that's assuming there even are servers. You can forget about any of this working in a p2p context. (Think about what a Sybil attack means in the context of a federated set of individual tiny disconnected p2p networks.)
Also, simple analysis of only the input streams, as you described, doesn't really depend on the physics rate of the game server and should be a lot cheaper computationally. It could even be offloaded to another process if it were found to be too impactful to run alongside the game server directly. Something all those extra cores might be good for.
a) run on 2nd pc passively capturing the screen and commands to a fake mouse device plugged into both machines,
b) "humanise" the aim with ai models trained on professional players
c) add random variances within the limits of human reaction times
So it doesn't solve things; really, it'd still be playing catch-up.
That being said, the vast majority of cheats are not that sophisticated. "Simple" analysis of player input should still be used to make low-effort cheats less effective or outright ineffective, especially if used to compare the consistency of a player's mechanical play. I doubt most cheaters want to just turn on a full bot that plays by itself for the whole game. You can build a model of play customized for an individual player to look for changes in mechanical skill during critical plays. Then even if that model were incorporated into the cheat client so that its 'actions' can't be definitively detected against the player's baseline, the cheat would at least be limited to that player always playing like it's their best day. Either that, or the cheater would have to go fully hands-off for that account, which I imagine is not as appealing for most cheaters.
Input analysis, even with much simpler approaches, can still be a valuable tool to make cheating more difficult and less opportunistic. The goal would be to raise the barrier to entry beyond just downloading and running a client. If people who consider cheating in a game have to order, wait for, and set up additional hardware, then acquire models trained for the latest version of the game that are also trained on pro play in a way that keeps the cheating humanly plausible enough to remain undetected, it will reduce the total number of people who cheat in that title. Will needing to acquire additional hardware stop all cheating? No; I had a friend as a kid who owned a GameShark that I used, and I ended up corrupting the save on one of my Pokemon games. But if all of that is what's required to cheat successfully and consistently, it will raise both the cost of developing cheats and their price to cheaters.
For top-level professional play, in-person tournaments on managed setups will remain the gold standard for the foreseeable future (and besides, they are attractive as events in their own right). And for the rest of us, we will continue to be trapped in the labyrinth with both the cat and the mice.
Edit: If you don't think this is an issue I urge you to Google "pokemon go belgium ip ban" for a fun rabbit hole.
You quickly run into the same kinds of problems you do in v4 though; most users have access to a shared pool of addresses, and you may need to ban the whole pool to ban an abuser, but then you also ban everyone else in that pool, and the abuser is more likely to have ability and motivation to use other pools.
It's better if you have multiple factors... if you don't like the IP, don't ban it, but be stricter on other measures, etc. So a well behaved client from a 'bad ip' can still play, but enough suspicious things and you can't play anymore.
> An example of an IPv4 IP address is 198.51.100.1.
I also love games with community-run servers for the same reason
https://en.m.wikipedia.org/wiki/The_All-Seeing_Eye which was sold to Yahoo.
Yahoo was a powerhouse back in the day, and one that Google offered to sell itself to. The world would be so different if it had.
Because they're 10 years behind the curve and don't understand that a game's lifespan is contingent on anti-cheat. Once it becomes clear to the casual player that a hacker is going to affect every gaming session, the game dies quickly. Many games have gone so far as to obfuscate the presence of hackers so that players are less likely to notice them (CoD)! Other games are built from the ground up with anti-cheat in mind (Valorant). Other games have an ID-verified 3rd-party system for competitive play (CSGO).
Personally, I think there is a middle ground between root-level hardware access and treating cheating as an afterthought. I'd lean more heavily on humans in the process: use ML models to detect potential cheaters, and build a team of former playtesters to investigate those accounts. There is zero reason a cheater should be in the top 100 accounts; an intern could investigate them in a single day! More low-hanging fruit would be investigating new accounts that are over-performing. I'd also change the ToS so legal action could be pursued against repeat offenders. Cheaters do real economic damage to a company, and forcing them to show up in small claims court would heavily disincentivize ban evaders. This probably sounds expensive and overkill, but in the grand scheme of things it's cheap; it could be done on the headcount budget of 2-3 engineers. It'd also be a huge PR win for the game.
Or you could spend a huge effort on cheatproofing only to find that no-one plays your game in the first place, e.g. Concord. I imagine getting cheaters in your game often falls into the "nice problem to have" category and it is easy to kick the can down the road.
How does CoD accomplish this, or other games that use similar strategies? I can't wrap my mind around how you could do this effectively while also not identifying hackers for the purpose of banning. Banning = the cheater buying another license to the game; I thought they liked banning people for that reason :/
Ha ha, you mean paying for the game and holding your Steam account as collateral?
The only trace of it is that your account profile will show that you have VAC bans on record, but you don't have to show your profile.
https://help.steampowered.com/en/faqs/view/571A-97DA-70E9-FF...
> Q: Can I use bans in other games to block users from playing in my game?
> A: No. VAC and Game bans should only prevent the user from playing on VAC secured servers in the game they received a ban in. A permanent ban should only be issued for your game if the user was caught cheating in your game.
https://partner.steamgames.com/doc/webapi/ICheatReportingSer...
It's complicated. Valve has conflicting guidance on this. What is Valve's actual position? The 13 year olds who cheat also buy IAP. In their opinion, if there are a lot of cheaters, sell pay to win items.
Otherwise, the consensus is hellbanning, meaning putting all the cheaters together on a server; VAC queries are used to achieve that.
One was from letting my friend use my Steam account. I wasn't using it, and when I wanted to use it again, my password had been changed and I had a VAC ban in CS 1.6. He said it wasn't him; I'm not convinced.
The other was in Dungeon Defenders. The game had a confusing policy where you were allowed to cheat on the "Open" servers but not on "Ranked". You could copy your stuff from Ranked to Open, so I copied it and used Cheat Engine to test some things. It turned out you were only allowed to cheat using mods from the Steam Workshop or something like that, so I got VAC banned.
Both bans are over 10 years old, so things might have changed, but I have never noticed any negative effects other than, obviously, not being able to play DD or CS 1.6 online.
The cheating server situation is a similar concept to hell banning but poorly executed.
Hell banning is the status quo. If you try to play Overwatch they probably query VAC and might match make you with other people with VAC bans.
It’s hard to know without working for the game studio.
There is no hard technical solution to preventing cheating in many games. It depends on how you define insurmountable DRM or anti-piracy measures, such as operating the only copy of the game's backend server code. If people have no viable alternative to playing on your remote servers, then you have an anti-cheat solution. The net result is that all games, in a Darwinian way, start to look like this. Similarly, on PS5 you cannot practicably pirate games, so there is a vibrant single-player business.
It all comes back to: are the only valid limitations on users insurmountable DRM? If we actually enforced copyright in this or any other country, it would be a different story.
Seems strange that they would discriminate based on vac bans in game but not for the people selected to judge others. Then again maybe my bans were too old.
It's so much easier to make cheats today than it was, say, 10 years ago.
It's also easier because more and more games are sharing common infrastructure like game engines, as compared to the past. What works in one Unreal game may save you a lot of time developing a cheat for another Unreal game.
These days, many online games encounter serious cheats within the first couple of days of release - if not the day OF release.
Cheating and anti-cheat used to rely a lot on the pure technical parts (like "is something sneaking some reads from the memory the game engine uses to clip models?"), which is ultimately not something you will win as a game developer (DMA/Hardware attacks or even just frame grabbing the eDP or LVDS signal and intercepting the USB HID traffic has been on the market for quite a while).
But implausible actions and results can only be attributed to luck so many times. Do 30 360-no-scope flick headshots in a row on a brand-new account and you can be pretty sure something is wrong.
If we can get plausibility vs. luck sorted out to a degree where the method of cheating no longer matters, that's when the tide turns. Works for pure bots as well. But it's difficult to do, and probably not something every developer is able/willing to develop or invest in.
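That intuition can be quantified: under a player's baseline headshot rate, the probability of a long streak shrinks geometrically. A sketch, where both the baseline rate and the flag threshold are illustrative assumptions rather than tuned values:

```python
import math

def streak_log_odds(p_headshot: float, streak: int) -> float:
    """log10 of the probability that a player with baseline headshot
    rate `p_headshot` lands `streak` headshots in a row by luck alone
    (independent shots assumed)."""
    return streak * math.log10(p_headshot)

def implausible(p_headshot: float, streak: int, threshold: float = -12.0) -> bool:
    """Flag when the luck explanation drops below 1 in 10^12."""
    return streak_log_odds(p_headshot, streak) < threshold

# A fresh account with a generous 30% baseline hitting 30 flick
# headshots in a row is roughly 1 in 10^15: luck is no longer plausible.
```

The useful property is that this says nothing about *how* the cheat works, only that the observed results are not explainable by skill plus luck.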
Anything that makes assumptions about player's skills runs into problems too. For any online PvP game, the skill ceiling will rise with time. What once may have been considered improbable may soon become what's consistent for the top 1% or even 0.1% of the playerbase given a few years.
As well, it can run into problems as rebalancing occurs and new abilities are released.
But if the anti-cheat is able to advance to the point that a cheater can merely rise 10% up the ranks, then, if you think about it, in a lot of ways the problem is solved. When I'm playing in a match, and one player is in the 80th percentile by their own merits, and another is "naturally" a 60th-percentile player but somehow cheating their way up to an 80th-percentile player... and if they can't see through walls or insta-headshot across the map or do anything else that blatantly violates the rules, they just play a little better... what's the actual difference?
There is some. It's not zero. If you can't get those cheaters under control in tournament play the situation will normalize to everyone using a cheat just to keep up in a Red Queen's race, and that's still bad for other reasons.
But it isn't the same impact as playing with Sir Snipes-A-Lot who headshots you through three walls the instant your spawn invulnerability wears off, either.
The only group you'd punish with that is skilled players that lose their account (and create a new one), but if you use a moving skill window they can grow back into their plausibility pretty quickly, and it's a small cost compared to everything else. And you could even mitigate that by making things like the first 10 matches require a different plausibility score than the matches after that.
And by "different" I don't mean "no scoring at all" or something like that. But a cheater tends not to cheat "a little bit". You might have togglers, but that sticks out like a sore thumb (people don't suddenly lose or gain skill like that). And even if that fails (lots of "cheating a little bit", for example), you've still managed to boot out the obvious persistent cheating.
And that's just with 1 example and 1 scenario. Granted, that bypasses the fact that it is still difficult and doing it broader than one example/scenario is even more difficult, but that's why I ended the previous comment pointing out the difficulty and associated cost, which goes hand in hand with the balancing difficulty you pointed out. Even tribunal-assisted methods (not sure if Riot games still does that) have the same problem.
And - what about experienced players who cheat?
In some scenes, it's actually more often that cheaters are some of the best, most experienced players who have a strong competitive lean and feel they 'deserve' to win, so use cheats to get an edge. It's far more common than you'd think.
That's the problem with any anti-cheat system. It's all the what-ifs. Every single 'clever idea' that has been theorized under the sun has been tried and most have failed.
Experienced players who cheat will still be subject to plausibility. Say there is a normal amount of variance in human play, but suddenly some player no longer shows variance in their actions: that's not plausible at all. Or a player looking at things they cannot see; that might sometimes be a coincidence, but that level of coincidence suddenly increasing by a drastic amount is not plausible.
Again, this sort of thing doesn't catch all subtle cheaters, but those are also not the biggest issue. It's the generic "runs into a room, beats everyone within 10ms", and "cannot see, but hits anyway all the time" type of cheat you'd want to capture automatically.
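A sketch of the variance idea: compare the spread of a player's recent reaction times against a floor no human sustains. The 15 ms floor and the minimum sample count are illustrative assumptions, not calibrated values:

```python
import statistics

def variance_collapsed(reaction_times_ms, floor_ms=15.0):
    """Humans show natural spread in reaction times; a bot fires with
    near-constant delay. Flag when the sample standard deviation of
    recent reaction times falls below the floor."""
    if len(reaction_times_ms) < 10:
        return False          # not enough samples to judge
    return statistics.stdev(reaction_times_ms) < floor_ms
```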
A what-if in a tournament or among the top 1% of players involves such a small set of players that you'd be able to do human observation. Even then someone could cheat, but you're so far outside the realm of general cheating that I wonder if it's worth including in a system that mostly benefits mass-market players.
Either way, this sort of detection is usually done in the financial and retail world, and results in highly acceptable rates and results. It's not perfect with a 100% success rate or something like that, but it's pretty successful. Just not something studios or publishers seem to want to invest in. It's much simpler to just buy or licence something (like Easy Anti-Cheat). Broad internal expertise isn't something the markets are rewarding at this point.
You cannot and should not rely on that, depending on what account really means, e.g. in ioquake3 games, having a new GUID (you delete a specific file to get a new one) makes you a new player.
> A smurf is a player who creates another account to play against lower-ranked opponents in online games.
Happens in many games, including League of Legends on which people typically spend a lot of money.
I suppose that matters less if we're doing checks on the actual data, but for the player base, you cannot rely on what the game reports about the experience of your opponent, which makes for very confusing matchups (and the accusations that go with it).
Like level up without getting XP by playing? That renders it pretty useless.
Speaking of, I hate games that are "pay to win".
0: https://www.ign.com/articles/final-fantasy-14s-latest-raid-s...
But I guess the documentation and standardization are even more advanced ?
1. Determine minimum human reaction times and limit movement to within those parameters on the client side. (For example a human can't swing their view around [in a fps] in a microsecond so make that impossible on the client) this will require a lot of user testing to get right, get pro players and push their limits.
2. Build a 'unified field theory' for your game world that is aware of the client side constraints as well as limits on character movement, reload times, bullet velocities, etc. Run this [much smaller than the real game] simulation on server.
3. Ban any user who sends input that violates physics.
Now cheating has to at least look like high-level play, instead of someone flying around spinbotting everyone from across the map. Players hopefully don't get as frustrated when playing against cheaters, as they assume they are just great players. Great players should be competitive against cheaters as well.
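Step 3 might look something like this server-side check. The per-tick rotation cap is a made-up placeholder, since (as step 1 says) the real limit should come from testing actual players:

```python
# Sketch of the "ban input that violates physics" check: reject view
# rotations faster than a human can physically produce in one tick.
# 30 degrees per 16 ms tick is an illustrative placeholder, not a
# measured human limit.
MAX_DEG_PER_TICK = 30.0

def violates_physics(yaw_prev: float, yaw_now: float) -> bool:
    """True if the view swung further in one tick than the cap allows,
    handling wraparound at 360 degrees."""
    delta = abs(yaw_now - yaw_prev) % 360.0
    delta = min(delta, 360.0 - delta)       # shortest rotation path
    return delta > MAX_DEG_PER_TICK
```

A spinbot snapping 180 degrees in a single tick trips the check; an ordinary small adjustment, or a wrap from 350 to 5 degrees, does not.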
Take a moment and think about how you would design cheats that would be undetectable: hotkeys, real-time adjustments, all the options and parameters you could provide a cheater to dial in their chosen experience while also keeping them looking legit.
Then realize cheat developers thought of all that decades ago and it is waaayyyy beyond what you can dream up in a few minutes. Hell cheats nowadays even stop cheaters from inadvertently doing actions that would out them as cheaters.
The problem isn't cheating itself, the problem is players feeling like they have been cheated (and thus not buying micro transactions in the future).
If you can limit player actions to things that look plausibly human, fewer players will feel cheated and they will be less likely to drop out.
This system would be put in place on top of existing systems and, if implemented as I have described, could be done fairly cheaply from an operational perspective (getting it off the ground will require a good bit of dev time).
If you had ELO based matchmaking (that dropped matches where the player performed far below what they had previously done to prevent sandbagging) a cheater with "perfect play" would end up only playing against other cheaters after a time.
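For reference, a minimal sketch of the standard Elo update plus the sandbagging filter described above; the "far below baseline" threshold is an invented placeholder:

```python
# Standard Elo expected score and rating update, plus a crude
# sandbagging filter: discard a match for rating purposes when a player
# performs far below their own historical baseline. The 0.5 floor is a
# made-up number, not from any real system.
K = 32  # common K-factor choice

def expected(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected_score, actual_score, k=K):
    """New rating after a match (actual_score: 1 win, 0.5 draw, 0 loss)."""
    return rating + k * (actual_score - expected_score)

def rated(performance, baseline, floor=0.5):
    """Keep the match only if performance isn't suspiciously far below baseline."""
    return performance >= floor * baseline
```

With this, a cheater with "perfect play" keeps winning, their rating keeps climbing, and the matchmaker eventually only pairs them with similarly inflated accounts.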
Any game I pay for that pressures me to pay with micro transactions already makes me feel like I've been cheated. "Free" to play games might be motivated that way though.
Although I doubt it would stop cheating, making sure that players can't do impossible things is absolutely a good idea and something that should have been done ages ago.
The best solution to avoid cheating is to play with people you know. Expecting a good time when playing with internet randos from all over the globe is maybe too optimistic.
Yeah, most games have a built-in aimbot, called "aim assist". I do not like it; in fact, I find it annoying as a player, too (I come from Quake 3).
No, those are still just as vehemently hated as “closet cheaters”, for example the whole XIM / Cronus infestation on any game that has controller AA.
It’s still possible to, on average, spot if it’s a closet cheater or an actual good player due to things like movement and gamesense, but for the average player it will be much less obvious, leading to a huge amount of rage towards good players because they are by default suspected as “just another closet cheater.”
A CS:GO player with good gamesense will habitually keep their crosshairs at head height and aim at corners where an enemy is likely to emerge. They'll have an intuitive sense of how long it takes to run from one point on the map to another. They'll listen through walls for footsteps to try and decode where the enemy are, where they're headed to and what strategy they might be about to attempt.
To the uninitiated, it looks a lot like cheating - you peek through a window and instantly get headshotted before you've had any chance to react. To the guy who hit you, it's just basic gamesense - you did a predictable thing and he punished you for it.
You just described most competitive games (even vaguely so), and 100% of esports.
There's also the multi-world randomiser community, where people network a bunch of emulators together, and finding an item in one game can actually unlock something else in another player's game.
Demomen on the other hand use an aimbot so they can hit you with those parabolic projectiles in the face, even if you're behind a wall and they can't see you at all.
Hilarious, and shitty.
What I would try is to hire a red team & blue team and put them in a sandbox environment. The red team cheats on purpose. The blue team is guaranteed to be playing legitimately. Both teams label their session data accurately. I then use this as training & eval set for a model that will be used on actual player inputs.
The only downside is that you will get a certain % of false positives, but the tradeoff is that there is literally nothing the cheaters can do to prevent detection unless they infiltrate your internal operations and obtain access to the data and/or methods.
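A toy sketch of that idea, with invented feature names and a deliberately trivial nearest-centroid model standing in for a real classifier:

```python
# Toy version of the red-team/blue-team idea: both teams play in a
# sandbox and accurately label their sessions, then a model is fit on
# the labels. The features and numbers are invented; a real system
# would use a proper classifier, not this nearest-centroid toy.
def centroid(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(red_sessions, blue_sessions):
    """red = known cheaters, blue = guaranteed-legit players."""
    return {"cheat": centroid(red_sessions), "legit": centroid(blue_sessions)}

def classify(model, features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# e.g. features = [avg_reaction_ms, headshot_ratio] (hypothetical)
model = fit(red_sessions=[[90, 0.9], [80, 0.95]],
            blue_sessions=[[250, 0.3], [300, 0.2]])
```

The point of the sandbox is exactly that the labels are trustworthy: the cheater can tamper with their inputs, but not with the ground truth the model was trained on.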
Now add in that I'm running a physics-heavy game with 120 tickrate, (considering higher after more tests), with fine motor control action combat, aimed to scale to mmorpg size, and it really becomes a challenge!
The world is much more complex now that YOLO-based aimbots exist, and I think the real answer is that anti-cheats are now defeatable, period.
You can craft a private binary that has no hash registered to any major anti-cheat service on the client-side, and on the server-side you’re limited to what is allowed by game rules.
Since there are no mechanisms for preventing superhuman reflexes, and there probably shouldn't be, it's an issue that can no longer be solved.
So you need community judgement, and that too is boring. Good players being accused of cheating in Counter Strike is a years old and entertaining problem.
the what ?!?
Sorry :'( I didn't expect the post to get this much traffic.
IMHO, one of the most effective ways to stop ban evaders is to actually charge money for the game.
I'll take a ~99% cheat-free experience over not having any improvement at all.
Cheaters are NOT price sensitive. This is their preferred form of entertainment, i.e. being a king in their little kiddie pool, so they don't mind spending $60 every month on a new account/game key/whatever you charge them.
People in CS:GO are perfectly happy to be banned with hundreds of dollars of skins in an account, because they either spent like $5 getting someone else's compromised account, or they are paying $30 a month to a cheat service anyway.
I bet there is a shit ton of overlap between frequent cheaters and pay-to-win whales.
The reliable way to make people cheat less in your game is cheater honeypots. Instead of banning and just starting the hunt for a cheater all over again when they buy a new account, you silently force them into matchmaking with only other cheaters or purposely abusive bots, or artificially harm the cheater's gameplay, say with fake lag or by just ignoring keypresses sometimes. Ruin their fun and they will stop ruining your game. Then you turn the adverse knowledge game on them: they have to figure out whether they are regularly playing with cheaters or bots in order to know they need to buy a new account.
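A rough sketch of what that shadow matchmaking could look like; the flagging logic and the fake-lag range are invented for illustration:

```python
# Honeypot matchmaking sketch: flagged players are silently routed into
# a shadow queue so they only ever match with other flagged players,
# and optionally get artificial lag. How players get flagged is out of
# scope here; the 50-300ms lag range is a made-up example.
import random

class Matchmaker:
    def __init__(self):
        self.queues = {"normal": [], "shadow": []}
        self.flagged = set()

    def enqueue(self, player):
        pool = "shadow" if player in self.flagged else "normal"
        self.queues[pool].append(player)

    def make_match(self, pool, size=2):
        """Pop a match from a pool, or None if not enough players yet."""
        queue = self.queues[pool]
        if len(queue) < size:
            return None
        match, self.queues[pool] = queue[:size], queue[size:]
        return match

    def extra_latency(self, player):
        # Artificial lag for flagged players, in seconds.
        return random.uniform(0.05, 0.3) if player in self.flagged else 0.0
```

The key design choice is that nothing is announced: from the cheater's side, the game simply got worse, and they can't easily tell whether it's the server or their own bad luck.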
I think that was us! We ended up combining it with other fingerprinting indicators, but the whole 'use VGUI' was a surprisingly effective way at handling this. I believe they removed the web browser in ~2018, which was disappointing. Being able to have custom skill trees / fun integrations with servers was really powerful!
It's trivial to decrypt HTTPS with tools like Fiddler or Burp Suite, assuming this built-in browser uses the system proxy and system certificate list.
Given that the way of circumventing the issue at hand is to delete a single local file, which is far simpler than finding the actual request and setting up Fiddler or Burp Suite, this worked well enough.
No need to overengineer.
It's also worth noting this is a 3rd party dedicated server provider, who manages and leases community run game servers. Getting a ban here would prevent you from playing on that provider's servers, but not any of the official matchmaking ones or servers from another hosting provider.
They implemented some specific exceptions but generally recommended to not play on untrusted networks to avoid getting banned along cheaters in the same network.
That's my take from the article.
although it has to be said that we are better off without having vgui in the first place.
this kind of sneaky tracking is so widespread today on the Web that it is nearly impossible to be bothered with evading it. whether it is the "viewport" or what extensions you use, you might as well use Tails to surf the internet at that rate.
but using a logical fallacy to exploit it for the greater good does seem appealing.
I know you’re joking, but if you had filed a patent you would have had to reveal the trick, thus rendering it immediately useless.
Doesn’t detract at all from your post. Fun read.
It's crazy how rampant cheating in multiplayer games, especially competitive ones, has gotten. Ten years ago, I thought it was at an extreme, but it's only gone up since then.
Part of the problem is that for some software developers, writing cheats brings in a massive amount of money.
So instead of some teenager messing around making unsophisticated cheats, you have some devs that are far better at writing cheats than game developers are at preventing them.
It doesn't help that game devs have to secure everything, everywhere, but cheat devs only have to find a single flaw.
Which seem to be exclusively FPS games with ~10M+ players?
I don't even remember the last time I heard of a game outside that very narrow (albeit decently popular) category having complaints about cheaters. Meanwhile for these games, I hear about it like every month, and all this despite this genre being among the ones that I play the least!
Escape from Tarkov comes to mind. An extremely hard and niche first person shooter with RPG elements. It is a private Russian company so we don't know exact player numbers, but it is estimated to be ~200k by some hits in a google search.
There are people who will provide carry services and guns and gear for plenty of people who will pay for it, as well as other providers selling the cheats that the carriers use for a weekly fee. The people who are providing these services are getting paid in USD when their local currency has a far lower value. It isn't a moral thing, it is a money thing.
You know that you sometimes don't know a bug exists before someone exploits it or uses your software in a way that you did not think of. There are experts who stand to make tons of cash if they can create or use an exploit that people will pay money to advance with.
The only way to prevent this is something that no one wants to hear, but it needs to be a unique citizenship identifier of some sort, since HWIDs and other means of tracking are mostly useless.
FPS games are kind of the gold standard when it comes to competitive environments, and thus gather cheaters or people complaining about cheaters substantially more than most other game genres.
Mind you I don't know if that's the case on privately hosted servers as well, since those could be manipulated to give players the points needed to get the lootboxes.
Not that there aren't ways of making money that do benefit from cheating. Like creating high-ranking accounts to sell. Which some people buy for the status of the rank...
If you have access to the game's memory etc, it's pretty easy to create an aimbot or thing that lets you see thru walls et cetera.
How you gonna cheat in a moba? It's a strategy game, you need, like, cutting edge AI to beat the best humans at it. In fact OpenAI specifically worked on an AI to play Dota 2, it was that hard.
A: laziness and cost. It just doesn't matter the same way that banking code matters, I guess.
So they toss on some cheap anti cheat instead of architecting it safely (expensively.)
- Almost all games are going to use a licensed or shared game engine. That means the software architecture is already known to skilled cheat developers with reverse engineering skills.
- Obfuscating the game will only go so far, as demonstrated by the mixed success of Denuvo DRM.
- The game will not be the most privileged process on the machine, while cheaters are glad to allow root/kernel access to cheats. More advanced cheaters can use PCIe devices to read game memory, defeating that mitigation.
- TPMs cannot be trusted to secure games, as they are exploitable.
- Implementing any of these mitigations will break the game on certain devices, leading to user frustration, reputation damage, and lost revenue base.
- And most damning, AI enabled cheats no longer need any internal access at all. They can simply monitor display output and automate user input to automate certain actions like perfect aim and perfect movement.
> Obfuscating the game will only go so far, as demonstrated by the mixed success of Denuvo DRM.
Denuvo is for the most part DRM rather than anticheat. Its goal is to stop people pirating the game during the launch window.
> The game will not be the most privileged process on the machine, while cheaters are glad to allow root/kernel access to cheats.
This ship has sailed. Modern Anticheat platforms are kernel level.
> TPMs cannot be trusted to secure games, as they are exploitable.
Disagree here - for the most part (XIM's being the notable exception) cheating is not a problem on console platforms.
> AI enabled cheats no longer need any internal access at all. They can simply monitor display output and automate user input to automate certain actions like perfect aim and perfect movement.
I don't think these are rampant, or even widespread yet. People joyfully claim that because cheats can be installed in hardware devices that there's no point in cheating, but the reality is the barrier to entry of these hyper advanced cheats _right now_ means that the mitigations that are currently in place are necessary and (somewhat) sufficient.
Anticheats work in layers and are a game of cat and mouse. They can detect these things sometimes, and will ban them (and do hardware bans). The cheaters will rotate and move on, and the cycle continues. The goal of an effective anticheat isn't to stop cheating, it's to be enough of a burden that your game isn't ruined by cheaters, and not enough of a target to be fun for the cheat writers.
so you use a kernel level anti-anti-cheat
The nature of FPS games means only environment integrity can stop cheating. It's not that the game is exploitable per se; it's that the skill component of the game can be performed perfectly by a computer.
Conversely who knows how long it will take for AIs to play Hearthstone with never-before-seen-cards well.
- GOOD software is simple and easy to understand, which makes it EASY to cheat in.
- BAD software is needlessly complex and finicky, so it's HARD to rig it for a cheat.
- Anti-cheats intentionally make software BAD and over-complicated, so cheaters have a hard time modifying it. But computers are brittle and also aren't smarter than humans, so cheaters will eventually find a way.
- Security is a completely irrelevant topic since game clients are "bought" and run on your hardware; Digital Restrictions Management built to work against you as a user is anti-consumer, anti-right-to-repair, anti-human, a super bad thing, and lots of effort is made to keep PCs away from it as much as practical.
It has nothing to do with laziness or cost. If anything, it'll be the best-programmed game that gets hacked fastest, and the PS2 that gets emulated last.
And cheats do not always rely on exploitable bugs. A bot using screen capture and input device emulation works at the OS level and in other contexts (ex: accessibility), it would be a legitimate thing to do.
Some games aren’t able to prevent cheating. The client has the data on where the enemies on their screen are. The cheat only needs to move the mouse and click on the enemies heads. Other games like MMORPGs involve the cheat just playing the game and farming on behalf of the player.
It just becomes a cat and mouse game where the anti cheat is trying to detect something hooking into the game process while the cheat tries to hide itself.
From a player perspective that's not cheating, that's running a bot. It's automation of a routine grind - which is typically designed to make players hate it and spend money instead. Automating boring stuff is simply natural.
For pay-to-win games it's effectively a balancing system, a pushback against player-hostile mechanics. Not unlike an adblocker on the web.
That's strictly in context of MMORPG genre, of course.
When you have software running locally, you can arbitrarily modify how it runs.
An aimbot, for example, is a powerful cheat, and there's no amount of security that can prevent one from being used, short of an anticheat able to look deep into what your system is doing and what it contains. The only way to prevent that kind of thing is to remove your control over your own computer.
Me with a pen and paper exceeds many human capabilites.
Likewise with wearables like a smartwatch.
Does it have to be direct neural integration to be a cyborg? People with profound brain injuries have definitely been enhanced to the point of being able to interact again.
But if we have to define a criteria... I guess, integrated just enough so it can't be trivially removed, making it more of a "body part" rather than a "tool".
Point is, it'll certainly spark a discussion and re-evaluation of what's "fair", potentially shifting the consensus from somewhere around the current "glasses are fair game, but a programmable mouse is not" to somewhere more accepting of differently-abled individuals.
Well, you can on PC at least. Xbox and Playstation security has matured to the point that code modification in online games isn't really a thing anymore, the worst they have to deal with is controller macros most of the time.
1) Their secure boot implementation has never been broken, which means you can't upgrade from an exploitable version N firmware to a non-exploitable version N+1 while persisting a backdoor like you could on older systems like the PS3. You're stuck at version N until another exploit is found.
2) They rotate the crypto keys used for online play with every new firmware so they can easily lock those old exploitable firmwares out of online play for good, even if they try to spoof their version number. There's no getting around not having the new keys.
Meanwhile the Xbox One took a decade to get even a limited jailbreak that allows arbitrary code execution inside the game sandbox, but can't escape the game sandbox to take over the kernel, and the Xbox Series systems have yet to be jailbroken at all on any firmware.
Hypothetically being able to break anything with physical access doesn't count for much in practice if the thing you want to physically attack is buried inside a <7nm silicon die, doesn't trust anything outside of itself, and has countermeasures against fault injection attacks. The Switch may well be the last big victory for console hackers, the writing has been on the wall for years now.
anti web-scraping techniques
The most devious version of this I have ever seen left me baffled, astonished and completely helpless:
This website I was trying to scrape generated a new font (as in a .woff file) on every request. The font had the positions of the letters randomly moved around (for example, the 'J' would be in place of the 'F' character in the .woff, and so on) and the text produced by the website would be encoded to match that specific font.
So every time you loaded the website you got a completely different font with a completely different text, but for the user the text would look fine because the font mapped it back to the original characters. If you tried to copy-and-paste the text from the website you would get some random garbled text.
The only way I could think of to scrape that would have been to OCR the .woff font files, but the sheer processing cost of OCR would make mass-scraping impractical.
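Stripped of the font machinery, the scheme is essentially a fresh substitution cipher per request; a toy model, with a plain dict standing in for the font's character map:

```python
# Toy model of the per-request font scramble: the server shuffles the
# alphabet, serves text encoded under the shuffle, and the generated
# .woff maps each scrambled code point back to the right glyph. Here a
# dict plays the role of the font's cmap; uppercase-only for brevity.
import random
import string

def make_scramble(seed):
    letters = list(string.ascii_uppercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    # encode maps real char -> served char; the font renders served -> real
    return dict(zip(letters, shuffled))

def encode(text, scramble):
    """What the server actually ships in the HTML."""
    return "".join(scramble.get(ch, ch) for ch in text)

def decode(text, scramble):
    """What a scraper recovers once it knows the glyph mapping (e.g. via OCR)."""
    inverse = {v: k for k, v in scramble.items()}
    return "".join(inverse.get(ch, ch) for ch in text)
```

Copy-pasting from the page gives you the encode() output (garbage), which is exactly why the trick defeats naive scraping; recovering the inverse mapping per request is the expensive OCR step described above.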
It's kind of annoying and prone to break, but I'd rather have that than whatever Facebook is doing, where every class name, ID and identifiable tag in the markup gets randomly regenerated every once in a while.
I am actually surprised no one went: "actually that technique is called 'chicken ostrich sandwich' and was first employed in babylon in 2000BC"
You do need to determine the "correct" character code for each glyph, but there are lots of ways to do that, on a spectrum from manual to automated. And you only need to do it once.
my 2018 iPad Pro does OCR on images in Safari instantly. People only think OCR is slow because Adobe Acrobat still uses the same single-threaded OCR algo it’s had for decades now; then consider how blazing a GPU-based impl would be…
I can only assume the recent uptick is due to games adding tradable cosmetic items which has made it financially viable to cheat as most cheaters seem happy to drop a lot of money on cheats as well as $80 to re-buy a game once they eventually get banned.
I assume there is lots of cheating because every game has a matchmaking system with rankings for fairness. And there's a huge number of people who feel locked into a low rank because of bad teammates (which makes no sense statistically speaking), and believe that if they could just bump something they would do well.
There are others who just want to show off a high ranking.
And the guys who just want a cheap win, at the expense of ruining everyone else's game.
And then there's the business side of this. Cheat tool makers making money off these kinds of people. High-ranking players selling boosting services or high-ranking accounts (smurfing and cheating feel very similar on the losing side). And even the high-ranking players providing boosting can cheat to perform the service in less time.
Skill-based matchmaking with any form of public ranking (showing a number or tier) will always be full of people trying to game the system instead of trying to get better at the game. Especially in team games.
It has the very nice advantage that if they go looking for the fingerprinting, they may or may not find it by random chance. It is security through obscurity, but by raising the bar for ban evasion you do actually remove a lot of people.
I don't know about CS, but TF2 has the ability to disable server MOTDs - how does that affect this?
I think even more infuriating than blatant hacking is the epidemic of "micro cheating", for lack of a better way to put it, that I've seen prevalent in some games: cheats that boost some stats or reactions by amounts large enough to help the cheater but small enough that new or inexperienced players have absolutely no way of telling if someone is cheating or genuinely good, especially in games with high skill ceilings. At least when it's blatant you can leave without wasting time, but when they're doing it subtly you end up getting tilted and spending the whole match with a bad taste in your mouth, second-guessing whether someone is actually playing fair or not. Chivalry 2 is a really bad offender for this. Once you notice it you can't unnotice it: almost every match will have at least one guy with his swing/move speed adjusted by ~10%, and in a game where swing manipulation is a legitimate mechanic it can be borderline impossible to catch someone out on it unless you're really paying attention.
Every once in a while there would be a ban wave - implying bot detection and handling was a manual / batch job process - but he'd just get 25 new copies / accounts, the income he made was more than enough to make up for it.
Of course, that assumes he was able to funnel the money out quick enough. And also, both Valve and Blizzard have their own incentive to not be too hard on bots, as they get a cut for every transaction. As long as people don't stop playing / paying because of bots.
The article addresses this specifically and concisely. It starts with “I'm not being funny and I mean no disrespect.” and then becomes very Australian.
This was for the game counter-strike (I don't remember which version, either Source or early CSGO). The logic of the cheat was:
- I manually aim, with the sniper, close to the wall of an intersection
- I press a special key; then, when the pixel at the center of the screen changes, simulate a mouse click to "fire"
This was fun for maybe 1-2h, but the fun was more about the success of the project (from an idea to a working cheat) than getting some free kills while playing. IIRC there is an episode of the Darknet Diaries podcast about this.
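The core loop reduces to watching a single pixel. A simulation of that logic follows; there is no screen capture or input injection here, the pixel values are just a list, so this is only a model of the idea, not a working cheat:

```python
# Simulation of the pixel-trigger logic described above: arm on a key
# press, then "fire" whenever the centre pixel changes colour (meaning
# something moved into the crosshair). Real capture/injection APIs are
# deliberately replaced with plain data.
def trigger_ticks(pixel_stream):
    """Return the tick indices at which a 'fire' would be sent."""
    fires = []
    previous = pixel_stream[0]
    for tick, pixel in enumerate(pixel_stream[1:], start=1):
        if pixel != previous:   # colour changed at the crosshair
            fires.append(tick)
        previous = pixel
    return fires
```

This is also why such cheats need no game memory access at all, which is the same property that makes the modern screen-capture cheats discussed elsewhere in the thread so hard to detect.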
Something like 10% of people are just maximally assholes, no justification, no reason, no rationale, they genuinely think everybody else exists only for their benefit.
This works great until you realize you're punishing innocent players because of CGNAT and IP addresses getting rotated. Cheaters usually know how to get their router to request a new IP address. That IP address then gets assigned to someone else later.
By using the IP as the ban ID they created a system that constantly and regularly banned completely innocent Steam IDs, thinking they were somehow linked whenever a new Steam ID used a banned IP, which is nonsense. They just did not notice because the banned gamers did not complain.
So if the same connection (plug in the wall) can end up with IPs from different blocks, well, trying to do anything sensible with this is too complicated.
I remember this being hilarious when idiots would ip ban me back on the IRC days: "oh no, I have to press the reconnect button!"
Which seems to have been best practice for IPv4 and is still best practice with IPv6:
https://www.ripe.net/publications/docs/ripe-690/#5--end-user...
However, one can pretty easily buy a wholesale account if and when that happens and skip the time-money sink.
Price will never stop them without gutting your legitimate market by being way too expensive.
Fixed
For a time, I would buy keys for CS:GO and different Steam accounts and use a subscription-based cheat provider to give me ESP/chams on screen. I knew that Overwatch/admins would be seeing the demos as the accounts were new. Starting from unranked meant I would be under scrutiny already, so I adjusted my playstyle.
I learned not to linger around looking at walls. People's movement patterns and decision making eventually became predictable as I reviewed demos or learned in the middle of a match how players have habits and abused that information. I was able to determine when to throw a round away to avoid suspicion and deliberately ensured I had a string of 2/3 bad games every so often so my K/D wasn't insane. I never used any aim assists, spinbots etc., and I always, always communicated with my team through ingame VOIP (not giving cheat calls) and maintained a legit facade.
I went undetected for nearly 2 years and sold hundreds of CS accounts successfully, making a tidy profit doing it. It's another part of the gaming industry that brings in money, and it will never go away.
I like to think of it as an online drug war, however insensitive that may seem.
If I merely change the mac address in the device connected to my cable modem, I get a new IP, every time. Combined with the fact that the game is free, so you can easily make new steam accounts.
What did I miss?
summary: guy found that the IP and steam ids can be rotated with low cost, so he used the in-game web browser to set a persistent cookie (on that installation of the game), so once cheaters get banned, rotate their IP/steam id, they'll be banned until they clear the app's data.
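The server side of that trick is essentially one long-lived Set-Cookie handed to the in-game browser. A sketch in Python; the cookie name is invented:

```python
# Sketch of the IdentityLogger-style trick: the MOTD page hands the
# in-game browser a long-lived identifier cookie, and every later visit
# echoes it back regardless of the player's current IP or Steam ID.
# "player_tag" is a made-up name, not from the actual plugin.
import uuid
from http.cookies import SimpleCookie

TEN_YEARS = 10 * 365 * 24 * 60 * 60  # seconds

def issue_cookie():
    """Build the Set-Cookie payload for a first-time visitor."""
    cookie = SimpleCookie()
    cookie["player_tag"] = uuid.uuid4().hex
    cookie["player_tag"]["max-age"] = TEN_YEARS
    cookie["player_tag"]["path"] = "/"
    return cookie

def read_tag(cookie_header):
    """Extract the identifier from a returning visitor's Cookie header."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    morsel = cookie.get("player_tag")
    return morsel.value if morsel else None
```

The ban list then keys off the returned tag instead of the IP or Steam ID, which is exactly what survives the cheap rotations, at least until the player clears the app's data.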
Which sounds trivially easy...
Interested to hear thoughts on this level of both cheating and detecting cheats
Valve worked on it for a little while, patching bugs as they popped up (notoriously slowly, I might add). Then in August 2017, an exploit in which server operators could execute JavaScript on players that joined their servers started to spread and was maliciously abused by bad actors. For example, some server operators used their player bases' residential IP addresses to sign up to gambling websites so they got kickbacks. Others simply tried to hijack Steam accounts or sell rare Steam virtual items on the Steam marketplace to themselves.
After Valve patched the above exploit, some smaller bugs popped up in the following weeks and 2 months later in October, Valve completely binned the VGUI browser in CSGO. They had enough! This broke a lot of plugins like IdentityLogger and music players that would play music in the background as you played the game. But at least the attack vector was removed.
We do behavioural analysis on top of various fingerprinting for bot detection - some people are trying really hard to ruin the internet!
I suspect a sufficiently advanced server side behaviour analysis could do a pretty good job discovering cheaters.
Really? I would expect that a dedicated cheater would reinstall Windows (or reload from a snapshot) every time they are caught.
This violates GDPR, no?
Edit: It sounds like this took place before GDPR was being enforced.
Fraud prevention is listed as an example of a "legitimate interest."
So no, by my layman's interpretation, they would not have been bound by GDPR to notify the user of cookies or other fingerprinting used solely for anti-cheat. They'd run into trouble if they use that same ID for marketing/advertising without consent, though.
As far as I'm aware, you can get away with disclosing the fact that you are tracking "unique identifiers for the purpose of anti-cheating" in the terms and conditions, without explicitly explaining the technical details that it's a cookie.
Also, this is a server covering the Australia/New Zealand region, so it doesn't have to worry about GDPR compliance.
A person can request deletion of their data at any time, and can also request a copy of all the personal data collected.
It's rather hard to make any online agreement that's more lax than the law in the EU (and call it ToS)
I saw a consent form that had 72 optional, 21 “legitimate interest” cookies.
GFB
Unfortunately it is lacking some teeth because normally opting out of all cookies should be as easy and straightforward as opting in to all cookies, but I've seen quite a few forms that hide 'reject all' behind a 'more info' button type of thing. Maybe I could file a complaint about that, I should look into it.
The UK DPA (basically a fork of GDPR) has this to say [1]: "the following purposes do constitute a legitimate interest [...] fraud prevention; ensuring network and information security; indicating possible criminal acts".
Under the Computer Misuse Act 1990 [2], there's a possible reading under which "hacking" to cheat (even if someone else does the hacking and you just install the program) could actually be a crime.
[1] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-re...
That said, eyeballing the chart in the article you can see an enormous ban wave that happens when the system is turned on, but afterwards the total level of cheating quickly returns to roughly where it started. If there were long term impacts it was only in the reduction of staff hours needed to review game footage to determine if a player is cheating.
The idea really is that you can identify a single device time after time. So even if there is a slight change in something easily changed, like software, that is not good enough.
Not that fingerprints should lead straight to bans, but maybe at least heightened awareness.
Cheaters ruin the fun for everyone including themselves. Admins need to provide a personal cost deterrent for problem users, and randomly hang the game for people using code mods.
Let the ban hammer fall =3
Indeed, duplicate salted-hash signatures on multiple active users mean shills, and immediate bans are issued for both accounts tainted by the blacklist.
The trick is to randomize a mix of easy and difficult signature checks daily.
i.e. the exploit writers will have to spend time cleaning up bugs, redistributing the patches, and dealing with angry people that have a GPU that is on the blacklist for a game. The more hardware details collected, the more difficult it is to prevent tripping the admin alert.
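A sketch of the salted-signature idea; the hardware field names are illustrative, not a real collector:

```python
# Sketch of the salted hardware-signature idea: hash a mix of collected
# hardware details with a server-side salt, then flag accounts sharing
# a signature. The field names ("gpu", "disk") are invented examples.
import hashlib
from collections import defaultdict

def signature(hw_details, salt):
    """Salted, order-independent hash of collected hardware details."""
    blob = salt + "|" + "|".join(f"{k}={hw_details[k]}" for k in sorted(hw_details))
    return hashlib.sha256(blob.encode()).hexdigest()

def duplicate_accounts(accounts, salt):
    """Group account names by signature; groups >1 are likely the same box."""
    seen = defaultdict(list)
    for name, hw in accounts.items():
        seen[signature(hw, salt)].append(name)
    return [sorted(group) for group in seen.values() if len(group) > 1]
```

Rotating the salt (or which fields go into the blob) daily is the "mix of easy and difficult signature checks" mentioned above: yesterday's spoofed values stop matching today's checks.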
This is already done by some studios... "Play Stupid Games, Win Stupid Prizes" as they say... =3
He took that back. A very clever nod to In Bruges. Well played sir.
If only I had a dollar for every time I was blocked somewhere just because somebody else had used the IP just before me to do bad stuff. Worst offenders out there never clear the list, even. In a world of a shortage of IPv4 that approach is just madness.
It's also the opposite of effective. More like bogus effectiveness. Only hurts innocent bystanders.
https://commission.europa.eu/resources-partners/europa-web-g...
In this case, this one might fit:
> User centric security cookies, used to detect authentication abuses and linked to the functionality explicitly requested by the user, for a limited persistent duration
> for a limited persistent duration
FTA:
> However, the VGUI browser had no issues saving cookies with expiry dates exceeding 10+ years!
So no, it doesn't even qualify.
I don't know how these would balance each other out legally, but it's fun to think about
Preventing cheaters is similar. And this is blatantly a tracking cookie.
Also, all of this was in 2017. Anyone doing it in 2024 should indeed run it past a lawyer.
This community is Australian & New Zealand based; we had 0 European players or visitors. And as @unsnap_biceps mentioned, this predated GDPR compliance.
You are right though that you wouldn't be able to do this in Europe today, because asking for fingerprinting consent defeats the purpose: the hacker would likely quickly figure out what is happening and circumvent it.
> But cheaters are cunts. They're cunts now, they've always been cunts.
> And the only thing that's going to change is they're going to become bigger cunts.
> Maybe have some more cunt kids.
That statement really shows how big of a dick you are; come on man, it's just a game. Without learning game cheats and writing trojans and botnets since I was 14 (although I'm kind of clean now), I wouldn't have mastered C++, C# and Java together and later gotten deep into computer science (and cybersecurity to some extent).
What I meant was, cheating can be a good learning experience to programming for a lot of kids, because they get immediate feedback and rewards. At least that's what I see it as.
Ah, isn't that something politicians and countries around the world always do? And you think game cheating is a bigger problem?