The fact that companies think client-side anti-cheat is a good idea is so insane. Maybe try designing your server better instead of blaming the operating system for not letting you control your users.
Aside from better server-side detection, which I agree is severely underdeveloped, I’d say the next big step should be a much bigger reliance on reputation-based matchmaking, ideally across games. It would need to be built in a way that’s not abusable by devs or trolls, and it should be as privacy-respecting as possible (as in, not having to validate with your ID, South Korea-style), which isn’t an easy task. Working properly, however, it would keep honest players from ever seeing a cheater, with no client-side anti-cheat required at all, which would be nice.
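To make the matchmaking idea concrete, here’s a rough sketch of reputation-bucketed lobby building. All names and thresholds are invented for illustration; how the reputation score itself is computed and kept abuse-proof is the actual hard part.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Player:
    player_id: str
    reputation: float  # hypothetical score, 0.0 (suspect) to 1.0 (long clean history)

def bucket_for(reputation: float) -> str:
    # Invented tiers; a real system would tune these and keep them opaque.
    if reputation < 0.3:
        return "low_trust"   # suspected cheaters mostly see each other
    if reputation < 0.7:
        return "default"
    return "high_trust"

def build_lobbies(queue: list[Player], lobby_size: int = 60) -> list[list[Player]]:
    """Group the queue by trust tier, then cut each tier into lobbies."""
    buckets: dict[str, list[Player]] = defaultdict(list)
    for p in queue:
        buckets[bucket_for(p.reputation)].append(p)
    lobbies = []
    for players in buckets.values():
        for i in range(0, len(players), lobby_size):
            lobbies.append(players[i:i + lobby_size])
    return lobbies
```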
Genuinely curious, because this isn’t my area of expertise, but how do you design a server to be “better” if it has to trust data from a remote client?
For example, if the client is compromised - because, as they’ve said, they have no way to “attest” that the kernel is not compromised - how would the server know any better?
If my Apex client tells the server I got a perfect headshot, how would the server know I didn’t fake the data? Is there a real answer to this problem or are we just wishing they come up with an impossible solution?
My general understanding is that EA is 100% correct. On the other hand, maybe they should just limit matches to Linux <-> Linux so people can at least still enjoy the game (I’m moving to Linux soon, so I’ll basically no longer be able to play the game, which, as my primary gaming addiction, is a huge loss I’m willing to take).
There are compromises EA could make, but I think the Linux market share is just too small for them to care to spend any resources - even though they’re raking in billions (~$3.4 billion) and could spare a few to find a good middle ground. Capitalism at its finest.
How do they know you haven’t trained an AI to get headshots? Cheats often break the bounds of what is realistic in games, whether it’s seeing through walls (the server shouldn’t be sending enemy positions that aren’t in view), going too fast (the server should speed-check player positions), getting items you shouldn’t have (the server should do inventory sanity checks), etc. Other than that, look for signs of automated movement and things unrealistically precise for a human to do. Eventually the cheating will just be moved to a separate air-gapped computer running AI on the video feed. Client-side anti-cheat is an invasive, broken, and malicious concept.
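For the movement example, the kind of server-side sanity check being described is straightforward; a toy version might look like this (the speed limit and tolerance values are made up):

```python
import math

MAX_SPEED = 7.5    # hypothetical maximum legal speed, in units per second
TOLERANCE = 1.25   # slack for latency jitter and server-initiated teleports

def validate_move(prev_pos, new_pos, dt):
    """Reject a client-reported position that implies impossible speed."""
    if dt <= 0:
        return prev_pos  # ignore nonsense or replayed timestamps
    if math.dist(prev_pos, new_pos) / dt > MAX_SPEED * TOLERANCE:
        return prev_pos  # snap the player back instead of trusting the claim
    return new_pos
```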
Just tracking trended data in general would be sufficient to defeat a LARGE number of common cheats. It’s one of the very few use cases where “AI” might actually work in a positive way. But that puts the burden on the developers and server hosts, and it’s much easier to just burden the players directly instead.
I’m fairly confident that developers already do this. When the “ban hammer” comes down it is probably after analysing data trends for players.
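As a sketch of the “trended data” idea: keep per-player aggregates and surface statistical outliers for review rather than auto-banning. The metric and threshold here are purely illustrative.

```python
from statistics import mean, stdev

def flag_outliers(headshot_rates: dict[str, float], z_threshold: float = 4.0) -> list[str]:
    """Return players whose headshot rate is an extreme outlier vs. the population.

    These are candidates for human review only; as noted elsewhere in the
    thread, a very good human and a subtle aimbot can look statistically alike.
    """
    rates = list(headshot_rates.values())
    if len(rates) < 2:
        return []
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [pid for pid, r in headshot_rates.items() if (r - mu) / sigma > z_threshold]
```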
Servers often don’t send player data that is outside the player’s immediate area, but they have to for enemies that are nearby. If an enemy walks around the corner and your client didn’t know about them, then you’ll be waiting out your ping time before they even render. I.e. they walk around the corner and have already shot you, then you see them suddenly appear a full player’s width out from the corner, and you die. Peeker’s advantage, amplified.
Same deal with footstep sounds, bullet tracers, a player’s shadow, etc. Your client needs to know where all of this is coming from, and it can’t do that if it doesn’t know that the enemy exists and where they are. And that is the buffer zone hackers derive wallhacks from.
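Which is why interest management has to err on the side of over-sending. A naive distance-based version might look like this (real engines use visibility/PVS data rather than a plain radius, and the numbers are invented):

```python
import math

VIEW_RADIUS = 50.0   # roughly what the player could see or hear right now
BUFFER = 15.0        # margin so enemies don't pop in mid-peek at high ping

def relevant_entities(player_pos, entities):
    """Entities the server replicates to this client.

    The cutoff hides most of the map from a wallhack, but everything inside
    VIEW_RADIUS + BUFFER is still leaked to the client by necessity - that is
    the buffer zone the comment above is talking about.
    """
    cutoff = VIEW_RADIUS + BUFFER
    return [e for e in entities if math.dist(player_pos, e["pos"]) <= cutoff]
```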
So, basically, the overwhelming majority of servers have done all of those things since the late ’90s. Hacks tend to work within those bounds. The most common, impactful, and hard-to-detect cheats are based on providing perfect mechanical inputs, aka aim hacks. Nothing about limiting info from the server can prevent that, unless you also want the legitimate player to be unable to see their enemies.
Well thank god this computer genius is on the scene. Don’t worry, EA can solve everything as soon as they hear about these great and very original ideas.
Because the actual calculations aren’t done by the client but by the server - or at least they should be.
Right, but the server is still receiving data from the client. If the client sends a plausible headshot, even though it was actually a miss, how would the server know? You still need client-side “police”, aka anti-cheat software, to mitigate a significant class of software-based hacks.
Now that I’ve typed it out, cops are actually a great analogy to anti-cheat software. Cops play the exact same role. Nobody wants them around until a crime has been committed. Cops/anti-cheat software don’t catch everyone, but the threat of being caught mitigates some crime/hacks, and for the cases where criminals/hackers are caught, society/gamers are better off for it.
In closing: ACAB - I completely understand why we don’t want anti-cheat software on our computers, but there really is no better way; or if there is, I still haven’t heard it.
Any game that works like that - where the server takes the client’s word for what happened - is fundamentally flawed, and AC is nothing but a cheap band-aid at best.
The client should be doing nothing but rendering and sending player actions to the server, and the server should be managing the game state as well as running its checks on those actions. And when a client sends actions that are weird and don’t line up with the server’s internal game state, it should kick that client immediately, always deferring to what ITS game state is telling it, not the client’s.
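A minimal sketch of that model: the client reports only “I fired in this direction at this tick”, and the server decides whether anything was hit using its own record of where everyone was. Hit-scan only, lag compensation hand-waved, and world.player_state, world.raycast, and world.apply_damage are placeholder APIs, not anything from a real engine.

```python
from dataclasses import dataclass

@dataclass
class ShotInput:
    shooter_id: str
    direction: tuple[float, float, float]  # where the client says it aimed
    tick: int                              # when the client says it fired

def resolve_shot(shot: ShotInput, world):
    """Server-authoritative hit registration.

    The client never claims "I hit X". The server rewinds to its own state
    at the reported tick and traces the shot itself, so a hacked client can
    lie about its aim but not about the outcome.
    """
    shooter = world.player_state(shot.shooter_id, shot.tick)  # server's state, not the client's
    hit = world.raycast(shooter.eye_position, shot.direction, shot.tick)
    if hit is not None:
        world.apply_damage(hit.player_id, weapon=shooter.weapon)
    return hit
```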
What if my hacked client sends actions that are not weird - completely plausible - but didn’t actually happen and were faked instead? E.g. my shot would have missed, but because I wasn’t completely off, my client sends data saying I hit them dead center. How would the server know it wasn’t me?
Your core premise is broken. Relying on trusting anything from a remote client cannot possibly result in a fair game.
Too bad the server at least needs the player input data.
Yes, people can still cheat with a camera and manipulating inputs. There will never be a way around that.
But that’s entirely unchanged by adding malware, which, even if it could theoretically work, should be a literal crime with serious jail time attached. Client-side validation is never security and cannot resemble security.
There are ways to detect and stop that, but they can and should happen on the server, not on the client.
Only if you’re OK banning real people.
There are lots of options such that you can tune your false positive/negative rate. 🤷‍♂️ Tons of ways you can structure this depending on your game’s tech.
No options that are legitimate or evidence-based in any way.
If a computer has the exact same input and output tools as a human, you cannot possibly do better than guessing. It is a literal certainty that, if you try, you will ban legitimate players who did nothing wrong other than being too good, and that is unconditionally not acceptable.