Is it possible to secure software so that limitations can't be hacked?
-
Everything is hackable; it just depends on how much time someone is willing to spend investigating it...
-
Nothing can be 100% secure, especially if you let people run software on their own computers. They can use any kind of debugging tool to analyze it while it runs. Even companies like Microsoft, which completely control their toolchain, can't stop the existence of pirated copies of Windows. Your application probably attracts a lot less interest from hackers than Windows does, so pretty minimal effort will keep most people from tampering with it.
Keep the limit out of a config file. If your app ships with a limits_settings.json containing a line like '"SuperImportantLimit": 4' in plain text, then just about any ordinary person who ever opens the file in a text editor can figure out how to hack it.
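For concreteness, here's a minimal sketch of that naive setup, using the file and key name from above. The crude string matching in place of a real JSON parser is my own shortcut; the point is that whatever the user edits the file to say is what the app enforces:

```cpp
#include <fstream>
#include <string>

// Naive approach: the limit lives in a plain-text file next to the
// executable. Change the 4 to 400 in a text editor and the app obeys.
int readLimit() {
    std::ifstream in("limits_settings.json");
    std::string line;
    while (std::getline(in, line)) {
        auto pos = line.find("\"SuperImportantLimit\":");
        if (pos != std::string::npos)
            return std::stoi(line.substr(pos + 22));  // 22 = quoted key + colon
    }
    return 4;  // fallback if the file or key is missing
}
```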
Next most obvious step would be to move that settings file into a resource that gets compiled into the application binary, rather than shipping it alongside. That's better, but it's still pretty easy to run 'strings' on the binary, see that SuperImportantLimit is in there, and break out a hex editor to change the value stored in the compiled resource.
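Something like the sketch below, where a string constant stands in for a real compiled-in resource section. The file no longer sits in the install directory, but 'strings my_app | grep SuperImportant' still finds it, and a hex editor can change the 4 in place:

```cpp
#include <string>

// The same settings, compiled into the binary instead of shipped beside it.
// A stand-in for a real resource; still plainly visible to `strings`.
static const char kEmbeddedLimits[] = "{ \"SuperImportantLimit\": 4 }";

int readLimit() {
    const std::string data(kEmbeddedLimits);
    auto pos = data.find("\"SuperImportantLimit\":");
    return std::stoi(data.substr(pos + 22));  // 22 = quoted key + colon
}
```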
The next most obvious step after that is to make sure the name of the thing is obscured and non-obvious. It doesn't matter how: a resource named some gibberish like HD#73DJXqq, an MD5 hash of the real name, a name XOR'd with a key. None of it is really secure against a dedicated attacker, but it means there's nothing glaringly obvious to a passing observer giving the binary a once-over.
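As a sketch of the cheapest variant, here's the key name XOR'd with a single-byte key (0x5A, picked arbitrarily for illustration). Nothing readable shows up in a 'strings' dump, though a debugger sees the decoded name the moment this function returns:

```cpp
#include <string>

// "SuperImportantLimit" with every byte XOR'd against 0x5A. The plain
// name never appears in the binary, only this gibberish.
static const unsigned char kObfuscatedName[] = {
    0x09, 0x2F, 0x2A, 0x3F, 0x28, 0x13, 0x37, 0x2A, 0x35, 0x28,
    0x2E, 0x3B, 0x34, 0x2E, 0x16, 0x33, 0x37, 0x33, 0x2E
};

std::string decodeName() {
    std::string name;
    for (unsigned char b : kObfuscatedName)
        name.push_back(static_cast<char>(b ^ 0x5A));  // undo the XOR
    return name;  // "SuperImportantLimit"
}
```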
There are lots of more sophisticated ways to do app security, and you can spend a ton of money on third-party solutions that will be more secure. But the reality is that even the most sophisticated solutions still aren't very secure against a sophisticated, dedicated attacker with some free time, a VM, and a debugger. Embedded + obfuscated gets you about 90% of the benefit of a complicated solution by deterring the casual tweaker, for about 10% of the effort.
If your app were so valuable, and faced so many attackers, that this logic didn't hold, you'd probably be asking your question somewhere more specific than a general-topics forum, like your department's internal Security Architects.
-
"Security" through obfuscation is as useful as a fish has need for a bicycle. A half-decent c/c++ programmer should already know enough to dissasemble the code, run it through the debugger, break at the system calls (like reading a resource file, or opening a window) and with minimal effort trace it back to that particular conditional jump. Changing the conditional jump to an unconditional then is a matter of seconds.
Even the more sophisticated systems that do in-memory loading and decoding of binaries can be broken by a decent programmer in a couple of days to a week. Don't forget the code is already there; the only difference from what you wrote is that it's assembly instead of source. The commercial protection products and their versions are tracked by crackers, which is why the latest software is often cracked basically the day after release.
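For the curious, here's a toy version of that in-memory trick, assuming x86-64 Linux: six bytes of machine code ("mov eax, 4; ret") XOR'd with 0x5A, decoded into a writable page, flipped to executable, and called. A cracker just breaks after the decode loop and dumps the plaintext code, which is exactly why such schemes buy days, not safety:

```cpp
#include <sys/mman.h>

// "mov eax, 4; ret" (B8 04 00 00 00 C3), each byte XOR'd with 0x5A.
static const unsigned char kEncrypted[] = {0xE2, 0x5E, 0x5A, 0x5A, 0x5A, 0x99};

int main() {
    // Error handling omitted; this is a sketch, not production code.
    void* page = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    auto* code = static_cast<unsigned char*>(page);
    for (unsigned i = 0; i < sizeof(kEncrypted); ++i)
        code[i] = kEncrypted[i] ^ 0x5A;              // decode into RAM
    mprotect(page, 4096, PROT_READ | PROT_EXEC);     // now executable
    return reinterpret_cast<int (*)()>(page)();      // returns 4
}
```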