If I want to be more of a defensive programmer, where do I start?

12    31 Jul 2015 01:24 by u/ddrt

I'm interested in cyber security and I would like to know where to start as far as defensive programming goes. What do I need to know? What should I be actively learning, practicing, and reading? What should I look out for that could potentially take down my employer's systems (or my own)? Basically, how do I defend against hacking, scamming, and digital abuse?

Thank you for your time. (If this is the wrong place for this discussion please let me know).

8 comments

8
  • Avoid unsafe languages like C and C++, and you dodge the entire class of buffer overflow errors
  • Never trust input, even if you're digging up old input from a database
  • Watch out for timing attacks - use constant-time string comparison for hashes (see the sketch after this list)
  • Rate limit online services
  • Always use memory-heavy hashing functions with unique salts for passwords
  • Watch out for race conditions
  • Use whitelists as opposed to blacklists

First few things that come to mind.
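
On the timing-attack point, a minimal Java sketch: MessageDigest.isEqual compares the full contents instead of bailing out at the first mismatching byte. The class and method names, and the placeholder values, are just for illustration:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class ConstantTimeCompare {
        // Compare two hash/MAC values without leaking, via timing, how many
        // leading bytes happened to match.
        static boolean hashesMatch(byte[] expected, byte[] provided) {
            return MessageDigest.isEqual(expected, provided);
        }

        public static void main(String[] args) {
            byte[] a = "placeholder-digest".getBytes(StandardCharsets.UTF_8);
            byte[] b = "placeholder-digest".getBytes(StandardCharsets.UTF_8);
            System.out.println(hashesMatch(a, b)); // true
        }
    }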

6

Assorted stuff that comes to mind after a few years in industry:

  1. Be paranoid

  2. Don't read or store data that you don't need; if possible, try to convince the designers to avoid storing sensitive data when your app can work fine without it

  3. Read stuff from the netsec sub and OWASP. If you write web applications/services, scan them with OWASP ZAP (the Zed Attack Proxy)

  4. Know what's happening underneath. Don't trust libraries blindly - try to at least learn their general logic. Take the famous bash "shellshock" bug, for example - some people didn't even know they could be affected, because they didn't know their software was calling the default system shell to execute things.

  5. Don't try to invent your own encryption/hashing/password hashing. Also try to stay up to date on what is currently considered the most secure choice - for example, today I would stick to TLS for transport encryption, SHA-3 for hashing/HMAC, and bcrypt for passwords (see the bcrypt sketch after this list)

  6. Use prepared statements when talking to databases. If your tools don't allow that, change tools - in 2015 that's not acceptable (see the parameterized-query sketch after this list)

  7. With frontend services/sites, filter outgoing data; don't try to protect against HTML/script injection at the input layer, as you'll usually fail and may end up double-escaping things (which can itself introduce new dangers)

  8. Handle your encodings correctly. If, for example, your filter assumes the input is UTF-8 while your logic treats it as ISO-8859-1, the filter may ignore dangerous characters and let some stray " characters slip through.

  9. Ask others to review your code if you're not comfortable with it. Once you're confident, offer workmates a beer for finding a security hole in it

  10. If you filter stuff, know that 127.211.112.12 is also localhost (the whole 127.0.0.0/8 block is)

  11. Also know that http://3331396748/ is a perfectly fine URL (it's an IPv4 address written as a single decimal number)

  12. So is http://voat.co.

  13. If you use C++, don't write it as if it were Java - objects don't have to be allocated with new. Leave them on the stack unless you really need dynamic allocation

  14. In C, watch all your string operations; use Valgrind and maybe some kind of fuzzer - many great bugs could have been avoided that way

  15. If you're caching stuff, make sure the cache is safe (for example, if you display a user's private messages, don't cache the rendered template in /tmp/). Preferably don't cache it at all

  16. In Java, use char[] or some secure class for sensitive data - a String will stay in memory until your program dies, or possibly longer, and you can't overwrite a String with zeroes the way you can a char[] (see the sketch after this list)

  17. Never ever trust HTTP headers, especially Referer

  18. In webapps, always use CSRF tokens with your forms, even if you don't think they're needed (see the token sketch after this list)

  19. Also in webapps - don't perform any data-modifying operations via GET links, especially predictable ones. Anyone can embed an image like <img src="http://yoursite.com/user/grantPermission?perm=admin&user=evilHacker" /> in their page and lure one of your admins to it

  20. Avoid a security-by-obscurity approach, but don't make it too easy for attackers either. Hiding some obvious stuff will deter script kiddies

  21. Secure your error messages; make sure that a crashing webserver/webapp doesn't spit out whole exception traces

  22. If you use temp files, watch your permissions; make sure you're writing to the same file you created, and use mkstemp or an equivalent

  23. If you write a suid program/daemon, do the suid-requiring stuff and, as soon as you're done, drop your privileges

  24. Don't use regexes for HTML parsing

  25. Prefer whitelists over blacklists when validating stuff

  26. Make sure that the default configuration of your program is secure

  27. Actually try to make the insecure configuration difficult and obvious (it's more a safety feature than a security one, but I like hdparm's approach: if you want to do something dangerous you have to add an extra parameter, e.g. hdparm -J 300 --please-destroy-my-drive /dev/sdX)

  28. Minimize the attack surface - the fewer inputs/types of input/methods/services/ports you expose, the fewer attack combinations are possible

  29. Use secure random generators; /dev/urandom on Linux sucks, /dev/random is the minimum (and it's not perfect either)

  30. Don't log sensitive data... I know it sucks, but know that once in a while someone will end up running with DEBUG log level in production
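
For point 5, a minimal password-hashing sketch in Java. It assumes the jBCrypt library (org.mindrot.jbcrypt) is on the classpath, and the class/method names are just illustrative:

    import org.mindrot.jbcrypt.BCrypt;

    public class PasswordStorage {
        // Hash with a per-password random salt; the cost factor (12 here)
        // controls how expensive every guess is for an attacker.
        static String hashPassword(String plaintext) {
            return BCrypt.hashpw(plaintext, BCrypt.gensalt(12));
        }

        // Verify a login attempt against the stored hash.
        static boolean verify(String candidate, String storedHash) {
            return BCrypt.checkpw(candidate, storedHash);
        }
    }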
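
For point 6, a sketch of a parameterized query with plain JDBC; the table and column names are made up for the example:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {
        // The user-supplied value is bound as a parameter, never spliced into
        // the SQL text, so it can't change the structure of the query.
        static String findEmail(Connection conn, String username) throws SQLException {
            String sql = "SELECT email FROM users WHERE username = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, username);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }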
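
For point 16, the usual pattern is to wipe the char[] in a finally block once you're done with it; the method here is only a placeholder:

    import java.util.Arrays;

    public class SecretHandling {
        static void useSecret(char[] secret) {
            try {
                // ... derive a key, authenticate, etc. ...
            } finally {
                // Overwrite the secret as soon as it's no longer needed;
                // a String would sit on the heap until the GC gets to it.
                Arrays.fill(secret, '\0');
            }
        }
    }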
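
For point 18 (and the secure-randomness point in 29), a sketch of generating an unpredictable CSRF token with java.security.SecureRandom; storing it in the session and checking it on every state-changing request is left out:

    import java.security.SecureRandom;
    import java.util.Base64;

    public class CsrfTokens {
        private static final SecureRandom RNG = new SecureRandom();

        // 32 random bytes, URL-safe base64 encoded, to embed in forms
        // and compare against the copy kept in the user's session.
        static String newToken() {
            byte[] bytes = new byte[32];
            RNG.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }
    }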

3

I think you were looking for /v/netsec

3

That's a large, complex topic, but since this is /v/programming, here are a couple of tips from a coding perspective:

  • Even if you don't use it, pay attention to what the OpenBSD project is doing. New techniques are born there and are first used there before adoption by other operating systems (W^X, ASLR, reallocarray and friends, entropy system call). The only problem is that they frequently pick awful names!

  • Learn how to exploit common software vulnerabilities (return-oriented programming, stack smashing, integer overflows, buffer overflows, etc.). You'll learn what sorts of things to watch for in your own code. Imagine what sorts of inputs would put your program in a bad state, and protect it against those inputs.

2

Security is one of the primary reasons I am obsessed with Haskell and Rust, as well as proof-oriented systems like Idris, Coq, and Agda. Languages like these allow for strong security guarantees at both the runtime and logic levels, and they let much of your system's security be verified at compile time by the type system.

I feel like the software industry as a whole does not take security seriously enough; in a saner world, exploits like buffer overflows would already be extinct thanks to smarter languages and compilers. But inertia is what it is.

1

In my opinion, the best way to learn how to write safe code is by learning how attacks work. Build a server and hack the crap out of it. Scan it, compromise it, patch it, and review what the patch did. Learn from others' mistakes! When you see how easy it is to use SQL injection or write a buffer overflow attack, you gain a level of appreciation for the value of "never trust input".
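
To make the SQL injection point concrete, here's a deliberately vulnerable Java sketch (table and column names made up): an input like  ' OR '1'='1  for the password turns the WHERE clause into a tautology and matches every row. The fix is a parameterized query, as in the PreparedStatement sketch in the long comment above.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class VulnerableLogin {
        // DO NOT copy this: user input becomes part of the SQL text itself.
        static boolean login(Connection conn, String user, String pass) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE name = '" + user
                       + "' AND password = '" + pass + "'";
            try (Statement stmt = conn.createStatement()) {
                return stmt.executeQuery(sql).next();
            }
        }
    }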

0

Most of the responses here paper over the problem, or argue for being more vigilant or for using tools that are vigilant for you. Vigilance is not a complete approach to security, since a single error can lose everything.

You should change your programming paradigm entirely and establish security boundaries between separate areas of concern, so that even if one area is compromised, the others are not. Generally, security boundaries are established at the programming level by breaking programs up into separately and differently privileged chunks of code, processes, user accounts, virtual machines, and physical machines. You can even extend this approach to the physical world, giving different people, businesses, or other physical entities their own privileges, accounts, passwords, and keys.