
Foundations of Network and Computer Security




  1. Foundations of Network and Computer Security John Black Lecture #32 Nov 18th 2009 CSCI 6268/TLEN 5550, Fall 2009

  2. StackGuard • Idea (1996): • Change the compiler to insert a “canary” onto the stack just before the return address • The canary is a random value assigned by the compiler that the attacker cannot predict • If the canary is clobbered, we assume the return address was altered and we terminate the program • Built into Windows 2003 Server and provided by Visual C++ .NET • Use the /GS flag; on by default (slight performance hit)
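The check can be sketched as a toy Python simulation. The frame layout, the 16-byte buffer size, and the function names are all invented for illustration; a real compiler emits this logic in the function prologue and epilogue:

```python
import os

# Toy simulation of a StackGuard-style canary; the frame layout and sizes
# here are invented for illustration, not a real compiler's layout.
CANARY = os.urandom(4)  # random value the attacker cannot predict

def make_frame(ret_addr: bytes) -> dict:
    return {"buffer": bytearray(16), "canary": CANARY, "ret": ret_addr}

def unsafe_copy(frame: dict, data: bytes) -> None:
    # An unchecked copy: bytes past the buffer spill into canary, then ret.
    frame["buffer"][:] = data[:16].ljust(16, b"\x00")
    if len(data) > 16:
        frame["canary"] = bytes(data[16:20])
    if len(data) > 20:
        frame["ret"] = bytes(data[20:24])

def function_epilogue(frame: dict) -> bytes:
    # The compiler-inserted check: if the canary was clobbered, assume the
    # return address was altered and terminate the program.
    if frame["canary"] != CANARY:
        raise RuntimeError("stack smashing detected")
    return frame["ret"]

frame = make_frame(b"\xde\xad\xbe\xef")
unsafe_copy(frame, b"A" * 24)  # overflow clobbers the canary before ret
try:
    function_epilogue(frame)
    detected = False
except RuntimeError:
    detected = True
```

Because the overflow must march through the canary to reach the return address, the epilogue check fires before the corrupted address is ever used.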

  3. Sample Stack with Canary • [Diagram: stack frame layout showing buffer, then canary (4 bytes), then sfp (4 bytes), then ret, the 4-byte return address]

  4. Canaries can be Defeated • A nice idea, but depending on the code near a buffer overflow, they can be defeated • Example: if a pointer (int *a) is a local and we copy another local (int *b) to it somewhere in the function, we can still overwrite the return address • Not too far-fetched, since we commonly copy pointers around

  5. Avoiding Canaries • [Diagram: the buffer is filled with shellcode (the S’s); the overflow continues into the locals, setting int *a to the address of ret, int *b to the address of i, and int i to the address of buffer; canary, sfp, and ret are untouched] • First, overflow the buffer as shown above. Then, when the function executes *a = *b, it copies the shellcode’s start address (the address of buffer) into ret without ever touching the canary

  6. Moral: If Overruns Exist, High Probability of an Exploit • There have been plenty of documented buffer overruns which were deemed unexploitable • But plenty of them are exploitable, even when using canaries • Canaries are a hack, and of limited use

  7. Non-Executing Stacks and Return to LibC • Suppose the stack is marked as non-executable • Some hardware can enforce bounded regions for executable code • Generic Linux does not enforce this (all our example programs for stack overruns work just fine), but there is a Linux version which supports it • It has to do all kinds of special stuff to accommodate programs which need an executable stack • Linux uses executable stacks for signal handling • Some functional languages use an executable stack for dynamic code generation • The special version of Linux has to detect this and allow executable stacks for those processes

  8. Return to LibC: Getting around the Non-Executing Stack Problem • Assume we can still overwrite the stack • 1) Set return address to system() in LibC • Use address of dynamically-linked entry point • 2) Write any sfp • 3) Write address of exit() as new ret addr • 4) Write pointer to “/bin/sh” • 5) Write string “/bin/sh”

  9. Return to LibC: Stack Configuration • [Diagram: buffer holds garbage (unimportant), sfp holds anything, ret holds the address of system(); the next word holds the address of exit() (system’s return address), then a pointer to the string s, then the string s = “/bin/sh” itself] • First, overflow the buffer as shown above. When the function returns, we go to system(“/bin/sh”), which spawns a shell
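Building such a payload can be sketched in Python. Every address below is a made-up placeholder; on a real target you would look up system(), exit(), and the string location yourself. A 32-bit little-endian stack and a 64-byte distance from buffer to sfp are also assumptions:

```python
import struct

# Return-to-libc payload sketch. All addresses are hypothetical placeholders.
SYSTEM_ADDR = 0xB7ECFFB0  # hypothetical address of system() in libc
EXIT_ADDR   = 0xB7EC60C0  # hypothetical address of exit() in libc
BINSH_ADDR  = 0xBFFFF7E4  # hypothetical address where "/bin/sh" will land
BUFFER_LEN  = 64          # assumed distance from buffer start to sfp

payload  = b"A" * BUFFER_LEN              # garbage filling the buffer
payload += struct.pack("<I", 0xDEADBEEF)  # sfp: anything
payload += struct.pack("<I", SYSTEM_ADDR) # ret: jump into system()
payload += struct.pack("<I", EXIT_ADDR)   # system()'s own return address
payload += struct.pack("<I", BINSH_ADDR)  # pointer to the argument string
payload += b"/bin/sh\x00"                 # the string itself
```

When the overflowed function returns, the CPU pops SYSTEM_ADDR into the instruction pointer; system() then sees EXIT_ADDR as its return address and BINSH_ADDR as its first argument, exactly the calling convention it expects.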

  10. Automated Source Code Analysis • Advantages: • Can be used as a development tool (pre-release tool) • Can be used long after release (legacy applications) • Method is proactive rather than reactive • Avoid vulnerabilities rather than trying to detect them at run-time • In order to conduct the analysis, we need to build a model of the program • The model will highlight features most important for security

  11. Password Crackers • Unix approach: store one-way hash of password in a public file • Since hash is one-way, there is no risk in showing the digest, right? • This assumes there are enough inputs to make exhaustive search impossible (recall IP example from the midterm) • There are enough 10-char passwords, but they are NOT equally likely to be used • HelloThere is more likely than H7%$$a3#.4 because we’re human

  12. Password Crackers (cont) • The idea is simple: try hashing all common words and scan for a matching digest • The original Unix algorithm for the hash is to iterate DES 25 times, using the password to derive the DES key • DES^25(pass, 0^64) = digest • Note: this was proved secure by noticing that it is the CBC-MAC of the 25-block all-zero message (0^64)^25 under key ‘pass’ and then appealing to known CBC-MAC results • Why is DES iterated so many times?

  13. Password Crackers (cont) • Note: Actually uses a variant of DES to defeat hardware-based approaches • Note: Modern implementations often use MD5 instead of this DES-based hash • But we can still launch a ‘dictionary attack’ • Take a large list of words, names, birthdays, and variants and hash them • If your password is in this list, it will be cracked
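The attack fits in a few lines of Python. SHA-256 stands in for the DES-based crypt or MD5 here so the sketch runs with only the standard library, and the usernames, passwords, and wordlist are invented:

```python
import hashlib

# Dictionary attack sketch; SHA-256 stands in for the real crypt/MD5 hash.
def digest(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# A toy password file: username -> stored digest
passwd_file = {
    "jones": digest("alabaster"),
    "smith": digest("wont4get"),
    "knuth": digest("H7%$$a3#.4"),  # strong: unlikely to be in any wordlist
}

wordlist = ["alabaster", "albacore", "alkaline", "wont4get", "HelloThere"]

def crack(passwd: dict, words: list) -> dict:
    # Hash every common word once, then scan for matching digests.
    table = {digest(w): w for w in words}
    return {user: table[d] for user, d in passwd.items() if d in table}

cracked = crack(passwd_file, wordlist)
# cracked == {'jones': 'alabaster', 'smith': 'wont4get'}
```

Note that every word is hashed exactly once, yet the resulting table cracks every account whose password appears in the wordlist; only the randomly generated password survives.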

  14. Password Crackers: example • Precomputed dictionary (word → digest): alabaster → xf5yh@ae1, albacore → &trh23Gfhad, alkaline → Hj68aan4%41, wont4get → 7%^^1j2labdGH • Password file /etc/passwd: jones:72hadGKHHA%, smith:HWjh234h*@!!j!, jackl:UwuhWuhf12132^, taylor:Hj68aan4%41 (matches the digest of “alkaline”), bradt:&sdf29jhabdjajK22, knuth:ih*22882h*F@*8haa, wirth:8w92h28fh*(Hh98H, rivest:&shsdg&&hsgDGH2

  15. Making Things Harder: Salt • In reality, Unix systems always add a two-character “salt” before hashing your password • There are 4096 possible salts • One is randomly chosen, appended to your password, then the whole thing is hashed • Password file contains the digest and the salt (in the clear) • This prevents attacking all passwords in /etc/passwd in parallel
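The effect of the salt can be shown directly. As before, SHA-256 stands in for the Unix DES-based hash, and the salts and password are invented:

```python
import hashlib

# Salting sketch: the salt is stored in the clear next to the digest, but it
# forces the attacker to rebuild the dictionary once per salt value.
def salted_digest(password: str, salt: str) -> str:
    # Salt is appended to the password before hashing, as the slide describes.
    return hashlib.sha256((password + salt).encode()).hexdigest()

# The same password under two different salts yields unrelated digests,
# so one precomputed table no longer cracks every account in parallel.
entry_a = ("A6", salted_digest("HelloThere", "A6"))
entry_b = ("Zz", salted_digest("HelloThere", "Zz"))
```

An attacker who precomputed a dictionary for salt "A6" learns nothing about accounts salted with "Zz"; with 4096 possible salts, the precomputation cost multiplies accordingly.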

  16. Password Crackers: with Salt • Precomputed table for salt value A6 (word → digest): alabaster → xf5yh@ae1, albacore → &trh23Gfhad, alkaline → U8&@H**12, wont4get → 7%^^1j2labdGH • Password file /etc/passwd (each digest now ends with its two-character salt): jones:72hadGKHHA%H7, smith:HWjh234h*@!!j!YY, jackl:UwuhWuhf12132^a$, taylor:Hj68aan4%41y$ (no match in the A6 table), bradt:&sdf29jhabdjajK22Ja, knuth:ih*22882h*F@*8haaU%, wirth:8w92h28fh*(Hh98H1&, rivest:&shsdg&&hsgDGH2*1

  17. Fighting the Salt: 4096 Tables • Crackers build 4096 tables, one for each salt value • Build massive databases, on-line, for each salt • 100s of GB was a lot of storage a few years ago, but not any longer! • Indexed for fast look-up • Almost any common password is found quickly by such a program • Used by miscreants, but also by sysadmins to find weak passwords on their systems

  18. Getting the /etc/passwd File • Public file, but only if you have an account • There have been tricks for remotely fetching the /etc/passwd file using ftp and other vulnerabilities • Often this is all an attacker is after • Very likely to find weak passwords and get on the machine • Of course if you are a local user, no problem • Removing /etc/passwd from global view creates too many problems

  19. Shadowed Passwords • One common approach is to put just the password digests into /etc/shadow • /etc/passwd still has username, userid, groupid, home dir, shell, etc., but the digests are missing • /etc/shadow has only the username and digests (and a couple of other things) • /etc/shadow is readable and writeable for root only • Makes it a bit harder to get a hold of • Breaks some software that wants to authenticate users with their passwords • One might argue that non-root software shouldn’t be asking for user passwords anyhow • BSD does things a little differently

  20. Last Example: Ingres Authorization Strings • Ingres, 1990 • The 2nd-largest database company, behind Oracle • Authorization strings encoded what products and privileges the user had purchased • Easier to maintain this way: ship the entire product • Easier to sell upgrades: just change the string • The documentation guys needed an example auth string for the manual

  21. Moral • There’s no defending against stupidity • Social engineering is almost always the easiest way to break in • Doesn’t work on savvy types or sys admins, but VERY effective on the common user • I can almost guarantee I could get the password of most CU students easily • “Hi this is Jack Stevens from ITS and we need to change your password for security reasons; can you give me your current password?”

  22. Social Engineering: Phishing • Sending authentic looking email saying “need you to confirm your PayPal account information” • Email looks authentic • URL is often disguised • Rolling over the link might even pop-up a valid URL in a yellow box! • Clicking takes you to attacker’s site, however • This site wants your login info

  23. Disguising URLs • URI spec • http://anything@www.colorado.edu is supposed to send you to www.colorado.edu (everything before the @ is treated as user info) • Can be used to disguise a URL: • http://www.ebay.com-SECURITYCHECKw8grHGAkdj>jd7788<AccountMaintenace-4957725-s5982ut-aw-ebayconfirm-secure-23985225howf8shfMHHIUBd889yK@www.evil.org • Notice the feel-good words • The length of the URI exceeds the width of the browser, so you may not see the end • The hostname could also be hex-encoded for more deception • %77%77%77%2e%65%76%69%6c%2e%63%6f%6d = www.evil.com
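The hex decoding can be checked directly; Python's urllib applies the same percent-decoding rules a browser does:

```python
from urllib.parse import unquote

# Percent-decoding the hex-encoded hostname from the slide:
encoded = "%77%77%77%2e%65%76%69%6c%2e%63%6f%6d"
decoded = unquote(encoded)
print(decoded)  # -> www.evil.com
```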

  24. Disguising URLs (cont) • This no longer works in IE • Still works in Mozilla • In IE 5.x and older, there was another trick where you could get the toolbar and URL window to show “www.paypal.com” even though you had been sent to a different site • Very scary • Moral: don’t click on email links; type in the URL manually

  25. Digression: Character Encodings • Normally web servers don’t allow things like this: • http://www.cs.colorado.edu/~jrblack/../../etc/passwd • The “..” is filtered out • Character encodings can sometimes bypass the filter • Unicode is a code for representing various alphabets; UTF-8 permits overlong encodings such as • . = C0 AE • / = C0 AF • \ = C1 9C • In Oct 2000, a hacker revealed that IIS failed to filter these encodings • …/~jrblack/%C0%AE/%C0%AE/%C0%AE/%C0%AE/etc/passwd
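A minimal sketch of the bypass, assuming (as in the IIS bug) a filter that checks for ".." before decoding and a decoder that wrongly accepts overlong two-byte UTF-8 sequences. The lenient decoder below is deliberately buggy to mimic that behavior; a strict decoder, like Python's own, rejects overlong forms:

```python
from urllib.parse import unquote_to_bytes

def lenient_utf8_decode(raw: bytes) -> str:
    # Deliberately buggy decoder: accepts overlong two-byte UTF-8 sequences,
    # so C0 AE comes out as "." (the vulnerable behavior).
    out, i = [], 0
    while i < len(raw):
        b = raw[i]
        if b < 0x80:                          # plain ASCII byte
            out.append(chr(b)); i += 1
        elif 0xC0 <= b <= 0xDF and i + 1 < len(raw):
            # two-byte sequence; overlong forms are NOT rejected (the bug)
            out.append(chr(((b & 0x1F) << 6) | (raw[i + 1] & 0x3F))); i += 2
        else:
            raise ValueError("sequence not handled in this sketch")
    return "".join(out)

path = "/~jrblack/%C0%AE%C0%AE/%C0%AE%C0%AE/etc/passwd"
passes_filter = ".." not in path              # naive pre-decode filter: OK!
decoded = lenient_utf8_decode(unquote_to_bytes(path))
# decoded is now "/~jrblack/../../etc/passwd"
```

The fix is to canonicalize first and filter second: decode all escaping, reject overlong forms, and only then look for "..".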

  26. Segue to Web Security • The web started out simple, but now is vastly complex • Mostly because of client-side scripting • Javascript, Java applets, Flash, Shockwave, VBScript, and more • And server-side scripting • CGIs (sh, C, perl, python, almost anything), Java servlets, PHP, ASP, JSP • All of these are security-sensitive • Ugh

  27. We Can’t Hope to Survey all Possible Web Security Issues • Too many to count • Goal: look at a few thematic ones • Cataloguing all of them would not be very instructive, most likely

  28. Typical Server-Side Vulnerability • PHP: Personal HomePage (?!) • An easy-to-use and Perl-like server-side scripting language • A “study in poor security” – Gary McGraw • Variables are dynamically declared, initialized, and global by default; this can be dangerous: • if(isset($page)) {   include($page); } • Now we call this script with: • script.php?page=/etc/passwd
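A minimal sketch of the fix, written in Python rather than PHP: never hand user input to the file loader directly; map it through an allowlist instead. The page names here are invented:

```python
# Allowlist sketch (a Python stand-in for the PHP include() hole above):
# user input selects a page name, but only vetted names reach the filesystem.
ALLOWED_PAGES = {"home": "home.html", "about": "about.html"}  # hypothetical

def include_page(page_param: str) -> str:
    if page_param not in ALLOWED_PAGES:
        return "404 Not Found"            # "/etc/passwd" never reaches open()
    return ALLOWED_PAGES[page_param]      # a real app would open and render it

ok      = include_page("home")
blocked = include_page("/etc/passwd")
```

The design point is that the user chooses among names the server defined, never a path the server will interpret.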

  29. Javascript • Javascript (and VBScript) can do bad things • Get your cookies, for one, which may include sensitive information • You don’t want to run scripts unless the site you are visiting is “trustworthy” • Javascript has had a large number of security problems in the past; dubious sites might take advantage of these • If you set your security level high in IE, it turns off Javascript; that should tell you something

  30. Javascript (cont) • Turning it off in your browser is one solution • But often you lose a bunch of functionality • How can an attacker get you to run his malicious Javascript code? • Gets you to visit his website • But you might know better • Old trick: post something to a bulletin board with <script>…</script> in it • When you view his post, you run his script!

  31. Filtering • To prevent this, a correct bulletin board implementation always filters stuff that others have posted • You can post <b>YES!</b> but not <script> evil stuff… </script> • But until recently we didn’t worry about filtering stuff from you to yourself!

  32. XSS Attacks • XSS: Cross-Site Scripting • Not CSS (Cascading Style Sheets) • Idea: you open a website, passing a value, and the site echoes back that value • What if that value has a script embedded?! • Example: 404 Not Found • Suppose you see a link (in email, on IRC, on the web) saying, “Click here to search Google” • The link really does go to google, so what the heck… • However the link is www.google.com/badurl%0a%5C... • The above contains an embedded, hidden script • Google says, “badurl%0a%5C…” not found • Just displaying this to you executes the script

  33. XSS Vulnerabilities • They’ve been found all over the web • Fairly new problem • Lots of examples still exist in the wild • Very tricky to find them all • Solution is to filter, of course • Need to filter inputs from users that server will be echoing back to the user
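The filtering step can be sketched with the standard library; escaping user-controlled text before echoing it back renders an embedded script as inert text (the posted string and function name are invented):

```python
import html

# Output-filtering sketch: HTML-escape user-controlled text before echoing
# it back, so an embedded <script> displays as text instead of executing.
posted = 'Click <script>stealCookies()</script> here'
escaped = html.escape(posted)
print(escaped)  # -> Click &lt;script&gt;stealCookies()&lt;/script&gt; here
```

Escaping on output covers both cases from the slides: content posted by others and content the server merely echoes back to you.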

  34. Phishing Revisited Dear Amazon User, During our regular update and verification of the accounts, we could not verify your current information. Either your information has changed or it is incomplete. As a result, your access to buy on Amazon has been restricted. To continue using your Amazon account again, please update and verify your information by clicking the link below : http://www.amazon.com@service02.com/exec/obidos/subst/home/?EnterConfirm&UsingSSL=0&pUserId=&us=445&ap=0&dz=1&Lis=10&ref=br_bx_c_2_2 Thank you very much for your cooperation! Amazon Customer Support Please note: This e-mail message was sent from a notification-only address that cannot accept incoming e-mail. Please do not reply to this message. Amazon.com Earth's Biggest Selection

  35. Where does the info go? • service02.com maps to IP 66.218.79.155 • % whois 66.218.79.155 reports: OrgName: Yahoo!, OrgID: YAOO, Address: 701 First Avenue, City: Sunnyvale, StateProv: CA, NetRange: 66.218.64.0 - 66.218.95.255

  36. Defenses Against Phishing • Spoofguard • Product out of Stanford • Doesn’t work for me (ugh!) • Detects various suspicious behaviors and flags them • Red light, green light depending on threshold • There are others as well • Bottom line: • Don’t believe emails from “legitimate companies” • This is frustrating for companies!

  37. Wireless Security • Why is wireless security essentially different from wired security? • Almost impossible to achieve physical security on the network • You can no longer assume that restricting access to a building restricts access to a network • The “parking lot attack”

  38. Wireless Security Challenges • Further challenges: • Many wireless devices are resource-constrained • Laptops are pretty powerful these days but PDAs are not • Sensors are even more constrained • RFIDs are ridiculously constrained • Paradox: the more resource-constrained we get, the more ambitious our security goals tend to get

  39. IEEE 802.11a/b/g • A standard ratified by IEEE and the most widely-used in the world • Ok, PCS might be a close contender • Also called “Wi-Fi” • 802.11 products certified by WECA (Wireless Ethernet Compatibility Alliance) • Bluetooth is fairly commonplace but not really used for LANs • More for PANs (the size of a cubicle) • Connect PDA to Cell Phone to MP3, etc.

  40. Wireless Network Architecture • Ad Hoc • Several computers form a LAN • Infrastructure • An access point (AP) acts as a gateway for wireless clients • This is the model we’re most used to • Available all through the EC, for example

  41. My Access Point

  42. War Driving • The inherent physical insecurity of wireless networks has led to the “sport” of war-driving • Get in your car, drive around, look for open access points with your laptop • The name comes from the movie “War Games” • Some people get obsessed with this stuff • You can buy “war driving kits” online • Special antennas, GPS units to hook to your laptop, mapping software

  43. More War Driving • People use special antennas on their cars • It used to be Pringles cans, but we’ve moved up in the world • People distribute AP maps • War driving contest at BlackHat each year

  44. Next Time You’re in LA

  45. What’s the Big Deal? • My home access point is wide-open • People could steal bandwidth • I’m not that worried about it • People could see what I’m doing • I’m not that worried about it • There are ways to lock-down your access point • MAC filtering • Non-signalling APs and non-default SSIDs • Wired Equivalent Privacy (WEP)

  46. MAC Filtering • Allow only certain MACs to associate • Idea: you must get permission before joining the LAN • Pain: doesn’t scale well, but for home users not a big deal • Drawback: people can sniff traffic, figure out what MACs are being used on your AP, then spoof their MAC address to get on

  47. Non-Signalling APs • 802.11 APs typically send a “beacon” advertising their existence (and some other stuff) • Without this, you don’t know they’re there • Can be turned off • If the SSID is default, war drivers might find you anyway • The SSID is the “name” of the LAN • Defaults are “LinkSYS”, NETGEAR, D-Link, etc • Savvy people change the SSID and turn off beacons • SSIDs can still be sniffed when the LAN is active, however, so once again this doesn’t help much

  48. Let’s Use Crypto! • WEP (Wired Equivalent Privacy) • A modern study in how not to do things • The good news: it provides a wonderful pedagogical example for us • A familiar theme: • WEP was designed by non-cryptographers who knew the basics only • That’s enough to blow it
