
Security in Computing Chapter 3, Program Security


Presentation Transcript


  1. Security in Computing, Chapter 3, Program Security • Summary created by Kirk Scott

  2. 3.1 Secure Programs • 3.2 Non-Malicious Program Errors • 3.3 Viruses and Other Malicious Code • 3.4 Targeted Malicious Code • 3.5 Controls Against Program Threats • 3.6 Summary of Program Threats and Controls

  3. Treatment of topics will be selective. • You’re responsible for everything, but I will only be presenting things that I think merit some discussion • You should think of this chapter and the following ones as possible sources for presentation topics

  4. 3.1 Secure Programs • In general, what does it mean for a program to be secure? • It supports/has: • Confidentiality • Integrity • Availability

  5. How are confidentiality, integrity, and availability measured? • Given the ability to measure, what is a “passing” score? • Conformance with formal software specifications? • Some match-up with “fitness for purpose”? • Some absolute measure?

  6. “Fitness for purpose” is the “winner” of the previous list • But it is still undefined • What is the value of software, and what constitutes adequate protection? • How do you know that it is good enough, i.e., fit for the purpose? • (This will remain an unanswered question, but that won’t stop us from forging ahead.)

  7. Code Quality—Finding and Fixing Faults • When measuring code quality, whether quality in general or quality for security: • Empirically, the more faults already found, the more there are yet to be found • The number of faults found tends to be a negative, not a positive, indicator of code quality

  8. Find and fix is a bad model generally: • You may not be looking for, and therefore may not find, serious faults • Even if you do, you are condemned to trying to fix the code after the fact • After-the-fact fixes, or patches, tend to have bad characteristics

  9. Patches focus on the immediate problem, ignoring its context and overall meaning • There is a tendency to fix in one spot, not everywhere that this fault or type of fault occurs • Patches frequently have non-obvious side-effects elsewhere • Patches often cause another fault or failure elsewhere • Frequently, patching can’t be accomplished without affecting functionality or performance

  10. What’s the Alternative to Find and Fix? • Security has to be a concern from the start in software development • Security has to be designed into a system • Such software won’t go down the road of find and fix • The question remains of how to accomplish this

  11. The book presents some terminology for talking about software security • This goes back to something familiar to programmers • Program bugs are of three general kinds: • Misunderstanding the problem • Faulty program logic • Syntax errors

  12. IEEE Terminology • Human error can cause any one or more of these things • In IEEE quality terminology, these are known as faults • Faults are an internal, developer-oriented view of the design and implementation of security in software

  13. The IEEE terminology also identifies software failures • These are departures from required behavior • In effect, these are run-time manifestations of faults • They may actually be discovered during walk-throughs rather than at run time

  14. Note that specifications as well as implementations can be faulty • In particular, specifications may not adequately cover security requirements • Therefore, software may “fail” even though it’s in conformance with specifications • Failures are an external, user-oriented view of the design and implementation of security in software

  15. Book Terminology • The framework presented by the book, beginning in chapter 1, is based on this alternative terminology: • Vulnerability: This is defined as a weakness in a system that can be exploited for harm. • This seems roughly analogous to a fault. • It is more general than the three types of programming errors listed above • However, it's more specific to security • It concerns something that is internal to the system

  16. Program Security Flaw: This is defined as inappropriate program behavior caused by a vulnerability. • This seems roughly analogous to a failure. • However, this inappropriate behavior in and of itself may not constitute a security breach. • It is something that could be exploited. • It concerns something that could be evident or taken advantage of externally.

  17. The Interplay between Internal and External • Both the internal and external perspectives are important • Evident behavior problems give a sign that something inside has to be fixed • However, some faults may cause bad behavior which isn’t obvious, rarely occurs, isn’t noticed, or isn’t recognized to be bad • The developer has to foresee things on the internal side as well as react to things on the external side

  18. Classification of Faults/Vulnerabilities • Intentional: A bad actor may intentionally introduce faulty code into a software system • Unintentional: More commonly, developers write problematic code unintentionally • The code has a security vulnerability and attackers find a way of taking advantage of it

  19. Challenges to Writing Secure Code • The size and complexity of code is a challenge • Size alone increases the number of possible points of vulnerability • The interaction of multiple pieces of code leads to many more possible vulnerabilities • Specifications are focused on functional requirements: • What the code is supposed to do

  20. It is essentially impossible to list and test all of the things that code should not allow. • This leaves lots of room both for honest mistakes and bad actors

  21. Changing technology is also both a boon and a bane • The battle of keeping up is no less difficult in security than in other areas of computing • Time is spent putting out today's fires with today's technologies while tomorrow's are developing • On the other hand, some of tomorrow's technologies will help with security as well as being sources of new concerns

  22. Six Kinds of Unintentional Flaws • Intentionally introduced malicious code will be covered later. • Here is a classification of six broad categories of unintentional flaws in software security: • 1. Identification and authorization errors (hopefully self-explanatory) • 2. Validation errors—incomplete or inconsistent checks of permissions

  23. 3. Domain errors—errors in controlling access to data • 4. Boundary condition errors—errors on the first or last case in software • 5. Serialization and aliasing errors—errors in program flow order • 6. General logic errors—any other exploitable problem in the logic of software design

  24. 3.2 Non-Malicious Program Errors • There are three broad classes of non-malicious errors that have security effects: • 1. Buffer overflows • 2. Incomplete mediation • 3. Time-of-check to time-of-use errors

  25. Buffer Overflows • The simple idea of an overflow can be illustrated with an out-of-bounds array access • In general, in a language like C, the following is possible, even though the valid indices for the array run only from 0 to 9: • char sample[10]; • sample[10] = 'B'; • Similar undesirable things can be even more easily and less obviously accomplished when using pointers (addresses) to access memory
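The slide's two-line example can be made into a minimal runnable sketch (the array name sample and the value 'B' come from the slide; the surrounding main and the printf are added for illustration):

      #include <stdio.h>

      int main(void) {
          char sample[10];    /* valid indices are 0 through 9 */
          sample[10] = 'B';   /* out-of-bounds write: undefined behavior in C */
          /* A compiler typically accepts this; at run time it may silently
             corrupt adjacent memory, crash the process, or appear to work. */
          printf("out-of-bounds write completed with no visible error\n");
          return 0;
      }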

  26. Cases to Consider • 1. The array/buffer is in user space. • A. The out-of-bounds access only steps on user space. • It may or may not trash user data/code, causing problems for that process. • B. The out-of-bounds location (e.g., index 10) may fall outside of the process's allocation. • The O/S should kill the process for violating memory restrictions.

  27. 2. The array/buffer is in system space. • Suppose buffer input takes this form:

      int i = 0;
      while (moreToRead()) {
          sample[i] = getNextChar();   /* no check that i stays within the buffer */
          i++;
      }

  28. There’s no natural boundary on what the user might submit into the buffer. • The input could end up trashing/replacing data/code in the system memory space. • This is a big vulnerability. • The book outlines two common ways that attackers can take advantage of it.
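Before turning to those attacks, here is a sketch of the standard fix for the loop on the previous slide: bound the index by the buffer's capacity so that the loop stops no matter how much input the user submits. (The helpers moreToRead and getNextChar are placeholders carried over from the pseudocode, not a real API.)

      #define SAMPLE_SIZE 10

      int moreToRead(void);     /* assumed to be provided elsewhere */
      char getNextChar(void);   /* assumed to be provided elsewhere */

      char sample[SAMPLE_SIZE];

      void readInput(void) {
          int i = 0;
          /* Stop when the input ends or the buffer is full,
             leaving room for a terminating null byte. */
          while (moreToRead() && i < SAMPLE_SIZE - 1) {
              sample[i] = getNextChar();
              i++;
          }
          sample[i] = '\0';
      }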

  29. Attack 1: On the System Code • Given knowledge of the relative position of the buffer and system code in memory • The buffer is overflowed to replace valid system code with something else • A primitive attack would just kill the system code, causing a system crash

  30. A sophisticated attack would replace valid system code with altered system code • The altered code may consist of correct code with additions or modifications • The modifications could have any effect desired by the attacker, since they will run as system code

  31. The classic version of this attack would modify the system code so that it granted higher level (administrator) privileges to a user process • Game over—the attacker has just succeeded in completely hijacking the system and at this point can do anything else desired

  32. Attack 2: On the Stack • Given knowledge of the relative position of the buffer and the system stack • The buffer is overflowed to replace valid values in the stack with something else • Again, a primitive attack would just cause a system crash

  33. A more sophisticated attack would change either the calling address or the return address of one of the procedure calls on the stack. • It’s also possible that false code would be loaded • Changing the addresses changes the execution path • This makes it possible to run false code under system privileges
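The vulnerable pattern behind this attack can be shown without writing any exploit code: a fixed-size local buffer filled with no length check. On many platforms the saved return address sits on the stack just above the buffer, so sufficiently long input overwrites it. (The function name and buffer size here are invented for illustration.)

      #include <string.h>

      void handleRequest(const char *input) {
          char buf[16];         /* local buffer lives on the stack */
          strcpy(buf, input);   /* no length check: input longer than the
                                   buffer overruns buf and can overwrite the
                                   saved return address stored above it */
      }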

  34. The book refers to a paper giving details on this kind of attack • If you had a closed system, you could experiment with things like this • The book says a day or two's worth of analysis would be sufficient to craft such an attack

  35. Do not try anything like this over the Web unless you have an unrequited desire to share a same-sex room-mate in a federal facility • The above comment explains why this course is limited in detail and not so much fun • All of the fun stuff is illegal • There are plenty of resources on the Internet for the curious, but “legitimate” sources, like textbooks, have to be cautious in what they tell

  36. A General Illustration of the Idea • Parameter passing on the Web illustrates buffer overflows • Web servers accept parameter lists in URL format • The different parameters are parsed and copied into their respective buffers/variables • A user can cause an overflow if the receiver wasn’t coded to prevent it.
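A hedged sketch of the copying step just described, with the bounds check that prevents the overflow (the parameter name phone and the buffer size are illustrative assumptions):

      #include <stdio.h>

      void storeParam(const char *value) {
          char phone[20];
          /* snprintf truncates rather than overflowing, and always
             null-terminates the destination buffer. */
          snprintf(phone, sizeof phone, "%s", value);
          printf("stored: %s\n", phone);
      }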

  37. Essentially, buffer overflows have existed from the dawn of programming • In the good old, innocent days they were just an obscure nuisance known only to programmers • In the evil present, they are much more. • They form the basis for attacks whose goals are as varied as the attackers.

  38. Incomplete Mediation • Technically, incomplete mediation means that data is exposed somewhere in the pathway between submission and acceptance • The ultimate problem is the successful submission and acceptance of bad data • The cause of the problem is a break in, or a lack of, security along that pathway

  39. The book uses the same kind of scenario used to illustrate buffer overflow • Suppose a form in a browser takes in dates and phone numbers • These are forwarded to a Web server in the form of a URL

  40. The developer may put data validation checks into the client side code • However, the URL can be edited or a fake URL can be generated and forwarded to the server • This thwarts the validation checks and any security they were supposed to provide

  41. What Can Go Wrong? • If the developer put the validation checks into the browser code, most likely the server code doesn't contain checks. • Parameters of the wrong data type or with out-of-range values can have bad effects • They may cause the server code to generate bad results • They may also cause the server code to crash
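A minimal sketch of the kind of server-side check the slide calls for, using the phone-number parameter as an example (the validity rules here, digits only and 7 to 15 characters, are an illustrative assumption, not the book's):

      #include <ctype.h>
      #include <string.h>

      /* Returns 1 if value looks like a plausible phone number. */
      int validPhone(const char *value) {
          size_t len = strlen(value);
          if (len < 7 || len > 15)
              return 0;
          for (size_t i = 0; i < len; i++) {
              if (!isdigit((unsigned char)value[i]))
                  return 0;
          }
          return 1;
      }

The server rejects the request whenever the check fails, regardless of any validation the browser code may have performed.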

  42. An Example from the Book • The book’s example shows a more insidious kind of problem • A company built an e-commerce site where the code on the browser side showed the customer the price • That code also forwarded the price back to the server for processing

  43. The code was exposed in a URL and could be edited • “Customers” (a.k.a., thieves) could have edited the price before submitting the online purchase • The obvious solution was to use the secure price on the server side and then show the customer the result

  44. There are several things to keep in mind in situations like this: • Is there a way of doing complete mediation? • I.e., can data/parameters be protected when they are “in the pathway”? • If not, can complete validation checking be done in the receiving code? • In light of the example, you might also ask, is there a way of keeping all of the vital data and code limited to the server side where it is simply inaccessible?
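Applied to the price example, the last question suggests this shape: the client submits only an item id, and the authoritative price never leaves the server (the catalog contents and names below are invented for illustration):

      #include <stddef.h>

      struct item { int id; double price; };

      /* Authoritative price list, held only on the server. */
      static const struct item catalog[] = {
          { 1001, 19.95 },
          { 1002,  4.50 },
      };

      /* Look up the price by item id; ignore any price the client sent. */
      double priceFor(int itemId) {
          for (size_t i = 0; i < sizeof catalog / sizeof catalog[0]; i++) {
              if (catalog[i].id == itemId)
                  return catalog[i].price;
          }
          return -1.0;   /* unknown item */
      }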

  45. Time-of-Check to Time-of-Use Errors • This has the weird acronym of TOCTTOU

  46. In a certain sense, TOCTTOU problems are just a special kind of mediation problem • They arise in a communication or exchange between two parties • By definition, the exchange takes place sequentially, over the course of time • If the exchange involves the granting of access permission, for example, security problems can result

  47. Example • Suppose data file access requests are submitted by a requester to a granter in this form: • Requester id + file id • Suppose that the access management system appends an approval indicator, granting access, and the request is stored for future servicing

  48. The key question is where it’s stored • Is it stored in a secure, system-managed queue? • If so, no problem should result • Or is it given back to, or stored in user space? • If so, then it is exposed and the user may edit it • It would be possible to change the requester id, the file id, or both between the time of check and the time of use
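A different but classic illustration of the same gap comes from Unix file handling: the file that is checked is not guaranteed to be the file that is opened, because the name-to-file binding can change between the two calls.

      #include <fcntl.h>
      #include <unistd.h>

      /* Vulnerable pattern: the check and the use are separate steps. */
      int openIfAllowed(const char *path) {
          if (access(path, R_OK) == 0) {      /* time of check */
              /* An attacker who can swap path (e.g., via a symbolic link)
                 in this window defeats the check. */
              return open(path, O_RDONLY);    /* time of use */
          }
          return -1;
      }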
