
Securing Programs: Understanding Malicious Code and Programming Errors

This chapter explores the concept of a secure program, differentiates between malicious and non-malicious code, identifies and describes programming errors with security implications, and discusses various types of viruses and their impact on computing systems. It also explains virus signatures and examines policies, procedures, and technical controls against virus threats.


Presentation Transcript


  1. CHAPTER 3 Program Security

  2. Objectives • Define the concept of a secure program • Differentiate between malicious and non-malicious code • Identify and describe programming errors with security implications • List and explain different types of viruses, how and where they attack, and how they gain control • Explain virus signatures • Identify the impact of viruses on computing systems • Discuss and explain various policies, procedures, and technical controls against virus threats

  3. Secure Program • Security implies some degree of trust that the program enforces expected confidentiality, integrity, and availability. • Asking what “secure software” means is likely to get different answers from different people. • This difference occurs because the importance of these characteristics depends on who is analyzing the software.

  4. Fixing Faults • One approach to judging quality in security has been fixing faults. • Early work in computer security was based on the paradigm of “penetrate and patch,” in which analysts searched for and repaired faults. • The test was considered to be “proof” of security: if the system withstood the attacks, it was considered secure. • Patch efforts were largely useless, often making the system less secure rather than more secure because they frequently introduced new faults.

  5. Fixing Faults (cont) • Why: • The pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context. • The fault often had nonobvious side effects in places other than the immediate area of the fault. • The fault could not be fixed properly because system functionality or performance would suffer as a consequence.

  6. Unexpected Behavior • The inadequacies of penetrate-and-patch led researchers to seek a better way to be confident that code meets its security requirements. • Compare the requirements with the behavior: what the designer intended or what users expected. • We call such unexpected behavior a program security flaw. • A program security flaw is undesired program behavior caused by a program vulnerability.

  7. Types of Flaws • In the taxonomy, the inadvertent flaws fall into six categories: • Validation error (incomplete or inconsistent). • Domain error. • Serialization and aliasing. • Inadequate identification and authentication. • Boundary condition violation. • Other exploitable logic errors.

  8. Nonmalicious Program Errors • Humans make many mistakes, most of which are unintentional and nonmalicious. • Many such errors cause program malfunction but do not lead to more serious security vulnerabilities. • 3 main concerns: • Buffer Overflows • Incomplete Mediation • Time-of-Check to Time-of-Use Errors

  9. Buffer Overflows • A buffer (or array or string) is a space in which data can be held. • Because memory is finite, a buffer’s capacity is finite. • For this reason, in many programming languages the programmer must declare the buffer’s maximum size so that the compiler can set aside that amount of space. • Buffer overflow: user input exceeds the maximum buffer size. • The extra input goes into unexpected memory locations. • An attacker can run code of his choosing and hijack the program.

  10. A malicious user enters more than 1024 characters, but buf can only store 1024 characters; the extra characters overflow the buffer:

#include <stdio.h>

/* Vulnerable: gets() performs no bounds checking (it was removed from the C standard in C11). */
void get_input(void) {
    char buf[1024];
    gets(buf);    /* input longer than the buffer overruns buf */
}

int main(int argc, char *argv[]) {
    get_input();
    return 0;
}
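One common remedy, sketched below as a hedged example rather than the chapter's own code, is to replace the unbounded gets() with a bounded read such as fgets(), so overly long input is truncated instead of overflowing the buffer.

#include <stdio.h>
#include <string.h>

/* Bounded read: fgets() stops after sizeof(buf) - 1 characters,
   so overly long input is truncated rather than overflowing buf. */
void get_input_safe(void) {
    char buf[1024];
    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* drop the trailing newline, if present */
        printf("read %zu characters\n", strlen(buf));
    }
}

int main(void) {
    get_input_safe();
    return 0;
}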

  11. Buffer Overflows (cont) • Example 1: • Declare: char sample[10]; • Run: sample[10] = ‘A’; • Error: the subscript is out of bounds (valid indices are 0 through 9). • Example 2: • Declare: char sample[10]; • Run: for (i = 0; i <= 9; i++) sample[i] = ‘A’; sample[10] = ‘B’; • Error: the final assignment writes past the end of the array and overwrites an adjacent variable’s value.

  12. A buffer may overflow into (and change): • The user’s own data structures • The user’s program code • System data structures • System program code

  13. Incomplete Mediation • Incomplete mediation: a routine fails to check or validate the data it receives, for example failing on a data-type error. • Another possibility is that the receiving program continues to execute but generates a very wrong result. • One way to address the potential problems is to anticipate them: write code that checks for correctness on the client’s side, so the program can restrict choices to valid ones only.

  14. Incomplete Mediation (cont) • Example 1: • Declare: int number; • Run: number = “two”; • Error: the value does not match the declared format (an integer). • Example 2: • Declare: a database column for names is declared as 10 characters. • Run: the name “Christopher Columbus” is entered. • Error: the database raises an error because the value exceeds the declared length.
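A minimal C sketch of what complete mediation of these two inputs might look like; the helper names parse_quantity and check_name_length, and the 10-character limit taken from the example, are illustrative assumptions rather than code from the chapter.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Example 1: accept only a well-formed integer; input such as "two" is rejected. */
int parse_quantity(const char *text, long *out) {
    char *end;
    errno = 0;
    long value = strtol(text, &end, 10);
    if (errno != 0 || end == text || *end != '\0')
        return -1;                        /* not a valid integer */
    *out = value;
    return 0;
}

/* Example 2: enforce the declared 10-character column width before storing a name. */
int check_name_length(const char *name) {
    return strlen(name) <= 10 ? 0 : -1;
}

int main(void) {
    long n;
    printf("quantity \"two\": %s\n",
           parse_quantity("two", &n) == 0 ? "accepted" : "rejected");
    printf("name \"Christopher Columbus\": %s\n",
           check_name_length("Christopher Columbus") == 0 ? "accepted" : "rejected");
    return 0;
}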

  15. Time-of-Check to Time-of-Use Errors • Definition: instructions that appear to be adjacent may not actually be executed immediately after each other, either because of an intentionally changed order or because of the effects of other processes running concurrently. • (A delay between checking permission to perform certain operations and using that permission may allow the operation to be changed.)

  16. Example: • 1. The user attempts to write 100 bytes at the end of file “abc”; a description of the operation is stored in a data structure. • 2. The OS checks the user’s permissions on a copy of the data structure. • 3. While the user’s permissions are being checked, the user changes the data structure to describe an operation that deletes file “xyz”.
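The same race is often illustrated with the classic access()/open() pattern on POSIX systems; the sketch below is that standard illustration rather than the slide's file-descriptor example, and the path /tmp/report is a placeholder.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/report";       /* placeholder path */

    /* Time of check: verify that the real user may write the file. */
    if (access(path, W_OK) != 0) {
        perror("access");
        return 1;
    }

    /* Window of vulnerability: another process can replace the file
       (e.g., with a link to a protected file) between the check and the use. */

    /* Time of use: the permissions checked above may no longer
       describe the object that is actually opened here. */
    int fd = open(path, O_WRONLY | O_APPEND);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    write(fd, "100 bytes of data...\n", 21);
    close(fd);
    return 0;
}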

  17. Time-of-Check to Time-of-Use Errors (cont) • Analogy: • To understand the nature of this flaw, consider a person buying a sculpture that costs RM100. The buyer removes five RM20 bills from a wallet, carefully counts them in front of the seller, and lays them on the table. While the seller turns around to write the receipt, the buyer takes back one RM20 bill, then takes the receipt and leaves with the sculpture.

  18. Viruses and Other Malicious Code • By themselves, programs are seldom security threats. • Programs operate on data, taking action only when data and state changes trigger them. • Much of the work done by a program is invisible to the user, so users are not likely to be aware of any malicious activity.

  19. Why Worry About Malicious Code • Malicious code can do much harm: • writing a message on a computer screen, stopping a running program, generating a sound, or erasing stored files. • Malicious code has been around a long time. • Malicious code is still around, and its effects are more pervasive.

  20. Kinds of Malicious Code

  21. How Viruses Attach • Appended viruses: a program virus attaches itself to a program; then, whenever the program runs, the virus is activated. Easy to program. • Viruses that surround a program: the virus runs the original program but has control before and after its execution. • Integrated viruses and replacements: the virus integrates itself into the original code of the target, or replaces the target entirely.

  22. How Viruses Attach (cont) • Figure: a virus appended to a program (original program + virus code = the virus code followed by the original program).

  23. How Viruses Attach (cont) • Figure: a virus surrounding a program, shown (a) logically and (b) physically (the virus code runs before and after the original program).

  24. How Viruses Attach (cont) • Figure: a virus integrated into a program (original program + virus code = a modified program with virus code interleaved throughout).

  25. How Viruses Gain Control

  26. Homes for Viruses • The virus writer may find these qualities appealing in a virus: • It is hard to detect. • It is not easily destroyed or deactivated. • It spreads infection widely. • It can reinfect its home program or other programs. • It is easy to create. • It is machine independent and OS independent.

  27. Issues of Viral Residence • One-Time Execution • The majority of viruses today execute only once, spreading their infection and causing their effect in that one execution. • Boot Sector Viruses • The virus gains control very early in the boot process, before most detection tools are active, so that it can avoid, or at least complicate, detection.

  28. Memory-Resident Viruses • Resident routines are sometimes called TSRs or “terminate and stay resident” routines. • Virus writers also like to attach viruses to resident code because the resident code is activated many times while the machine is running. Each time the resident code runs, the virus does too. Once activated, the virus can look for and infect uninfected carriers. • Other Homes for Viruses • One popular home for a virus is an application program. Many applications, such as word processors and spreadsheets, have a “macro” feature, by which a user can record a series of commands and repeat them with one invocation.

  29. Virus Signatures • A virus cannot be completely invisible. Code must be stored somewhere, and the code must be in memory to execute. • Each of these characteristics yields a telltale pattern, called a signature. The virus's signature is important for creating a program, called a virus scanner, that can detect and, in some cases, remove viruses.
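A minimal sketch, in C, of the core idea behind a signature scanner: search a file for a known byte pattern. The 4-byte SIGNATURE value here is invented for illustration; real scanners use large databases of signatures plus wildcards and heuristics.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical 4-byte signature; real signatures are derived from analyzed samples. */
static const unsigned char SIGNATURE[] = { 0xDE, 0xAD, 0xBE, 0xEF };

/* Return 1 if the signature occurs in the file, 0 if not, -1 on error.
   For simplicity the whole file is read into memory. */
int scan_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    if (size < 0) { fclose(f); return -1; }
    unsigned char *data = malloc(size > 0 ? (size_t)size : 1);
    if (!data || fread(data, 1, (size_t)size, f) != (size_t)size) {
        free(data);
        fclose(f);
        return -1;
    }
    fclose(f);
    int found = 0;
    for (long i = 0; i + (long)sizeof(SIGNATURE) <= size && !found; i++)
        found = (memcmp(data + i, SIGNATURE, sizeof(SIGNATURE)) == 0);
    free(data);
    return found;
}

int main(int argc, char *argv[]) {
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               scan_file(argv[1]) == 1 ? "signature found" : "no signature found");
    return 0;
}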

  30. Viruses • The Brain Virus. • The Internet Worm. • Code Red. • Web Bugs. Targeted Malicious Code • Trapdoors. • Salami Attacks. • Rootkits and the Sony XCP. • Privilege Escalation. • Interface Illusions. • Keystroke Logging. • Man-in-the-Middle Attacks. • Timing Attacks. • Covert Channels.

  31. Trapdoor • A trapdoor is an undocumented entry point to a module. • Developers insert trapdoors during code development, perhaps to test the module, to provide “hooks” by which to connect future modifications or enhancements, or to allow access if the module should fail in the future. • Causes of trapdoors: developers • forget to remove them, • intentionally leave them in the program for testing, • intentionally leave them in the program for maintenance of the finished program, or • intentionally leave them in the program as a covert means of access to the component after it becomes an accepted part of a production system.
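As a hedged illustration (the function names and the hard-coded password below are invented, not taken from any real system), a trapdoor often looks like an extra branch in an authentication routine:

#include <stdio.h>
#include <string.h>

/* The real, documented check (stubbed here for the example). */
static int check_password(const char *user, const char *password) {
    (void)user;
    return strcmp(password, "correct horse battery staple") == 0;
}

/* Trapdoor: an undocumented hard-coded password bypasses the real check. */
int authenticate(const char *user, const char *password) {
    if (strcmp(password, "letmein-debug") == 0)   /* left in from testing */
        return 1;
    return check_password(user, password);
}

int main(void) {
    printf("normal login with a wrong password: %d\n", authenticate("alice", "wrong guess"));
    printf("trapdoor login: %d\n", authenticate("anyone", "letmein-debug"));
    return 0;
}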

  32. Salami Attack • A salami attack merges bits of seemingly inconsequential data to yield powerful results. • Example: small amounts are shaved from each computation and accumulated elsewhere, such as in the programmer's bank account. • Missing ½ cent • Missing percentage • Taking a bit from a bunch • Charging higher fees • Why do they happen? • Sometimes programmers simply accept small errors. • Code is often too large to search for salami-type errors.
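A minimal sketch of the "missing ½ cent" variant in C; the interest rate, the balance, and the idea of diverting the rounded-away fraction into one accumulating account are illustrative assumptions:

#include <stdio.h>

/* Balances are kept in cents; the shaved fractions accumulate here. */
static double shaved_fraction_cents = 0.0;

/* Credit monthly interest, rounding down; divert the leftover fraction of a cent. */
long credit_interest(long balance_cents, double annual_rate) {
    double interest = balance_cents * annual_rate / 12.0;  /* interest in cents */
    long paid = (long)interest;                            /* truncated: the customer gets this */
    shaved_fraction_cents += interest - paid;              /* the shaved sliver */
    return balance_cents + paid;
}

int main(void) {
    long balance = 123457;                                 /* RM1,234.57 */
    for (int month = 0; month < 12; month++)
        balance = credit_interest(balance, 0.03);
    printf("customer balance after a year: %ld cents\n", balance);
    printf("slivers shaved from this one account: %.2f cents\n", shaved_fraction_cents);
    printf("(repeated across millions of accounts, the diverted total becomes large)\n");
    return 0;
}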

  33. Rootkits • A rootkit is a piece of malicious code that goes to great lengths not to be discovered or, if discovered and removed, to reestablish itself whenever possible. • Example: it intercepts commands in order to keep itself hidden. • If a directory contains six files, one of which is the rootkit, the rootkit will pass the directory command to the operating system, intercept the result, delete the listing for itself, and display to the user only the five other files. • The Sony XCP rootkit prevents a user from copying a music CD while allowing the CD to be played as music.
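A hedged, user-space sketch in C of the hiding behaviour described above (real rootkits hook the operating system itself); the file name rk_payload is a placeholder, and POSIX directory functions are assumed:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* List a directory, but silently drop the rootkit's own file so the user
   sees only the other entries. */
void list_directory_filtered(const char *path) {
    DIR *dir = opendir(path);
    if (!dir) {
        perror("opendir");
        return;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, "rk_payload") == 0)
            continue;                      /* hide ourselves from the listing */
        printf("%s\n", entry->d_name);
    }
    closedir(dir);
}

int main(void) {
    list_directory_filtered(".");
    return 0;
}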

  34. Others • Privilege Escalation: an attack in which malicious code is launched by a user with lower privileges but runs with higher privileges. • Interface Illusions: a spoofing attack in which all or part of a web page is false. • Examples: tricking the victim into entering personal banking information on a site that is not the bank's, clicking yes on a button that actually means no, or scrolling the screen to activate an event that causes malicious software to be installed on the victim's machine.

  35. Keystroke Logging: keeps a copy of every key pressed. • A keystroke logger can be independent (retaining a log of every key pressed) or it can be tied to a certain program, retaining data only when a particular program (such as a banking application) runs. • Man-in-the-Middle Attack: a malicious program sits between two programs. • Example: a program that operates between your word processor and the file system, so that each time you think you are saving your file, the middle program prevents the save, scrambles your text, or encrypts your file. • Timing Attack: infers information from how long an operation takes.
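A small C sketch of why timing leaks information: a byte-by-byte comparison that stops at the first mismatch lets an attacker learn, from response time alone, how many leading characters of a guess are correct. The secret value is a placeholder, and both functions assume the guess buffer is at least as long as the secret.

#include <stdio.h>

static const char SECRET[] = "s3cr3t-pin";     /* placeholder secret */

/* Leaky: returns at the first mismatching byte, so longer correct prefixes take longer. */
int check_guess_leaky(const char *guess) {
    for (size_t i = 0; i < sizeof(SECRET); i++)
        if (guess[i] != SECRET[i])
            return 0;
    return 1;
}

/* Safer: always examines every byte, so the running time does not depend on the guess. */
int check_guess_constant_time(const char *guess) {
    unsigned char diff = 0;
    for (size_t i = 0; i < sizeof(SECRET); i++)
        diff |= (unsigned char)(guess[i] ^ SECRET[i]);
    return diff == 0;
}

int main(void) {
    printf("%d %d\n", check_guess_leaky("s3cr3t-pin"),
                      check_guess_constant_time("wrong-pin!"));
    return 0;
}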

  36. Covert Channels: Programs That Leak Information • Programs that communicate information to people who should not receive it. • The communication goes unnoticed because it accompanies other, legitimate information.

  37. A covert channel can use data written to a drive, sent across a network, or placed in a file or printout. • Storage channel: passes information based on the presence or absence of data. • File-lock channel: signals information by whether a file is locked or unlocked. • Timing channel: passes information by varying the system's speed or by using or not using an assigned computational time slice.
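A minimal sketch of a storage channel in C, assuming a POSIX system: the sender leaks one bit per agreed interval by creating or removing a file, and the receiver recovers the bit by checking whether the file exists. The path /tmp/covert_flag is a placeholder.

#include <stdio.h>
#include <unistd.h>

#define FLAG_FILE "/tmp/covert_flag"      /* agreed-upon, otherwise innocuous file */

/* Sender: encode one bit as the presence or absence of the flag file. */
void send_bit(int bit) {
    if (bit) {
        FILE *f = fopen(FLAG_FILE, "w");  /* presence of the file means 1 */
        if (f)
            fclose(f);
    } else {
        unlink(FLAG_FILE);                /* absence of the file means 0 */
    }
}

/* Receiver: recover the bit by checking whether the file exists. */
int receive_bit(void) {
    return access(FLAG_FILE, F_OK) == 0;
}

int main(void) {
    send_bit(1);
    printf("received: %d\n", receive_bit());
    send_bit(0);
    printf("received: %d\n", receive_bit());
    return 0;
}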

  38. Controls Against Program Threats • There are many ways a program can fail and many ways to turn the underlying faults into security failures. • Here we look at three types of controls: • Developmental. • Operating system. • Administrative.

  39. Developmental Controls • The nature of software development: development requires people who can do all the tasks involved in developing a system. • Specify the system • Design the system • Implement the system • Test the system • Review the system at various stages • Document the system • Manage the system • Maintain the system • Typically no one person does all of these tasks.

  40. Modularity, encapsulation, and information hiding • Modularization: the process of dividing a task into subtasks. • Encapsulation: hides a component's implementation details, but does not necessarily mean complete isolation. • Information hiding: think of a component as a black box, with certain well-defined inputs and outputs and a well-defined function.
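A small C sketch of encapsulation and information hiding: callers see only an opaque type and a handful of functions (the black box), while the representation stays private and can change without affecting them. The counter component is an invented example.

/* counter.h: the black box's interface, with well-defined inputs and outputs. */
struct counter;                            /* opaque: callers never see the fields */
struct counter *counter_new(void);
void counter_increment(struct counter *c);
int  counter_value(const struct counter *c);
void counter_free(struct counter *c);

/* counter.c: the hidden implementation; only this file knows the layout. */
#include <stdlib.h>

struct counter { int value; };

struct counter *counter_new(void)           { return calloc(1, sizeof(struct counter)); }
void counter_increment(struct counter *c)   { c->value++; }
int  counter_value(const struct counter *c) { return c->value; }
void counter_free(struct counter *c)        { free(c); }

A caller includes only counter.h and works with the counter solely through these functions, never touching the fields directly.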

  41. Developmental Controls (cont) • Peer reviews: • Review: the product is presented formally. • Walk-through: the creator leads and controls the discussion. • Inspection: a formal, detailed analysis. • Finding a fault and dealing with it: • by learning how, when, and why errors occur, • by taking action to prevent mistakes, and • by scrutinizing products to find the instances and effects of errors that were missed. • Hazard analysis: a set of systematic techniques intended to expose potentially hazardous system states.

  42. Testing • An activity that homes in on product quality: making the product failure-free or failure-tolerant. • Good design • The design should try to anticipate faults and handle them in ways that minimize disruption and maximize security: • using a philosophy of fault tolerance, • having a consistent policy for handling failures, • capturing the design rationale and history, and • using design patterns.

  43. Developmental Controls (cont) • Prediction: important because we are always dealing with unwanted events that have negative consequences. • Static analysis: examines several aspects of the design, namely the control flow structure, the data flow structure, and the data structure. • Configuration management: the process by which we control changes during development and maintenance. • Lessons from mistakes: document our decisions.

  44. Operating System Controls • Trusted software: software whose code we know has been rigorously developed and analyzed. • Mutual suspicion: each program operates as if other routines in the system were malicious or incorrect. • Access log: a listing of who accessed which computer objects, when, and for how long. • Confinement: a program is strictly limited in what system resources it can access.

  45. Administrative Controls • Standards of program development: help ensure program correctness, quality, and security. • Separation of duties: people are less tempted to do wrong if duties are divided and each person concentrates on his or her own task.
