Security in Computing, Chapter 3: Program Security
Summary created by Kirk Scott
Chapter Outline

  • 3.1 Secure Programs
  • 3.2 Non-Malicious Program Errors
  • 3.3 Viruses and Other Malicious Code
  • 3.4 Targeted Malicious Code
  • 3.5 Controls Against Program Threats
  • 3.6 Summary of Program Threats and Controls

Treatment of topics will be selective.

  • You’re responsible for everything, but I will only be presenting things that I think merit some discussion
  • You should think of this chapter and the following ones as possible sources for presentation topics
3.1 Secure Programs
  • In general, what does it mean for a program to be secure?
  • It supports/has:
  • Confidentiality
  • Integrity
  • Availability

How are confidentiality, integrity, and availability measured?

  • Given the ability to measure, what is a “passing” score?
  • Conformance with formal software specifications?
  • Some match-up with “fitness for purpose”?
  • Some absolute measure?

“Fitness for purpose” is the “winner” of the previous list

  • But it is still undefined
  • What is the value of software, and what constitutes adequate protection?
  • How do you know that it is good enough, i.e., fit for the purpose?
  • (This will remain an unanswered question, but that won’t stop us from forging ahead.)
Code Quality—Finding and Fixing Faults
  • When measuring code quality, whether quality in general or quality for security:
  • Empirically, the more faults found, the more there are yet to be found
  • The number of faults found tends to be a negative, not a positive, indicator of code quality

Find and fix is a bad model generally:

  • You may not be looking for, and therefore not find, serious faults
  • Even if you do, you are condemned to trying to fix the code after the fact
  • After the fact fixes, or patches, tend to have bad characteristics

Patches focus on the immediate problem, ignoring its context and overall meaning

  • There is a tendency to fix in one spot, not everywhere that this fault or type of fault occurs
  • Patches frequently have non-obvious side-effects elsewhere
  • Patches often cause another fault or failure elsewhere
  • Frequently, patching can’t be accomplished without affecting functionality or performance
What’s the Alternative to Find and Fix?
  • Security has to be a concern from the start in software development
  • Security has to be designed into a system
  • Such software won’t go down the road of find and fix
  • The question remains of how to accomplish this

The book presents some terminology for talking about software security

  • This goes back to something familiar to programmers
  • Program bugs are of three general kinds:
  • Misunderstanding the problem
  • Faulty program logic
  • Syntax errors
IEEE Terminology
  • Human error can cause any one or more of these things
  • In IEEE quality terminology, these are known as faults
  • Faults are an internal, developer-oriented view of the design and implementation of security in software

The IEEE terminology also identifies software failures

  • These are departures from required behavior
  • In effect, these are run-time manifestations of faults
  • They may actually be discovered during walk-throughs rather than at run time

Note that specifications as well as implementations can be faulty

  • In particular, specifications may not adequately cover security requirements
  • Therefore, software may “fail” even though it’s in conformance with specifications
  • Failures are an external, user-oriented view of the design and implementation of security in software
Book Terminology
  • The framework presented by the book, beginning in chapter 1, is based on this alternative terminology:
  • Vulnerability: This is defined as a weakness in a system that can be exploited for harm.
  • This seems roughly analogous to a fault.
  • It is more general than the three types of programming errors just listed
  • However, it’s more specific to security
  • It concerns something that is internal to the system

Program Security Flaw: This is defined as inappropriate program behavior caused by a vulnerability.

  • This seems roughly analogous to a failure.
  • However, this inappropriate behavior in and of itself may not constitute a security breach.
  • It is something that could be exploited.
  • It concerns something that could be evident or taken advantage of externally.
The Interplay between Internal and External
  • Both the internal and external perspectives are important
  • Evident behavior problems give a sign that something inside has to be fixed
  • However, some faults may cause bad behavior which isn’t obvious, rarely occurs, isn’t noticed, or isn’t recognized to be bad
  • The developer has to foresee things on the internal side as well as react to things on the external side
Classification of Faults/Vulnerabilities
  • Intentional: A bad actor may intentionally introduce faulty code into a software system
  • Unintentional: More commonly, developers write problematic code unintentionally
  • The code has a security vulnerability and attackers find a way of taking advantage of it
Challenges to Writing Secure Code
  • The size and complexity of code is a challenge
  • Size alone increases the number of possible points of vulnerability
  • The interaction of multiple pieces of code leads to many more possible vulnerabilities
  • Specifications are focused on functional requirements:
  • What the code is supposed to do

It is essentially impossible to list and test all of the things that code should not allow.

  • This leaves lots of room both for honest mistakes and bad actors

Changing technology is also both a boon and a bane

  • The battle of keeping up is no less difficult in security than in other areas of computing
  • Time is spent putting out today’s fires with today’s technologies while tomorrow’s are developing
  • On the other hand, some of tomorrow’s technologies will help with security as well as being sources of new concerns
Six Kinds of Unintentional Flaws
  • Intentionally introduced malicious code will be covered later.
  • Here is a classification of 6 broad categories of unintentional flaws in software security:
  • 1. Identification and authorization errors (hopefully self-explanatory)
  • 2. Validation errors—incomplete or inconsistent checks of permissions

3. Domain errors—errors in controlling access to data

  • 4. Boundary condition errors—errors on the first or last case in software
  • 5. Serialization and aliasing errors—errors in program flow order
  • 6. General logic errors—any other exploitable problem in the logic of software design
3.2 Non-Malicious Program Errors
  • There are three broad classes of non-malicious errors that have security effects:
  • 1. Buffer overflows
  • 2. Incomplete mediation
  • 3. Time-of-check to time-of-use errors
Buffer Overflows
  • The simple idea of an overflow can be illustrated with an out-of-bounds array access
  • In general, in a language like C, the following is possible:
  • char sample[10];
  • sample[10] = 'B';  /* valid indices are 0 through 9 */
  • Similar undesirable things can be even more easily and less obviously accomplished when using pointers (addresses) to access memory
Cases to Consider
  • 1. The array/buffer is in user space.
  • A. The out of bounds access only steps on user space.
  • It may or may not trash user data/code, causing problems for that process.
  • B. The out-of-bounds position (index 10) could fall outside of the process’s allocation.
  • The O/S should kill the process for violating memory restrictions.

2. The array/buffer is in system space.

  • Suppose buffer input takes this form:
  • while (more_to_read)
  • {
  •     sample[i] = getNextChar();
  •     i++;
  • }

There’s no natural boundary on what the user might submit into the buffer.

  • The input could end up trashing/replacing data/code in the system memory space.
  • This is a big vulnerability.
  • The book outlines two common ways that attackers can take advantage of it.
Attack 1: On the System Code
  • Given knowledge of the relative position of the buffer and system code in memory
  • The buffer is overflowed to replace valid system code with something else
  • A primitive attack would just kill the system code, causing a system crash

A sophisticated attack would replace valid system code with altered system code

  • The altered code may consist of correct code with additions or modifications
  • The modifications could have any effect desired by the attacker, since they will run as system code

The classic version of this attack would modify the system code so that it granted higher level (administrator) privileges to a user process

  • Game over—the attacker has just succeeded in completely hijacking the system and at this point can do anything else desired
Attack 2: On the Stack
  • Given knowledge of the relative position of the buffer and the system stack
  • The buffer is overflowed to replace valid values in the stack with something else
  • Again, a primitive attack would just cause a system crash

A more sophisticated attack would change either the calling address or the return address of one of the procedure calls on the stack.

  • It’s also possible that false code would be loaded
  • Changing the addresses changes the execution path
  • This makes it possible to run false code under system privileges

The book refers to a paper giving details on this kind of attack

  • If you had a closed system, you could experiment with things like this
  • The book says a day or two’s worth of analysis would be sufficient to craft such an attack

Do not try anything like this over the Web unless you have an unrequited desire to share a same-sex room-mate in a federal facility

  • The above comment explains why this course is limited in detail and not so much fun
  • All of the fun stuff is illegal
  • There are plenty of resources on the Internet for the curious, but “legitimate” sources, like textbooks, have to be cautious in what they tell
A General Illustration of the Idea
  • Parameter passing on the Web illustrates buffer overflows
  • Web servers accept parameter lists in URL format
  • The different parameters are parsed and copied into their respective buffers/variables
  • A user can cause an overflow if the receiver wasn’t coded to prevent it.

Essentially, buffer overflows have existed from the dawn of programming

  • In the good old, innocent days they were just an obscure nuisance known only to programmers
  • In the evil present, they are much more.
  • They form the basis for attacks whose goals are as varied as the attackers.
Incomplete Mediation
  • Technically, incomplete mediation means that data is exposed somewhere in the pathway between submission and acceptance
  • The ultimate problem is the successful submission and acceptance of bad data
  • The cause of the problem is the break, or lack of security in the pathway

The book uses the same kind of scenario used to illustrate buffer overflow

  • Suppose a form in a browser takes in dates and phone numbers
  • These are forwarded to a Web server in the form of a URL

The developer may put data validation checks into the client side code

  • However, the URL can be edited or a fake URL can be generated and forwarded to the server
  • This thwarts the validation checks and any security they were supposed to provide
What Can Go Wrong?
  • If the developer put the validation checks into the browser code, most likely the server code doesn’t contain checks.
  • Parameters of the wrong data type or with out of range values can have bad effects
  • They may cause the server code to generate bad results
  • They may also cause the server code to crash
An Example from the Book
  • The book’s example shows a more insidious kind of problem
  • A company built an e-commerce site where the code on the browser side showed the customer the price
  • That code also forwarded the price back to the server for processing

The code was exposed in a URL and could be edited

  • “Customers” (a.k.a., thieves) could have edited the price before submitting the online purchase
  • The obvious solution was to use the secure price on the server side and then show the customer the result

There are several things to keep in mind in situations like this:

  • Is there a way of doing complete mediation?
  • I.e., can data/parameters be protected when they are “in the pathway”?
  • If not, can complete validation checking be done in the receiving code?
  • In light of the example, you might also ask, is there a way of keeping all of the vital data and code limited to the server side where it is simply inaccessible?
Time-of-Check to Time-of-Use Errors
  • This has the weird acronym of TOCTTOU

In a certain sense, TOCTTOU problems are just a special kind of mediation problem

  • They arise in a communication or exchange between two parties
  • By definition, the exchange takes place sequentially, over the course of time
  • If the exchange involves the granting of access permission, for example, security problems can result
  • Suppose data file access requests are submitted by a requester to a granter in this form:
  • Requester id + file id
  • Suppose that the access management system appends an approval indicator, granting access, and the request is stored for future servicing

The key question is where it’s stored

  • Is it stored in a secure, system-managed queue?
  • If so, no problem should result
  • Or is it given back to, or stored in user space?
  • If so, then it is exposed and the user may edit it
  • It would be possible to change the requester id, the file id, or both between the time of check and the time of use

This is identified as a timing, or synchronization, problem

  • This is true
  • One solution to the problem would be to disallow access by the client before the server has run to completion on the request

This is also a mediation problem

  • Something is stored insecurely “in between”, where the user can modify it
  • One obvious solution is to move storage of all requests into system space
  • It would also be possible to apply a checksum so that modified requests could be detected
Building Blocks
  • A complex attack might rely on more than one flaw
  • The book gives this sample outline:
  • 1. Use buffer overflow to stop a system in its tracks
  • 2. Use a time-of-check to time-of-use error to add a new user id
  • 3. Use incomplete mediation to obtain privileged status for the new id
  • Etc…
3.3 Viruses and Other Malicious Code
  • Kinds of malicious code
  • Note that this list classifies things in various ways
  • A given piece of code might be more than one thing on the list.
  • Note also that the authors eventually just settle on the term “virus” as the general descriptor for any malicious code that might infect a system

1. Virus:

  • This is a modification of or attachment to an executable program
  • It runs when the program runs
  • It spreads itself by modifying other executables

2. Trojan Horse:

  • This is code with a primary (useful) function plus a non-obvious malicious side effect
  • 3. Logic Bomb:
  • This is malicious code triggered by the existence of a certain condition
  • 4. Time Bomb:
  • This is malicious code triggered by a specific time or date

5. Trapdoor or Back Door:

  • This code allows non-standard or unauthorized access to a program, resource, etc.
  • It may be an intentional maintenance feature or it may be an introduced flaw

6. Worm:

  • This is a program that propagates copies of itself through a network.
  • Unlike viruses, these can be stand-alone copies as opposed to attachments or modifications of other executables
  • In the general discussions that follow, many of the characteristics ascribed to “viruses” are actually those of worms.

7. Rabbit

  • This is a virus or worm that reproduces without bound.
  • It consumes resources:
  • CPU cycles
  • Memory
  • Secondary storage
How Viruses Attach
  • Appended (pre-pended) viruses:
  • Virus code is inserted at the beginning of the infected executable
  • After the virus code runs, the host runs, unchanged
  • The virus writer doesn’t have to know anything about the host
  • As long as the host runs, the user won’t immediately know that there is an infection

Surrounding viruses:

  • Part of the virus code is put at the beginning of the host, part at the end
  • A virus like this typically targets the host
  • The book gives the example of infecting a system program that generates file listings

The goal is to give a false listing for the infected program itself

  • An infected version of the program would be longer than an uninfected version
  • By intercepting the listing and reporting the uninfected size, the virus could conceal its presence
  • Notice that in this example, the virus is simply self-serving
  • To be “interesting” it would have to have other bad effects

Integrated viruses and replacements:

  • These are the most elaborate and extreme cases
  • The virus writer knows the target in detail and hijacks all or part of it
  • The goal may be to duplicate the original while running new code
  • More likely, no duplication is done, and it’s purely malicious code
Virus (Worm) Propagation
  • Any means of file transmission can propagate malicious code:
  • 1. Distribution of media with infected programs
  • 2. Executable email attachments
  • 3. “Document viruses”:
  • Keep in mind that modern “data” files frequently have formulas, formatting, links, etc. in them
  • Opening the document executes the code, so these are also liable to infection
An Alternative to Attachment/Modification
  • Malicious code can also be caused to run by modifying the addresses in the file system.
  • When target program T is requested, the modified address points to the Virus code, V.
  • A similar thing can be done with resident code
  • If an attacker can modify the interrupt vector table (IVT), havoc can be wreaked
  • Interrupt handling can be directed to another memory address where fake code has been loaded
Desirable Properties from the Virus Writer’s Point of View
  • Easy to write
  • Machine and O/S independent
  • Spreads infection widely
  • Hard to detect
  • Hard to destroy or deactivate
  • Able to re-infect home system/program
Homes for Viruses
  • The “original”—the boot sector
  • This is somewhat analogous to a pre-pended virus in a program
  • The virus code (or a jump) is placed at offset 0 in the boot sector
  • After running, control is transferred to the bootstrap program

This is a great location for a virus

  • The O/S typically doesn’t show system locations in listings for users
  • In essence, the infection begins before any detection tool can come into operation
  • From the boot sector, the code may be written to migrate to other programs in the system

Attaching to a resident program

  • This is also a powerful thing compared to being attached to just any program
  • A resident program will tend to be executed early and often
  • By virtue of the fact that it is resident, it is likely to be system code itself
  • It can readily spread the infection to other programs when they run

In a networked environment the need to have malicious code execute more than once receives less emphasis

  • Getting the code to run once and then propagate through email is sufficient to spread the infection widely
Virus Signatures
  • This is a simple idea which explains what virus scanners are doing
  • Viruses contain lines of code which are unique to them
  • Scanning consists of searching memory and secondary storage, byte-by-byte, looking for programs/files containing these lines of code
Storage Patterns
  • Scanners can also look for telltale signs of modification
  • 1. Standard (system) files which are not the right size
  • Note that in theory, assuming a correct file size is known, a virus is always detectable
  • Either the file size is off
  • Or the functionality of the program is off

2. Another pattern is the presence of a jump at the beginning of code

  • 3. In a more involved protection system, checksums would be associated with executables, and a checksum that was off would indicate a file that had been modified
Polymorphic Viruses
  • Viruses can be written so that they reorganize themselves when they propagate
  • For example, they might break themselves into pieces connected by jumps through the host code
  • This may or may not thwart a scanner
  • In the long run, traces of the virus signature will remain

Viruses have also been encrypted

  • Along with the appended, encrypted code, there has to be a decryption routine and a key
  • In this case, the decryption routine itself becomes the signature indicating that the file is infected
Execution Patterns
  • In general, infected programs simply do what valid programs do
  • Compute, read, write, execute, etc.
  • There is no single telltale sign that a given executable is doing something malicious.
Comparing Program Functionality
  • Unfortunately, trying to decide whether two pieces of code are equivalent (do the same thing) is a theoretically undecidable problem.
  • There may not be a fixed byte size for a suite of system software.
  • Comparing it with a known, good copy, alone won’t tell you whether it’s infected.
  • On the other hand, knowing that an infection can occur, it can be checked for and detected
Next Topics
  • Preventing Infection
  • Deactivating/Removing (Quarantine)
  • Note that removal depends on the ability to detect and remove faster than the infection grows
  • The book doesn’t go into this in detail
  • Removing infected files or editing infected lines of code is a clear approach
Prevention: Safe Computing Communities
  • Secure computing tends to result from like-minded users aggregating (or segregating) into communities
  • This is not so easy in a fully interconnected Web environment, but general good practices can be identified

1. Use software from reliable commercial vendors

  • 2. Test software from a less reliable source on an isolated machine
  • 3. Only open “safe” attachments (define as you wish)
  • 4. Make a recoverable system image
  • 5. Make backup copies of all files that could be corrupted
  • 6. Use security software and keep it updated

Some general truths about viruses:

  • 1. Viruses can affect any device with software—none are exempt
  • 2. Viruses can potentially modify any file
  • 3. Viruses can do this work from any file that has any kind of executable component that is activated (run)
  • 4. Viruses can spread in any way that computer files are transmitted

5. TSR (terminate-and-stay-resident) viruses exist, but in general, viruses can persist through system shutdowns, simply reloading on restart

  • 6. Viruses cannot literally infect hardware, but they can infect device controllers and firmware
  • 7. It is possible to write a virus without bad effects; however, it is still undesirable to have uninvited software running on a system burning up CPU cycles
Examples of Malicious Code
  • 1. The Brain virus
  • 2. The Internet Worm
  • 3. Code Red
  • 4. Web Bugs

I. The Brain Virus (1986)

  • This is the classic virus
  • It was created quite a long time ago
  • It illustrates several of the characteristics of viruses mentioned already
  • It is basically harmless
  • It apparently served as a model for other virus writers
What It Does
  • The Brain virus loads its code into upper memory on a PC
  • It executes a system call to relocate the upper memory boundary lower so its code will remain undisturbed, like system code
  • It changes the address value for interrupt 19, disk read, to run the virus code instead of the legitimate interrupt handling code

It changes the address value for interrupt 6, unused, to the address of the legitimate disk read interrupt handling code

  • In this way it intercepts disk reads and has the ability to trigger a legitimate read when necessary
How It Spreads
  • Intercepting disk reads is related to how it hides and spreads
  • The running code goes into upper memory
  • The virus source code is stored in the boot sector and five other sectors of the disk drive
  • These sectors are chained together, leading to a sixth sector, which contains the legitimate boot program

The virus marks the sectors it uses “faulty” so that the O/S doesn’t attempt to use them

  • Special system calls allow the virus to access the “faulty” sectors
  • If the virus intercepts a disk read to these sectors, it passes back an image of what should be there, namely the boot program and the legitimate contents of the sectors which follow it
  • This conceals the presence of the virus

If other disks are inserted into drives on the system, the virus intercepts reads to them

  • It checks those drives to see if its signature is in the boot sector
  • If not, and if it’s a potentially bootable disk, it infects that disk by placing a copy of its code in the boot sector
What Was Learned?
  • Ultimately, there’s no cure for viruses
  • You can scan for known viruses and try to eliminate them
  • New viruses with unknown signatures will continue to use techniques similar to those of the Brain virus and more recent viruses to get a foothold and infect systems
II. The Internet Worm (1988)

How It Worked
  • It had three components:
  • 1. Determine where to spread to
  • 2. Spread
  • 3. Remain undiscovered
  • Overall, it was relatively sophisticated
  • The components had subcomponents
1. Determining Where to Spread To
  • This consisted of three parts:
  • i. Find user accounts to exploit
  • ii. Use a buffer overflow to execute illegitimate instructions
  • iii. Use a trapdoor to execute commands
  • All three of these things were based on known security vulnerabilities in Unix at the time
Finding Accounts to Exploit
  • The password file in Unix was encrypted
  • However, ciphertext in the file was open to inspection
  • The Worm encrypted common passwords and dictionary entries and looked for matches in the encrypted password file
  • Finding a match allowed logging in to the corresponding account
Using a Buffer Overflow
  • A program named fingerd, for finding information about users (fingering users) was known to be subject to overflow
  • The Worm used overflow to place code into the return address stack
  • This code made it possible to initiate a remote shell, a connection to another Unix machine on the Internet
Using a Trapdoor
  • The Unix mail program, sendmail, had a debugging mode which was effectively an intentional trapdoor
  • In debugging mode, a general Unix command could be submitted to it rather than an email destination address
  • It would execute the command (rather than processing an address)
2. Spread the Infection
  • The general plan is clear: Get a login, connect to another machine, execute commands
  • The Worm was divided into a bootstrap loader and the code itself
  • Using the flaws above, the loader would be sent to a remote machine and run there
  • It would retrieve the rest of the virus code
Defensive Security in the Worm
  • Why use a loader?
  • A failed transmission wouldn’t leave a copy of the worm code sitting on the target machine for inspection by an administrator
  • Furthermore, the loading process was password protected

The loader had to send a password back to the source machine before the virus code would be transmitted

  • This was apparently intended to prevent administrators from devising a way to acquire the source code without getting an infection
3. Remain Undiscovered
  • On an incomplete or failed load the bootstrap loader deleted everything loaded so far and terminated itself
  • On a successful load, the code went into memory and was encrypted
  • All evidence in secondary storage was deleted
  • The Worm periodically changed its process name and id so that it wouldn’t betray itself by its continuous presence and accumulating usage statistics
What Effect It Had
  • The claim was made that the Worm was only supposed to infect a given machine once
  • Whether by mistake or design, it could infect a machine multiple times
  • As a result, it burnt up resources with no effective limit

Machines on the Internet went down

  • Administrators disconnected from the system to avoid spreading the Worm
  • Others disconnected to avoid contracting it
  • There was a serious disruption in computing and communications
  • It took a lot of effort to fix the problems
What Was Learned?
  • It was a wake-up call for the nascent Internet community
  • What had been an open environment with known vulnerabilities couldn’t continue in that way
  • Still, similar vulnerabilities continue to exist

The Worm was relatively benign in this sense:

  • It used stolen passwords once, for its purposes, but it didn’t steal them long-term for any purpose other than propagation
  • Although it disabled a lot of machines and the network, it didn’t destroy data or do permanent damage

People became aware that a truly damaging attack could be mounted

  • The Worm was the impetus for creating CERT, the Computer Emergency Response Team, and other centers devoted to tracking vulnerabilities and dealing with Internet and other security problems when they occur
  • There’s now at least an infrastructure for addressing Internet security
III. Code Red (2001)
  • This is a grab bag of attacks rolled into one
  • It was essentially a worm with lots of enhancements or features
  • It infected a lot of computers very quickly
  • Some have speculated that it was a test run for cyber warfare
What It Did
  • It affected machines running MS Internet Information Server
  • It became memory resident by overflowing a buffer in the idq.dll dynamically linked library
  • It propagated itself by checking IP addresses on port 80 to find machines with vulnerable Web servers
What Effect It Had
  • In various versions the code did the following:
  • 1. It defaced Web sites
  • 2. It launched a distributed denial of service attack on the IP address of www.whitehouse.gov
  • 3. Incidentally, it had elements of a time bomb, because the actions it took were based on time

4. It opened a trapdoor by copying code into several server directories allowing command execution from those locations

  • 5. It replaced Internet Explorer with a copy containing a Trojan Horse, allowing a remote party to execute commands on a server
  • 6. It made system changes repeatedly, thwarting administrator attempts to undo what it had done
  • 7. It spawned multiple threads to search for other victims and propagate itself
What Was Learned?
  • Take your pick:
    • The same lessons that should already have been learned
    • Or: Nothing, just like last time around…
  • The overall lesson is that vulnerable software keeps being produced and vulnerable people keep installing it and using it

Even when vulnerabilities are discovered, it is difficult to devote the time, energy, and resources to installing patches

  • As noted before, there are sound reasons for being suspicious of the utility of patches, but the alternative is not pretty
  • This cycle of vulnerability is likely to continue
  • Notice that Code Red, like the Internet Worm, did not do anything acutely harmful
  • However, it could have
IV. Web Bugs (Ongoing)
  • This is a term used to describe one of the tools for tracking user activities on the Web
  • I’ve ranted about it before, and most students seem unmoved
  • I am not convinced that surveillance of my activities on the Web has been, is, or always will be benign
  • Everyone has their own experiences and can draw their own conclusions
Trapdoors
  • Common sources of trapdoors:
  • Debugging code that allows insertion of commands or transactions into systems
  • This is an accepted debugging practice
  • This was the kind of vulnerability of the sendmail program that the Internet Worm exploited

Lack of error checking/input validation or poorly implemented checking can also lead to problems

  • Unexpected input values may have unexpected (or expected) results
Among the reasons to hate the case/switch statement
  • The fingerd program that the Internet Worm exploited had this kind of characteristic:
  • It would check for various conditions on input, and if it didn’t find them, it would fall out of the case statement at the bottom
  • There was no final “error condition” case
  • This was its vulnerability
  • This kind of problem with case statements also caused the meltdown of the AT&T long-distance switching system in 1990
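The fall-through flaw can be sketched in a few lines of Python (the real fingerd code was C, and the command names here are made up for illustration): with no final error case, unexpected input silently does nothing.

```python
# Sketch of the fingerd-style flaw: a dispatcher with no final
# "error condition" branch lets unexpected input fall out the bottom.

def dispatch_unsafe(command):
    result = None
    if command == "list":
        result = "listing users"
    elif command == "query":
        result = "querying user"
    # No final else: anything unexpected is silently ignored.
    return result

def dispatch_safe(command):
    if command == "list":
        return "listing users"
    elif command == "query":
        return "querying user"
    else:
        # The missing error case: reject everything unexpected.
        raise ValueError(f"unexpected command: {command!r}")

print(dispatch_unsafe("attack"))  # None -- silently ignored, the flaw
```

The same discipline applies to a C `switch`: always include a `default:` branch that treats unrecognized input as an error.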

Non-standard input causing “interesting” effects goes all the way to the machine code level

  • Some systems may have unused or undocumented operation codes that have some particular effect when executed
  • These may be left over from development or just some sort of coincidental oversight

The bottom line is that trapdoors may be intentionally included in software for security or other testing

  • They should be removed from production code
  • Or they should be documented and means should be included for controlling access to them: logins, passwords, etc.
  • Trapdoors may also be included intentionally by a malicious programmer
  • Whatever the reason for their existence, they are fruitful places to mount an attack
Salami Attack
  • The classic example of this:
  • Financial software that has been written to accumulate round-off errors when calculating interest on many accounts, and add that to one account
  • This could also be done with fees or larger sums of money
  • Clearly, this kind of attack is an inside job
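A toy sketch of the mechanic, using made-up balances and a hypothetical interest rate: each account's interest is rounded down to the cent, and the shaved fractions accumulate in one attacker-controlled total.

```python
from decimal import Decimal, ROUND_DOWN

# Toy salami attack: post rounded-down interest to each customer
# and quietly keep the round-off. Rates and balances are invented.
RATE = Decimal("0.0137")
CENT = Decimal("0.01")

balances = [Decimal("100.00") + Decimal(i) * Decimal("0.37")
            for i in range(10000)]
attacker = Decimal("0")

for balance in balances:
    exact = balance * RATE                      # exact interest owed
    posted = exact.quantize(CENT, ROUND_DOWN)   # what the customer sees
    attacker += exact - posted                  # the shaved slice

# Each individual skim is under one cent, but over 10,000 accounts
# (and many interest periods) the slices add up.
print(attacker)
```

No single customer statement is off by as much as a cent, which is why the attack hides inside the error tolerance the code already accepts.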
Why Salami Attacks Persist
  • In situations dealing with money, a certain degree of error is expected
  • Code is written to accept and handle a certain degree of error
  • The code overall is probably too large to comprehensively audit for all possible salami attacks

Just like with regular embezzlement, the first clue of a problem might be the behavior of the employee involved (mink coats and Mercedes even on a programmer’s salary…)

  • When suspicion is aroused, then a targeted financial/software audit can be launched and find the salami attack if one exists
Rootkits and the Sony XCP (Became widely known in 2005)
  • A rootkit, in short, is a piece of code written to assume root, or administrator, capabilities
  • Like a well-written virus, it includes code to prevent its detection
  • It does this by intercepting commands, for example those that show file listings, and returning false listings that omit its own files

Sony included rootkits in their music CD’s

  • These rootkits had these desirable characteristics from Sony’s point of view:
  • They allowed the CD’s to play
  • They intercepted commands to copy and disallowed them.
  • Strike 1: A vendor selling you a “product” which contains features and capabilities that you aren’t told about.

This is how the rootkit hid itself:

  • It would not show any file that started with the characters $sys$ in a file listing
  • This meant that any malicious code (more malicious than Sony’s?) would be hidden if it was given a name starting with these characters
  • Strike 2: The undocumented features that Sony “gave” its customers were a giant security hole

A computer security expert named Mark Russinovich developed a program that compared the output of a file listing program with the information about a system found by issuing system calls directly

  • To his surprise, he discovered the XCP rootkit on his machine and traced it to a Sony music CD
  • The rootkit became public knowledge
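Russinovich's cross-view technique can be sketched as a simple set difference (the file names here are invented; the `$sys$` prefix is the one XCP actually used): compare what the hooked listing API reports against the raw on-disk truth, and whatever is missing was being hidden.

```python
# Simplified cross-view detection: diff a (simulated) rootkit-hooked
# directory listing against the raw listing obtained another way.

RAW_LISTING = ["report.doc", "$sys$driver.sys", "notes.txt", "$sys$cache.dat"]

def hooked_listing(entries):
    # What the rootkit's intercepted API would return: $sys$ files vanish.
    return [name for name in entries if not name.startswith("$sys$")]

def cross_view_diff(raw, hooked):
    # Anything in the raw view but not the hooked view was concealed.
    return sorted(set(raw) - set(hooked))

hidden = cross_view_diff(RAW_LISTING, hooked_listing(RAW_LISTING))
print(hidden)  # ['$sys$cache.dat', '$sys$driver.sys']
```

The hard part in practice is obtaining the "raw" view on a compromised machine, since the rootkit may hook low-level calls too.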

In order to avoid a backlash, Sony made an uninstaller available through a Web page

  • Unfortunately (or, if you're really paranoid, intentionally), the uninstaller's code setup essentially allowed any code to be run on a system that used it
  • Furthermore, this dangerous uninstallation software remained on a system after the uninstallation was complete

Strike 3: From a security standpoint, the cure might have been worse than the disease

  • This illustrates the perils of patching
  • Sony was too busy focusing on removing the first problem to see the defects of the solution they were offering
  • Nobody knows how many machines were affected by both the disease and the cure, but the number had to be large

Are you allowed a strike 4?

  • Maybe Sony was the pitcher, not the batter, and this is the fourth ball, not the fourth strike:
  • Sony developed XCP in consultation with an anti-virus software vendor to help make sure it would remain undetected
  • Essentially, this was a large-scale industrial conspiracy where the customers were the unwitting victims
Privilege Escalation
  • Summary: Malicious code would like to run as the administrator
  • Example: Symantec Live Update (2006)
  • The anti-virus live update ran with administrator privileges
  • During the course of running it called programs in the system path

Replace one of those programs with a malicious version

  • Or change the path statement to include a directory containing a malicious version
  • The malicious code, running on behalf of the code that called it, would have its level of privileges
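The search-path half of this attack can be sketched on a POSIX system (the `helper` program and directory names are made up): if the privileged caller looks a program up by bare name, whoever controls an earlier PATH entry controls what actually runs.

```python
import os
import shutil
import stat
import tempfile

# Sketch: plant a malicious "helper" in an attacker-writable directory
# and prepend that directory to the search path. POSIX assumed.

evil_dir = tempfile.mkdtemp()
evil = os.path.join(evil_dir, "helper")
with open(evil, "w") as f:
    f.write("#!/bin/sh\necho pwned\n")     # stand-in for malicious code
os.chmod(evil, os.stat(evil).st_mode | stat.S_IEXEC)

# The attacker's directory comes first in the lookup path...
lookup_path = evil_dir + os.pathsep + os.environ.get("PATH", "")

# ...so a privileged program that runs "helper" by bare name
# resolves to -- and would execute -- the planted file.
resolved = shutil.which("helper", path=lookup_path)
print(resolved == evil)  # True
```

The usual defenses are to invoke helpers by absolute path and to sanitize the environment before running with elevated privileges.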
Interface Illusions
  • Summary: This is a fancy way of saying that one thing represents itself as another
  • In other contexts it might be referred to as spoofing
  • Example: A fake Web page, a Web page with misleading or fake controls, with “unexpected” and possibly not obvious effects

Using digital tools it’s not hard to create one thing that looks exactly like another and appears to have the same functionality, but with different effects

  • This is a very broad problem
  • How is the user to know what’s real?
  • To what extent are we dependent on trust?
Keystroke Logging
  • Summary: A single character takes a long path from keystroke to application program to output or storage
  • It can be intercepted and saved or changed at many points along the way
  • Example: A virus which becomes memory resident and is triggered by the interrupt generated by the keyboard
  • It tries to detect password prompts and saves the responses to them or sends them out to another party
Man-in-the-Middle Attacks
  • Summary: Keystroke logging illustrates the idea in software form
  • It also came up in the discussion of intercepting (encrypted) messages
  • Any time interception is possible, confidentiality and integrity are breached
  • Modification and fabrication can follow

Note the similarity and difference between interface illusions and persons of indeterminate gender in the middle

  • In both cases there is an intruder
  • In one case, the intruder is out in the open masquerading as someone else
  • In the other case, the intruder remains unknown, skulking in the shadows
Timing Attacks
  • Summary: An attacker can infer knowledge about a system based on how long it takes to do things
  • This is sort of like signals intelligence, but one step removed
  • Example: The book illustrates with a cryptanalytic attack on RSA

The technical details are fuzzy, but the thumbnail sketch provides a nice illustration of the idea

  • You are condemned to guessing the key
  • For guesses less than the key, the computation takes longer as the guessed value increases
  • Once you pass the key, the computation time drops

Don’t do brute force guessing of every value

  • Try every nth value
  • The computation time will rise and then drop
  • This means you’ve found the interval where the actual value has to be
  • Now try every mth value in that range, finding a new, smaller interval
  • Repeat until you’ve hit the key
  • This is reminiscent of numerical analysis techniques of approximation.
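The interval-narrowing search above can be modeled in a few lines. The "server" here is simulated: its time grows with the guess until the guess passes the secret, then drops. A real timing attack would have to average many noisy measurements; none of that is modeled.

```python
# Toy model of the coarse-to-fine timing attack described above.

SECRET_KEY = 73911          # what the attacker is trying to recover

def observed_time(guess):
    # Simulated timing oracle, not a real measurement: time rises with
    # the guess until the guess exceeds the key, then drops.
    return guess if guess <= SECRET_KEY else 1

def narrow(lo, hi, step):
    # Walk every step-th value; the first drop in time brackets the key.
    prev, t_prev = lo, observed_time(lo)
    for g in range(lo + step, hi + step, step):
        t = observed_time(g)
        if t < t_prev:
            return prev, g          # key lies in [prev, g)
        prev, t_prev = g, t
    return prev, hi

lo, hi, step = 0, 100000, 10000
while step >= 1:
    lo, hi = narrow(lo, hi, step)   # shrink the interval 10x per pass
    step //= 10

print(lo)  # recovered key: 73911
```

Instead of ~100,000 brute-force guesses, the search needs only a handful of passes of ten probes each, which is the numerical-analysis flavor the slide mentions.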
Covert Channels
  • Suppose a programmer is writing code that will have access to confidential data
  • After the code is in production, the programmer shouldn’t have access
  • Access can be obtained by making the program a Trojan horse or including a trapdoor
The Channel
  • The obvious way to leak data would be to simply generate it and route it to the programmer
  • This should be detected by security audits or other means
  • There are other ways of transmitting the data without being obvious
  • One example would be by encoding data by modifying the format of a standard report

Another approach: A storage channel

  • Example: A file lock channel
  • Let a spy program be written to go with the corrupted service program
  • Have the corrupted program lock and unlock a given file at given intervals of time

Let the spy program try to lock the same file in the same intervals

  • The spy program will be able to detect whether the file has been locked
  • Locked might signal 1 and unlocked might signal 0
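The file-lock channel can be simulated in one process (a real implementation would use actual OS file locks and a shared clock; the generator `yield` here stands in for waiting out each time slot): the service program locks or releases per bit, and the spy probes the lock state.

```python
# Single-process simulation of the storage covert channel above.

class SharedFile:
    """Stand-in for a file both programs can attempt to lock."""
    def __init__(self):
        self.locked = False

def service_send(shared, bits):
    # Corrupted service program: lock = 1, unlocked = 0, one bit per slot.
    for bit in bits:
        shared.locked = (bit == "1")
        yield                        # "wait out the time slot"

def spy_receive(shared):
    # The spy's own lock attempt fails exactly when the file is locked.
    return "1" if shared.locked else "0"

shared = SharedFile()
secret = "1011001"
received = ""
for _ in service_send(shared, secret):
    received += spy_receive(shared)

print(received == secret)  # True: one bit leaked per slot
```

Note that the spy never reads the confidential data itself; it only observes a shared resource both processes legitimately may touch, which is what makes the channel covert.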

Another approach: A timing channel

  • Example: Use of CPU cycles in a time interval
  • The idea is analogous to the foregoing
  • The corrupted program leaks 1’s and 0’s according to whether it is using the CPU during a given time slice
  • The spy can tell this by requesting the CPU during the same time slice

The CPU is an unavoidable shared resource unless the spy simply isn’t allowed to run on the secured system where the service program runs

  • There are likely to be multiple processes on a single system, not just the service program and the spy
  • Still, by lengthening the time intervals to accommodate time slices for other processes, the two could still be made to cooperate and leak
Shared Resources
  • In both examples, the corrupted program and the spy program need access to a common clock
  • This is not hard
  • They also need access to a shared resource
  • In effect, this is where the leak occurs

In one case the shared resource is a file

  • In the other, the shared resource is the CPU itself
  • You can identify potential covert channels by creating a matrix of resources and processes

The spy process doesn’t have read (or write) access to the confidential resource

  • However, the service process does
  • The service process can read and signal the data through the shared resource
  • The idea is that depending on the values of the 3 boxes in the matrix complementary to the empty box, a channel may exist allowing a breach that would be indicated in the empty box

In a more elaborate setup, it would even be possible for the spy to modify the confidential data

  • Since the spy can modify the shared resource, it can signal the service program through the covert channel
  • All that’s missing in this example is modify access to the confidential resource by the service program
The Information Flow Method
  • It is possible to trace what variables influence other variables, whether directly or indirectly, within the logic of a single program
  • This allows an in-depth analysis of which parts of a program write to/read from other parts
  • This may be useful in determining how or what a program might leak
  • Compilers trace code in this way, so this analysis can be automated
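The kind of dependence such analysis traces includes *implicit* flows, where a variable is influenced through a branch rather than a direct assignment. A minimal (hypothetical) example:

```python
# "public" ends up carrying one bit of "secret_bit" even though the
# secret is never assigned to it directly -- an implicit flow through
# the branch, exactly what information-flow analysis must catch.

def leaky(secret_bit, public=0):
    if secret_bit:        # control flow depends on confidential data...
        public = 1        # ...so this assignment is tainted too
    return public

print(leaky(1), leaky(0))  # 1 0 -- the secret bit is fully recoverable
```

An analyzer would mark the return value as influenced by `secret_bit` even though no statement copies the secret, which is why direct-assignment auditing alone is insufficient.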
Covert Channels Could Be Anywhere
  • Potential covert channels exist in systems from the hardware to the user application level
  • In open environments, common utilities, such as print spoolers/drivers could be used as covert channels
  • This means that covert channels are hard to guard against
  • The potential threat is ubiquitous, assuming bad actor programmers

Another factor to consider is their potential speed

  • They might not be incredibly fast, but even 1 bit per millisecond (on a time slice, for example) adds up
  • They don’t rely on a single “smash and grab”
  • They can leak data over long periods of time
  • On the other hand, the authors claim that there is no documented case of a covert channel in the literature
In a Similar Vein
  • Steganography, hiding data inside picture or other media files, is sort of a crossover between cryptography and covert channels
  • According to the principles of cryptology, having an encrypted file that is longer or contains more information than the plaintext is not necessarily a good idea
  • Steganography essentially consists of just hiding information

On the other hand, in a modern, high-speed, networked environment the places where data could be hidden are almost limitless

  • Checking everything to see whether it contained “other” information would be an overwhelming task
  • An email scanner is doing this, looking for virus signatures

The combined moral of the story seems to be this:

  • If you have some idea of what you’re looking for, you have a chance
  • If you have no idea what you’re looking for, you don’t have a chance
  • Once again, the application of technology relies on outside knowledge before it can be useful
3.5 Controls Against Program Threats
  • Categories:
  • Developmental controls
  • O/S
  • Administrative
Developmental Controls
  • (A summary of software engineering principles)
  • Development tasks:
  • Specify
  • Design
  • Implement
  • Test


  • Document
  • Manage
  • Maintain

Development points directly related to security:

  • You can’t retrofit security
  • Tools aren’t solutions
  • Mind the upper layers(?)
  • Keep customers happy…
  • Think locally; act locally(?)
Relevant Programming Principles
  • Modularity
  • Encapsulation
  • Information hiding
  • Characteristics of well-designed modular components:
  • Single purpose
  • Small
  • Simple
  • Independent

Positive outcomes of modularity:

  • Ease of maintenance
  • Understandability
  • Reusability
  • Correctness
  • Ability to test exhaustively

Overall object-oriented goal:

  • Tight cohesion within modules
  • Low or loose coupling between modules
  • Encapsulation and information hiding promote these goals
Mutual Suspicion
  • (There are personal relationship phone apps for this now.)
  • Write each module under the assumption that other modules aren’t correct
  • For example, when called, check data passed in
  • When calling, check data passed back
  • Etc.
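Mutual suspicion amounts to validation on both sides of every call. A small sketch (the interest computation and its bounds are invented for illustration): the callee distrusts its arguments, and the caller distrusts the result.

```python
# Mutual suspicion between two modules: neither trusts the other.

def interest(balance, rate):
    # Callee: check the data passed in before using it.
    if not (0 <= rate <= 1):
        raise ValueError(f"implausible rate: {rate}")
    if balance < 0:
        raise ValueError(f"negative balance: {balance}")
    return balance * rate

def post_interest(balance, rate):
    earned = interest(balance, rate)
    # Caller: check the data passed back before acting on it.
    if earned < 0 or earned > balance:
        raise RuntimeError(f"suspicious result from interest(): {earned}")
    return balance + earned

print(post_interest(100.0, 0.05))  # 105.0
```

The redundancy is deliberate: if either module is faulty (or malicious), the other's checks limit the damage instead of propagating it.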
Confinement
  • (Another friendly concept.)
  • This is an O/S-oriented approach to limiting potential damage
  • Let a set of programs and data be placed in a given directory with rwx access to that directory only
  • Then, at most, a faulty program can only affect the contents of that directory
  • Without something shared outside, there is no channel for harm to propagate elsewhere in the system
Genetic Diversity
  • If all systems essentially have the same software set, they are all vulnerable to the same kinds of attacks or faults
  • Compare MS Windows on the Internet with agricultural monoculture.
  • Also, just how smart is cloning?
  • Back to the topic: Likewise, if software is tightly integrated problems with one component may migrate to another. (MS Office?)
Process-Oriented Steps to Assure Software Security
  • Peer Review
  • Hazard Analysis
  • Testing
  • “Good Design”
  • Prediction
  • Static Analysis
  • Configuration Management
  • Analysis of Mistakes
Peer Reviews
  • Remember, part of what you’re worried about is bad code put in by a malicious programmer
  • Peer reviews are probably the first and best defense against this
  • If each piece of code is going to be scrutinized, the malicious programmer will either have to form a conspiracy, be exceedingly clever, rely on a lax reviewer, or give up the attempt
  • Review is also the first and best tool for finding inadvertent flaws
Types of Peer Reviews
  • Review: An informal presentation by the programmer to keep other team members informed during the course of development
  • Walk-through: A more formal presentation by a developer to educate others about code
  • Inspection: A formal process, not led by the programmer, where the code is reviewed and studied by others for conformance to standards of correctness.
  • Statistical benchmarks or other quantitative measures might be used.
Review  Body of Programming Knowledge
  • Participants learn the who, what, where, when, how and why of software mistakes
  • They learn how to correct them and avoid or prevent them in the future
  • Research has shown that during the course of system development, the plurality of problems are found in reviews
Hazard Analysis
  • These are engineering approaches that can be used in secure software development
  • Hazard and Operability Studies (HAZOP)
  • This is a structured analysis technique originally developed for process control and chemical plants

Failure Mode and Effects Analysis (FMEA)

  • This is a bottom-up approach to analyzing components for faults, determining immediate triggers and where they propagate to
  • Fault Tree Analysis (FTA)
  • This is a complementary approach which reverse engineers, trying to determine the ultimate cause and steps that led to a fault
Testing
  • What is there to say about this?
  • Once a system is complete, but not yet out the door, it should be security tested
  • The fundamental problem was mentioned earlier
  • It is not too difficult to test that it does what it should
  • It is an unbounded task to test that it doesn’t do what it shouldn’t
Levels of Testing
  • Unit Test: Testing individual components and modules
  • Integration Test: Testing of collections of components in operation together
  • Functional Test: Does the system perform the required functions?

Performance Test: Does it meet performance requirements?

  • Acceptance Test: This is the final test “in the lab” to see if it meets all customer requirements
  • Installation Test: Does it work outside of the lab in a real world environment?
Other Aspects of Testing
  • Unit-Integration-Functional
  • These are tests against the design
  • Performance-Acceptance-Installation
  • These are tests against user expectations
  • Note that a flaw may exist in the specifications and design
  • Code may conform to the design but not satisfy customer requirements

Regression Test: This is a test that is run after changes have been made to an installed system

  • This is potentially very important for security
  • Types of testing:
  • Black box
  • Clear box (white box, transparent box)
“Good Design”
  • Process activities for assuring design quality:
  • Use a philosophy of fault-tolerance
  • Have a consistent policy for failure handling
  • Capture the design rationale and history (document the system from the beginning)
  • Use (established) design patterns
Aspects of Fault Tolerance
  • Passive fault detection: Fault and fail—hopefully gracefully
  • Active fault detection: For example, mutual suspicion—anticipate faults and forestall failure
  • Fault tolerance: Expect faults to occur
  • However, don’t fail
  • Try to write code that will continue to function even in the face of faults
Types of Failures
  • Failure to provide a service
  • Providing the wrong service or data
  • Corrupted data
Consistent Policy for Failure Handling
  • Alternatives:
  • Retry: Restore to a previous state
  • Run again using a different strategy
  • Correct: Restore to a previous state and correct error conditions
  • Run again using the same strategy
  • Report: Restore to a previous state
  • Report error and don’t run again
  • Try to predict system risks and their importance
  • This leads to design decisions
  • Use simpler security measures for smaller risks
  • Use stronger or multiple levels of security for larger risks
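The retry/correct/report alternatives above share a skeleton: restore a saved state, then either run a different strategy, rerun after correction, or give up and report. A minimal sketch (the strategies and the fault are stand-ins):

```python
# Skeleton of a consistent failure-handling policy: restore state on
# every fault, try alternative strategies (Retry), and report when
# nothing works. A "Correct" variant would also repair the error
# condition before rerunning the same strategy.

def run_with_policy(state, strategies):
    checkpoint = dict(state)              # previous state, saved up front
    for strategy in strategies:
        try:
            return strategy(state)
        except Exception:
            state.clear()
            state.update(checkpoint)      # restore to the previous state
    raise RuntimeError("all strategies failed")   # Report: don't run again

def flaky(state):
    raise IOError("simulated fault")      # first strategy always fails

def fallback(state):
    return state["x"] + 1                 # a different strategy succeeds

print(run_with_policy({"x": 1}, [flaky, fallback]))  # 2
```

The point of centralizing this is consistency: every module fails the same way, so callers always know what state the system is in after a fault.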
Static Analysis
  • As noted earlier, these tasks can be at least partially automated:
  • Analyze control flow structure
  • Analyze data flow structure
  • Analyze data structure
  • The more complex, deeper, less standard each design feature is, the greater the risk of mistakes or of successful inclusion of malicious code
Configuration Management
  • In practice, configuration management is a significant concern
  • Types of changes to software over time include:
  • Corrective changes (day-to-day running)
  • Adaptive changes (design modifications)
  • Perfective changes (improvements)
  • Preventive changes (typically performance oriented)
  • Who is authorized to make them?
  • How are they tracked?
Configuration Management Activities
  • Configuration identification (versions)
  • Configuration control and change management (tracking changes)
  • Configuration auditing (checking changes)
  • Status accounting (source, version, etc. of system components)
  • These tasks are all bureaucratic book-keeping
  • However, they are necessary if you hope to be running verified code
Analysis of Mistakes
  • Learning from mistakes has already been mentioned a couple of times
  • Some lame quotes are included on the following overheads

"If our history can challenge the next wave of musicians to keep moving and changing, to keep spiritually hungry and horny, that's what it's all about.“

  • Carlos Santana

History is more or less bunk. It's tradition. We don't want tradition. We want to live in the present and the only history that is worth a tinker's dam is the history we made today.

  • Henry Ford

“Most people are prisoners, thinking only about the future or living in the past. They are not in the present, and the present is where everything begins.”

  • Carlos Santana
Proofs of Program Correctness
  • Mentioned earlier—proofs of equivalence aren’t possible
  • Another approach:
  • Specify programs mathematically
  • Then prove step-by-step that they reach the desired result
  • This has something in common with static analysis

This is not really practical for large, complex programs

  • Problem: People have trouble translating a program into formal mathematical notation
  • If you could do this translation without trouble, maybe you could just write correct programs without trouble…
  • Elements of this paradigm have been adopted by some development organizations
Programming Practice Conclusions
  • There are lots of approaches
  • There is no silver bullet
  • Some combination of approaches may be of some use
  • In organizations that do have a standard, formal approach to secure software development, you will learn, “This is how we do it here.”
  • Have I mentioned recently that programming is not really an engineering discipline?
Standards of Program Development
  • Security standards should include:
  • Design standards
  • Documentation, language, coding style standards
  • Programming and review standards
  • Testing standards
  • Configuration and management standards
Process Standards
  • Various bodies have established standards for how organizations do things
  • The implementation of these process standards should have a positive effect on the security quality of code produced under them
  • Software Engineering Institute (SEI), Capability Maturity Model (CMM)
  • International Standards Organization (ISO), ISO 9001 (quality management standards)
  • National Security Agency (NSA), System Security Engineering CMM (SSE CMM)
  • Quotes from the textbook:
  • “Software development is both an art and a science… Just as a great painter will achieve harmony and balance in a painting, a good software developer who truly understands security will incorporate security into all phases of development.”
  • Translation: We can’t tell you how to do this. Good luck.
HowStuffWorks.com
  • Achieving Nirvana
  • The Buddha couldn't fully relate his new understanding of the universe, but he could spread the essential message of his enlightenment and guide people toward achieving the same understanding. He traveled from place to place teaching the four noble truths:
  • Life is suffering.
  • This suffering is caused by ignorance of the true nature of the universe.
  • You can only end this suffering by overcoming ignorance and attachment to earthly things.
  • You can overcome ignorance and attachment by following the Noble Eightfold Path.
Research Question
  • Do you suppose howstuffworks.com also has a page explaining “How Christian Salvation Works”?
  • How about other religions?

“A university is what a college becomes when the faculty loses interest in students.”

  • John Ciardi

A university professor set an examination question in which he asked what is the difference between ignorance and apathy. The professor had to give an A+ to a student who answered: I don't know and I don't care.

  • Richard Pratt, Pacific Computer Weekly, 20 July 1990

Luke and Welling and Presentation Information, Fall of 2011
  • As of 11/1/2011 there are 15 students in class
  • Luke and Welling: 3 days set aside
  • 5 student presentations per day
  • 15 minutes per presentation
  • As for the other presentations: 2 days
  • 7.5 students per day
  • 10 minutes per presentation
Survey of Luke and Welling
  • III E-commerce and Security
  • Ch. 14, skip
  • Ch. 15, skip
  • Ch. 16, Web Application Security
  • First subsections, skip
  • Major topic 1:
  • Pg. 367, Securing Your Code

Major topic 2:

  • Pg. 378, Securing Your Web Server and PHP
  • Pg. 383, Database Server Security
  • Pg. 385, Protecting the Network
  • Pg. 387, Computer and Operating System Security
  • Pg. 388, Disaster Planning, skip
  • Pg. 390, Next, skip

Ch. 17, Implementing Authentication with PHP and MySQL

  • Major topic 3:
  • Pg. 391, Identifying Visitors
  • Pg. 392, Implementing Access Control
  • Pg. 399, Using Basic Authentication

Major topic 4:

  • Pg. 400, Using Basic Authentication in PHP
  • Pg. 402, Using Basic Authentication with Apache’s .htaccess Files
  • Pg. 406, Using mod_auth_mysql Authentication
  • Pg. 408, Creating Your Own Custom Authentication
  • Pg. 409, Further Reading/Next, skip

Ch. 18, Implementing Secure Transactions with PHP and MySQL

  • Major topic 5:
  • Pg. 409, Providing Secure Transactions
  • Pg. 413, Using Secure Socket Layers
  • Pg. 417, Screening User Input
  • Pg. 417, Providing Secure Storage
  • Pg. 419, Storing Credit Card Numbers

Major topic 6:

  • Pg. 419, Using Encryption in PHP
  • Pg. 427, Further Reading, skip

Ch. 23, Using Session Control in PHP

  • Major topic 7
  • Pg. 509, What Is Session Control?
  • Pg. 509, Understanding Basic Session Functionality
  • Pg. 512, Implementing Simple Sessions
  • Pg. 514, Creating a Simple Session Example
  • Pg. 516, Configuring Session Control

Major topic 8:

  • Pg. 517, Implementing Authentication with Session Control
  • Pg. 524, Further Reading, skip

Ch 27, Building User Authentication and Personalization

  • Major topic 9:
  • Pg. 569, Solution Components
  • Pg. 571, Solution Overview
  • Pg. 573, Implementing the Database
  • Pg. 574, Implementing the Basic Site

Major topics 10 and 11 (double):

  • Pg. 577, Implementing User Authentication
  • Major topic 12:
  • Pg. 596, Implementing Bookmark Storage and Retrieval
  • Pg. 602, Implementing Recommendations
  • Pg. 606, Considering Possible Extensions/Next, skip

In summary:

  • Day 1: Major topics 1-4
  • Day 2: Major topics 5-8
  • Day 3: Major topics 9-12