
CSE 555 Protocol Engineering

Dr. Mohammed H. Sqalli

Computer Engineering Department

King Fahd University of Petroleum & Minerals

Credits: Dr. Abdul Waheed (KFUPM)

Spring 2004 (Term 032)



Protocol Engineering

Reference: “Communication Protocol Specification and Verification” by Richard Lai and Ajin Jirachiefpattana, Kluwer Academic Publishers, 1998.



Protocol Engineering

  • Development of a systematic methodology to progress from an initial protocol requirement for a system to the realization of an implementation

  • Concerned with the activities involved in the design and implementation of a protocol using formal description techniques (FDTs) during its stages of development

  • Subset of software engineering

1-2-3



A Protocol Development Methodology (Waterfall Model)

User Requirements → Informal Specification → Formal Specification → Protocol Verification → Implementation Development → Conformance Testing → Interoperability Testing → Maintenance

When errors are detected during protocol verification or conformance testing, the flow loops back to the earlier specification and implementation phases.

1-2-4



A Protocol Development Methodology (Cont.)

  • Specification: definition of an object or a class of objects, by way of an FDT.

    • Informal Specification: user requirements are studied and desired communication services are then specified in plain language. This includes: protocol model, services provided, reliability and handling of errors.

    • Formal Specification: specification using a formal language. This enables the specifications to be analyzed.

1-2-5



A Protocol Development Methodology (Cont.)

  • Protocol Verification: the process of examining the specification for the presence of various errors that could lead to improper system operation.

    • Attempt to demonstrate the correctness and consistency of a protocol design.

    • Correctness: protocol performs according to specification

    • Consistency: protocol’s behavior can be determined

    • Automated protocol verification: the use of computer tools to verify a communication protocol based on its formal specification (Billington et al. 1985)

    • If errors are detected, the specification is iterated.

1-2-6



A Protocol Development Methodology (Cont.)

  • Implementation Development: realization of protocol specifications on a physical system.

    • Development of communication software

    • Can be derived manually or automatically from formal specifications using implementation tools.

1-2-7



A Protocol Development Methodology (Cont.)

  • Protocol Testing

    • Conformance testing: validation of protocol implementation by testing.

      • Whether protocol implementation conforms to protocol specification rules

      • Which defined options are supported by the implementation

      • The protocol is fed with a selected set of test inputs

      • The outputs generated are observed and checked against the specification

      • The formal specification may be used as the basis for deriving test sequences (see the sketch after this slide)

    • Interoperability testing: validation of the protocol implementation using implementations from different manufacturers.

      • Ensures the protocol achieves the purpose of open communication.

1-2-8
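The conformance-testing idea above can be sketched in a few lines of code. The Python fragment below is illustrative only and not part of the original slides: the protocol, its states, and its messages are invented. It derives one test case per specified transition from a toy finite-state specification and checks an implementation against the expected next state and output.

    # Minimal sketch (illustrative only): deriving conformance tests from a
    # finite-state specification. All state, message, and output names are invented.

    SPEC = {
        # (state, input) -> (next_state, expected_output)
        ("idle",    "connect_req"): ("waiting", "connect_ind"),
        ("waiting", "connect_ack"): ("open",    "connect_conf"),
        ("open",    "data_req"):    ("open",    "data_ind"),
        ("open",    "disconnect"):  ("idle",    "disconnect_ind"),
    }

    def derive_transition_tests(spec):
        """One test per specified transition (simple transition coverage).
        A real tool would first compute input sequences that reach each source
        state; here every state is assumed to be directly reachable from reset."""
        for (state, inp), (nxt, out) in spec.items():
            yield {"start": state, "input": inp, "next": nxt, "output": out}

    def run_conformance_tests(implementation, spec):
        for test in derive_transition_tests(spec):
            got = implementation(test["start"], test["input"])
            verdict = "PASS" if got == (test["next"], test["output"]) else "FAIL"
            yield verdict, test

    def iut(state, inp):
        """Trivial implementation under test that just replays the specification."""
        return SPEC.get((state, inp), (state, "no_output"))

    if __name__ == "__main__":
        for verdict, test in run_conformance_tests(iut, SPEC):
            print(verdict, test["start"], "--", test["input"], "->", test["next"])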



Protocol Properties

  • Usually defined informally

  • Definitions of the properties of a protocol (Sajkowski, 1985):

    • Deadlock Freeness = a protocol never enters a state in which no further progress is possible

    • Safety = a protocol never enters an unacceptable state

    • Liveness = every state of the system is reachable

    • Boundedness = during the operation of a protocol, the number of messages in each channel between protocol entities can never exceed the capacity of the channel.

    • Termination or Progress = completion of required services of a protocol within a finite amount of time (reaches desired final states in terminating protocols or initial state in cyclic protocols)

1-2-9



Protocol Properties (Cont.)

  • Definitions of the properties of a protocol (Sajkowski, 1985):

    • Livelock Freeness = a protocol never gets into a situation where non-productive cycles occur and the same messages are exchanged repeatedly without progress

    • Mutual Exclusion = certain actions in both users of a protocol cannot be executed simultaneously (e.g., access to same resource)

    • Partial Correctness = a protocol is guaranteed to provide a required service if it reaches its desired final state

    • Absence of Overspecification = lack of redundant parts in protocol specification. E.g., absence of unexercised message receptions.

    • Completeness = descriptions of the reactions to all possible inputs are included in protocol specifications (i.e., the specification is free of unspecified message receptions; see the sketch after this slide)

1-2-10
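The last two properties lend themselves to simple automated checks. The Python sketch below is illustrative only and not part of the original slides (states and messages are invented): it reports (state, message) pairs with no specified reaction, i.e. completeness gaps, and states that can never be reached from the initial state, whose specified receptions can therefore never be exercised.

    # Minimal sketch (illustrative only): checking a toy FSM specification for
    # completeness (no unspecified receptions) and for receptions that can never
    # be exercised because their source state is unreachable.

    STATES   = {"idle", "waiting", "open"}
    MESSAGES = {"connect_req", "connect_ack", "data_req", "disconnect"}

    TRANSITIONS = {
        ("idle",    "connect_req"): "waiting",
        ("waiting", "connect_ack"): "open",
        ("open",    "data_req"):    "open",
        ("open",    "disconnect"):  "idle",
    }

    def unspecified_receptions(states, messages, transitions):
        """(state, message) pairs with no specified reaction: completeness gaps."""
        return [(s, m) for s in states for m in messages if (s, m) not in transitions]

    def unreachable_states(transitions, initial="idle"):
        """States never entered from the initial state; any receptions specified
        there are unexercised, a symptom of overspecification."""
        reachable, frontier = {initial}, [initial]
        while frontier:
            current = frontier.pop()
            for (src, _msg), dst in transitions.items():
                if src == current and dst not in reachable:
                    reachable.add(dst)
                    frontier.append(dst)
        return STATES - reachable

    if __name__ == "__main__":
        print("Unspecified receptions:", unspecified_receptions(STATES, MESSAGES, TRANSITIONS))
        print("Unreachable states:", unreachable_states(TRANSITIONS))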



Protocol Properties (Cont.)

  • Definitions of the properties of a protocol (Sajkowski, 1985):

    • Recovery from failures = after any abnormal situation, a protocol will return to a normal state within a finite amount of time

    • Fairness = every protocol entity gets a chance to make progress, regardless of what others do

    • Total Correctness = a protocol is partially correct and additionally reaches its desired final state (in the case of a terminating protocol) or its initial state (in the case of a cyclic protocol), for all allowed sequences of inputs (messages or user commands)

  • These protocol properties are not necessarily independent from each other

  • The possession of these properties does not ensure that a communication protocol will work as expected, but it does ensure that the protocol does not enter an undesirable state

1-2-11



Design and Analysis of Communication Software

Credits: David L. Dill (Stanford University), Patrice Godefroid (Bell Labs), Joost-Pieter Katoen (University of Twente), and High Integrity Embedded Systems Group (University of Northumbria)



Peculiarities of Communication Software

  • Communication software coordinates the information flow between interconnected components in:

    • Telephone networks

    • Internet

    • Wireless networks (cell phones, pagers, etc.)

    • Distributed databases (e.g., flight reservation systems)

  • Communication software is widely used

  • Communication software is hard to develop and test

1-2-13



Overview

  • Main steps of the software development process.

  • Main tools used for each of these steps in industry today.

  • More detailed discussion on testing.

1-2-14



Software Development Process

  • The waterfall model

  • In real-life, steps overlap and have feedback loops

1-2-15



Software Development Steps

  • Requirements specifications and analysis:

    • Determine customer-visible features, feasibility study, development costs, and price of the product

    • Determine “what to do”

    • Done by “system engineers”

  • Design:

    • Determine high-level and detailed design of product that will meet requirements

    • Determine “how to do it”

    • Done by the “architects”

1-2-16



Software Development Steps (Cont’d)

  • Coding and unit testing:

    • Produces the actual code that will be delivered to the customer

    • Test individual modules in isolation

    • Done by “developers” (aka “programmers”)

  • Integration and system testing

    • Test the integration of individual modules and the whole system

    • Testing implies running the code

    • Done by “testers”

  • Delivery and maintenance

    • Deliver the product to the customer and provide documentation, training, field support, and bug fixes

  • “Product manager” manages the end-to-end process

1-2-17



Development Tools

  • What are the most common tools used at each step of the development process today in the software industry?

  • Warnings:

    • The list that follows is not exhaustive

    • Only general-purpose tools are considered, not application-specific tools (for GUI, web, databases,…).

    • Names of commercial products are used only as examples, for illustration purposes only.

1-2-18



Tools for Requirements Specification and Analysis

  • MS Word and PowerPoint!!

    • Done informally

    • Requirements are often imprecise, ambiguous, and incomplete

    • This can be done partly on purpose…

  • “Formal Notations”:

    • Use-cases, state machines, message sequence charts, tables, decision trees, etc.

    • Less ambiguous than English text

    • Enable simple automatic analysis of the specification (e.g., consistency checks; see the sketch after this slide)

    • Can only cover a subset of requirements

    • In practice, used in conjunction with English text

1-2-19
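As a small illustration of such "simple automatic analysis", the Python sketch below (illustrative only, not from the slides; the conditions and actions are invented) checks a decision-table requirement for consistency: it flags condition combinations that no rule covers (incompleteness) and combinations covered by rules with conflicting actions (ambiguity).

    # Minimal sketch (illustrative only): consistency check of a tabular requirement.
    from itertools import product

    CONDITIONS = ["link_up", "credit_available"]

    # Each rule maps required condition values (None = "don't care") to an action.
    RULES = [
        ({"link_up": True,  "credit_available": True},  "send_data"),
        ({"link_up": True,  "credit_available": False}, "wait"),
        ({"link_up": False, "credit_available": None},  "reconnect"),
    ]

    def matches(required, assignment):
        return all(v is None or assignment[c] == v for c, v in required.items())

    def check(rules, conditions):
        gaps, conflicts = [], []
        for values in product([True, False], repeat=len(conditions)):
            assignment = dict(zip(conditions, values))
            actions = {action for required, action in rules if matches(required, assignment)}
            if not actions:
                gaps.append(assignment)                   # no rule covers this combination
            elif len(actions) > 1:
                conflicts.append((assignment, actions))   # ambiguous requirement
        return gaps, conflicts

    if __name__ == "__main__":
        gaps, conflicts = check(RULES, CONDITIONS)
        print("Uncovered combinations:", gaps)
        print("Conflicting rules:", conflicts)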



Tools for Design

  • MS Word and PowerPoint…

    • With diagrams, tables, state-machines, message sequence charts, etc.

  • Modeling Languages (for design and high-level coding)

    • UML, SDL, ObjectTime, VFSM, etc.

    • Less ambiguous than English text

    • Enables automatic analysis

    • Code, or at least a template of the implementation, can be generated automatically

1-2-20



Tools for Coding

  • Defect tracking and resolution managers:

    • Track problem reports and the status of their resolution

    • Record “history” of the system

    • Also used by testers as well as during maintenance

  • Version control systems:

    • Control and coordinate the various versions of the software (e.g., SCCS, CVS, etc.)

  • Code browsers and editors:

    • Help navigate through the code

    • Also help navigate the history of the code

    • Help compare different versions of the code (e.g., diff)

1-2-21



Tools for Coding (Cont’d)

  • Compilers

    • Translate high-level source language to lower-level target language

    • Report syntax errors

  • Linkers

    • Combine mutually referencing object code fragments

    • Report errors at module interface level

  • Code reviewers (static analyzers)

    • Examine source code to detect programming errors

    • Provide suggestions on code structure and style (type checkers, Lint, etc.)

    • Automated tools for detecting semantic errors through symbolic execution

    • Colleagues!

1-2-22
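A toy example of what a static "code reviewer" does is sketched below (illustrative only, not from the slides): it parses Python source with the standard ast module and flags variables that are assigned but never read. Real tools such as Lint perform far more checks (types, data flow, symbolic execution).

    # Minimal sketch (illustrative only): a toy static analyzer that flags
    # variables assigned but never read.
    import ast

    def assigned_but_unused(source: str):
        tree = ast.parse(source)
        assigned, used = set(), set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
                else:
                    used.add(node.id)
        return sorted(assigned - used)

    EXAMPLE = "\n".join([
        "retries = 3        # never read below: should be flagged",
        "timeout = 5",
        "print(timeout)",
    ])

    if __name__ == "__main__":
        print("Possibly unused variables:", assigned_but_unused(EXAMPLE))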



Tools for Testing

  • Debuggers:

    • Require code instrumentation (usually during compilation)

    • Control and examine code execution

  • Memory analyzers:

    • Detect memory leaks and overflows

      • Memory leaks (= memory that is allocated and no longer reachable, but never freed)

      • Memory overflows (= accesses to unauthorized memory addresses: unallocated/uninitialized memory, array out-of-bounds, …)

    • Parse and instrument source or binary code to check properties at runtime

1-2-23



Tools for Testing (Cont’d)

  • Performance analyzers (profilers) and code coverage tools:

    • Count how many times each program statement or procedure executes

    • Report time spent in each part of the program execution

    • Parse and instrument source or binary code to record runtime information

  • Languages and platforms for test automation:

    • Example: expect

  • Capture/replay tools:

    • Record/replay actions performed during manual testing at standard interface

    • Example: GUI/web testing

1-2-24
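How coverage and profiling tools work can be illustrated with Python's standard sys.settrace hook (a sketch, not from the slides; the program_under_test function is invented): the trace callback is invoked on every executed line, so the instrumentation can count how many times each statement runs.

    # Minimal sketch (illustrative only): line-coverage instrumentation via sys.settrace.
    import sys
    from collections import Counter

    line_counts = Counter()

    def tracer(frame, event, arg):
        if event == "line":
            line_counts[(frame.f_code.co_name, frame.f_lineno)] += 1
        return tracer                      # keep tracing inside nested calls

    def program_under_test(n):
        total = 0
        for i in range(n):
            if i % 2 == 0:
                total += i
        return total

    if __name__ == "__main__":
        sys.settrace(tracer)
        program_under_test(10)
        sys.settrace(None)
        for (function, lineno), count in sorted(line_counts.items()):
            print(f"{function}:{lineno} executed {count} time(s)")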



Tools for Testing (Cont’d)

  • Load Generators:

    • Simulate environment through standard interface

    • Example: network traffic

  • Test case generation from specification:

    • Generate sets of tests from higher-level specification of I/O behavior

    • Easier test management with better coverage

  • Test management tools:

    • Process: help record test plans, and track and report the status of a testing project

    • Code: store and execute test code, compare, and store results

    • Used by testing organizations only

1-2-25



More on Testing

  • Why test?

    • “To find errors”

    • “The process of executing a program with the intent of finding errors.”

      [Myers, 1979]

  • What is an “error”?

    • “Any problem visible to the end user”

    • Programming errors, conflicts with requirements, unexpected behaviors, features too hard to use, etc.

  • When to stop testing?

    • In theory, when full coverage is reached

    • Coverage can be defined versus requirements, formal I/O, code, or state-space

    • In practice, test until shipment date!

1-2-26



Tools for Testing: Summary

  • Three main types of tools for testing:

    1. Code Inspection: analyses (parses) the code to find programming errors.

    2. Code Instrumentation: analyses (parses) source code or binary code and inserts code (such as assertions) to check properties at run-time.

    3. Code Execution: help generate, execute and evaluate tests performed by running the code in conjunction with a representation of its environment.

1-2-27
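Type 2, code instrumentation, can be shown in miniature (a sketch, not from the slides; the sliding-window example and all names are invented): a decorator wraps a method and checks an assertion-style property every time the method runs, which is the kind of check instrumentation tools insert automatically.

    # Minimal sketch (illustrative only): run-time property checking via instrumentation.
    import functools

    def check_invariant(predicate, message):
        def decorator(method):
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                result = method(self, *args, **kwargs)
                assert predicate(self), message    # property checked at run time
                return result
            return wrapper
        return decorator

    class SlidingWindowSender:
        WINDOW = 4

        def __init__(self):
            self.unacknowledged = 0

        @check_invariant(lambda s: s.unacknowledged <= s.WINDOW, "window bound exceeded")
        def send(self):
            self.unacknowledged += 1

        def ack(self):
            self.unacknowledged -= 1

    if __name__ == "__main__":
        sender = SlidingWindowSender()
        for _ in range(4):
            sender.send()      # a fifth send() without an ack() would trip the check
        print("invariant held for", sender.unacknowledged, "outstanding messages")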



Different Levels of Testing

  • Level 1: manual testing

    • Most testing organizations

    • Some tests cannot be automated

  • Level 2: automated testing

    • Automated test execution and evaluation

    • Advantage: automated regression testing

  • Level 3: automatic test generation

    • Automated test generation from higher-level specs

    • Advantages: easier test management with better coverage

1-2-28



What is Communication Software?

  • Coordinates the information flow between interconnected components

  • Each component can be viewed as a reactive system

    • I.e., continually interacts with its environment

  • Examples: telephone, internet, wireless, etc.

1-2-29



Peculiarities of Communication Software Revisited

  • Communication software is like any other software

    • Same overall development process and general-purpose tools

  • Developing communication software is harder!

    • Many possible sequences of interactions between components

      • Coordination problems

      • Race conditions

      • Timing issues

  • Testing communication software is harder!

    • Traditional testing provides poor coverage

  • Debugging communication software is harder!

    • Scenarios leading to errors can be hard to reproduce

1-2-30



Why Harder?

  • Implementation looks non-deterministic due to concurrency and real-time

    • Related to scheduling and processing speed, respectively

    • “Non-deterministic” means unpredictable

    • Same sequence of inputs does not imply same sequence of outputs

  • Fundamentally, parallel composition is not “compositional”:

    • Given two functions f(x) and g(x), the sequential composition f(g(x)) is easy to understand, but the parallel composition of two concurrent processes is much harder to reason about (see the sketch after this slide)

1-2-31
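One way to see the difficulty quantitatively (a sketch, not from the slides): two independent processes with n atomic steps each can interleave in C(2n, n) different orders, so the number of distinct executions explodes even for very small components.

    # Minimal sketch (illustrative only): the number of interleavings of two
    # independent processes with the same number of atomic steps.
    from math import comb

    for steps in (2, 5, 10, 20):
        print(f"{steps} steps each -> {comb(2 * steps, steps)} interleavings")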



Tools for Dealing with Concurrency

  • Debuggers for concurrent/distributed systems:

    • Control and track the execution of more than one process/thread

  • Tools for detecting runtime coordination problems:

    • Detect race conditions at runtime

      • simultaneous writes to the same address (see the sketch after this slide)

    • Detect coordination problems at runtime

      • Deadlocks

    • Instrument the execution of the processes/threads while minimizing the impact on timing

    • Record scheduling information (“trace”) for faithfully replaying multiprocess scenarios leading to errors.

    • Generate a consistent representation (snapshot) of the state of a distributed system

    • Examples: Eraser, Assure (for Java)

1-2-32
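The defect such detectors look for can be reproduced in a few lines (a sketch, not from the slides; the shared-counter example is invented): two Python threads increment a shared variable without synchronization. The read-modify-write is not atomic, so updates can be lost; with the lock the final value is always the expected one.

    # Minimal sketch (illustrative only): a data race on a shared counter and its fix.
    import threading

    counter = 0
    lock = threading.Lock()

    def worker(iterations, use_lock):
        global counter
        for _ in range(iterations):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1      # unsynchronized read-modify-write: data race

    def run(use_lock, iterations=100_000, threads=4):
        global counter
        counter = 0
        pool = [threading.Thread(target=worker, args=(iterations, use_lock))
                for _ in range(threads)]
        for t in pool:
            t.start()
        for t in pool:
            t.join()
        return counter

    if __name__ == "__main__":
        print("without lock:", run(False), "(expected", 4 * 100_000, ")")
        print("with lock:   ", run(True))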



Tools for Dealing with Real-Time

  • Schedulability analyzers:

    • Analyze a set of real-time scheduling constraints and generate a schedule if one exists (see the sketch after this slide)

    • Constraints are imposed by the architecture and properties to satisfy (specs)

  • Worst-case execution time analyzers:

    • Determine worst-case execution time (WCET) of fragments of code

  • Performance modeling tools:

    • Analyze performance of an architectural model

    • Use queuing theory, stochastic processes, etc.

1-2-33
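A concrete instance of a schedulability test (a sketch, not from the slides; the task set is invented) is the classic Liu and Layland utilization bound for rate-monotonic scheduling: a set of n periodic tasks with worst-case execution times C_i and periods T_i is guaranteed schedulable if the total utilization sum(C_i / T_i) does not exceed n(2^(1/n) - 1). The test is sufficient but not necessary.

    # Minimal sketch (illustrative only): rate-monotonic utilization-bound test.
    def rm_utilization_bound(n: int) -> float:
        return n * (2 ** (1.0 / n) - 1)

    def is_rm_schedulable(tasks):
        """tasks: list of (wcet, period) pairs in the same time unit."""
        utilization = sum(c / t for c, t in tasks)
        bound = rm_utilization_bound(len(tasks))
        return utilization, bound, utilization <= bound

    if __name__ == "__main__":
        tasks = [(1, 4), (1, 5), (2, 10)]
        u, bound, ok = is_rm_schedulable(tasks)
        print(f"utilization={u:.3f}, bound={bound:.3f}, schedulable={ok}")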



Another Approach: Formal Verification

  • What is verification?

    • 4 elements define a verification framework:

      Verification: to check if all possible behaviors of the implementation are compatible with the specification

  • While testing can only find errors, verification can also prove their absence (= exhaustive testing)

  • Example of approaches: theorem proving and model checking

1-2-34



Theorem Proving

  • Goal: automate mathematical (logical) reasoning

  • Verification through theorem proving:

    • Implementation represented by a logic formula I

    • Specification represented by a logic formula S

    • Does “I implies S” hold?

    • Proof is carried out at syntactic level

  • This framework is very general

    • Many programs and properties can be checked this way

  • However, most proofs are not fully automatic

  • A theorem prover is actually a proof assistant and a proof checker

    • Limited usefulness

1-2-35
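The shape of the proof obligation "I implies S" can be illustrated in miniature (a sketch, not from the slides; the formulas and variable names are invented): for a handful of propositional variables, the implication can simply be checked over all valuations, returning a counterexample if one exists. Real provers work syntactically on much richer logics, which is why the proofs are rarely fully automatic.

    # Minimal sketch (illustrative only): brute-force check of "I implies S"
    # over all valuations of a few propositional variables.
    from itertools import product

    VARIABLES = ["req", "ack", "busy"]

    def I(v):   # "implementation": ack only in response to req, and never while busy
        return (not v["ack"] or v["req"]) and (not v["ack"] or not v["busy"])

    def S(v):   # "specification": ack implies req
        return not v["ack"] or v["req"]

    def implies(antecedent, consequent, variables):
        for values in product([False, True], repeat=len(variables)):
            v = dict(zip(variables, values))
            if antecedent(v) and not consequent(v):
                return False, v            # counterexample valuation
        return True, None

    if __name__ == "__main__":
        holds, counterexample = implies(I, S, VARIABLES)
        print("I => S holds:", holds, "counterexample:", counterexample)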



Model Checking

  • Model checking is more restricted in scope but is fully automatic

  • Verification using model checking:

    • Implementation represented by a finite state machine M (called the state space)

    • Specification represented by a temporal logic formula f

      • Example: Linear-time Temporal Logic (LTL)

        • Specifies properties of infinite sequences of states s0, s1, s2, …

        • Temporal operators include G (always), F (eventually), and X (next)

        • Example: G(p -> Fq)

    • Does “M satisfies f” hold? (hence the term “model checking”)

      • For LTL, do all infinite computations of M satisfy f?

    • Proof is carried out at semantic level via state-space exploration

1-2-36
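The example formula can be made concrete with a small check (a sketch, not from the slides): G(p -> F q) says that whenever p holds, q must hold at that position or some later one. LTL is defined over infinite sequences, so checking it on a finite trace, as below, is only a bounded approximation.

    # Minimal sketch (illustrative only): bounded check of G(p -> F q) on a finite trace.
    def holds_G_p_implies_F_q(trace, p="p", q="q"):
        """trace: list of sets of atomic propositions true at each state."""
        for i, state in enumerate(trace):
            if p in state and not any(q in later for later in trace[i:]):
                return False, i            # p seen at position i, q never afterwards
        return True, None

    if __name__ == "__main__":
        good = [{"p"}, set(), {"q"}, {"p", "q"}]
        bad  = [{"p"}, {"q"}, {"p"}, set()]
        print(holds_G_p_implies_F_q(good))   # (True, None)
        print(holds_G_p_implies_F_q(bad))    # (False, 2)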



State-Space of A Concurrent System

  • The state space of a concurrent system is a graph representing the joint behavior of all its components.

  • Each node represents a state of the whole system.

  • Each path represents a scenario (sequence of actions) that can be executed by the system.

  • Many properties of a system can be checked by exploring its state space: deadlocks, dead code,… and model-checking.

  • Systematic State Space Exploration is simple:

    • easy to understand,

    • easy to implement,

    • easy to use: automatic!

  • Main limitation: “state explosion” problem! (State-space exploration is fundamentally hard)

  • Many tools are based on systematic state-space exploration: CAESAR, COSPAN, MURPHI, SMV, SPIN, etc.

1-2-37
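Systematic state-space exploration fits in a page of code for a toy system. The Python sketch below is illustrative only and not from the slides: the two protocol entities, their messages, and the channel bound are invented, and real tools such as SPIN or MURPHI handle vastly richer models. It enumerates the global states of two communicating finite-state machines connected by bounded FIFO channels and reports deadlocks, i.e. global states with no enabled transition.

    # Minimal sketch (illustrative only): breadth-first state-space exploration of
    # two toy protocol entities over bounded FIFO channels, with deadlock detection.
    from collections import deque

    # Each entity: state -> list of (action, message, next_state);
    # "!" sends into the entity's outgoing channel, "?" receives from its incoming one.
    SENDER   = {"ready":   [("!", "data", "waiting")],
                "waiting": [("?", "ack",  "ready")]}
    RECEIVER = {"idle":    [("?", "data", "got")],
                "got":     [("!", "ack",  "idle")]}

    CAPACITY = 1   # channel bound; keeps the state space finite (boundedness)

    def successors(state):
        s, r, to_r, to_s = state           # entity states + FIFO channel contents
        for action, msg, nxt in SENDER[s]:
            if action == "!" and len(to_r) < CAPACITY:
                yield (nxt, r, to_r + (msg,), to_s)
            elif action == "?" and to_s and to_s[0] == msg:
                yield (nxt, r, to_r, to_s[1:])
        for action, msg, nxt in RECEIVER[r]:
            if action == "!" and len(to_s) < CAPACITY:
                yield (s, nxt, to_r, to_s + (msg,))
            elif action == "?" and to_r and to_r[0] == msg:
                yield (s, nxt, to_r[1:], to_s)

    def explore(initial):
        seen, frontier, deadlocks = {initial}, deque([initial]), []
        while frontier:
            state = frontier.popleft()
            nexts = list(successors(state))
            if not nexts:
                deadlocks.append(state)    # no enabled transition: deadlock
            for nxt in nexts:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen, deadlocks

    if __name__ == "__main__":
        reachable, deadlocks = explore(("ready", "idle", (), ()))
        print(len(reachable), "reachable global states; deadlocks:", deadlocks)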



Formal Verification Vs. Testing

  • Model checking (e.g., SPIN) can be very effective in detecting subtle design errors

  • In practice, formal verification is actually testing because of approximations:

    • When modeling the system

    • When modeling the environment

    • When specifying properties

    • When performing the verification

  • Therefore, “bug hunting” is really the name of the game!

1-2-38



Formal Verification Vs. Testing (Cont’d)

  • Model-based testing

1-2-39



Applications: Hardware

  • Hardware verification is a booming application of model checking and related techniques

    • The finite-state assumption is not unrealistic for hardware

    • The cost of errors can be enormous (e.g., Pentium bug)

    • The complexity of designs is increasing very rapidly (e.g., system on a chip)

  • However, model-checking still does not scale very well

    • Many designs and implementations are too big and complex

    • Hardware description languages (Verilog, VHDL, etc.) are very expressive

    • Using model checking properly requires experience

1-2-40



Applications: Software Models

  • Analysis of software models (e.g., SPIN):

    • Analysis of communication protocols, distributed algorithms

    • Models specified in extended FSM notation

    • Restricted to design

  • Analysis of software models that can be compiled (e.g., SDL, VFSM)

    • Same as above except that FSM can be compiled to generate the core of the implementation

    • More popular with software developers since reuse of “model” is possible

    • Analysis still restricted to “FSM part” of the implementation

1-2-41



Summary: Formal Methods

  • Formal methods are based on applied discrete mathematics

    • Set theory

    • Logic

    • Languages, analysis techniques, and tools

  • Applicable to the development of

    • Computer hardware

    • Computer software

  • Used for:

    • Description

      • Specifying requirements

      • Precise and unambiguous

    • Analysis

      • Demonstrate that the program's function implements the design

      • Rigor of mathematics helps develop convincing argument

      • Reduces reliance on human intuition and judgment

1-2-42



Summary: Reality Gap

  • Formal methods help fill the reality gap

1-2-43



Summary: Filling the Reality Gap

  • Informally: review, testing

    • Formal methods are not complete solutions

  • How to fill the remaining space

    • Formal specification of required properties

    • Formal model of the delivered system

    • Formal demonstration that the system model has the required properties

  • Is it worth the trouble?

    • Apply only for design phases

    • Apply only for critical components and properties

1-2-44



Summary: Validation Techniques

  • Wide spectrum of languages and inference rules

    • Process algebras: CCS, CSP

    • Logical methods

      • Set theory, predicate calculus

      • Temporal logics

      • Rewriting logic: OBJ, Maude

  • Proving equivalence or entailment

    • Not easy to fully automate

  • Theorem proving assistants

  • Model checkers

    • SPIN, SMV, UPPAAL, KRONOS

    • This approach can be fully automated

1-2-45

