From UML to Performance Models: High Level View

Dorina C. Petriu

Gordon Gu

Carleton University

  • well-formed annotated UML models
  • introduction to LQN
  • high-level view of the transformation

www.sce.carleton.ca/rads/puma/


Well-formed annotated UML model

  • key use cases described by representative scenarios

    • frequently executed, have performance constraints

  • resources used by each scenario

    • resource types: active or passive, physical or logical, hardware or software

      • examples: processor, disk, process, software server, lock, buffer

    • quantitative resource demands must be given for each scenario step

      • how much, how many times?

  • workload intensity for each scenario

    • open workload: arrival rate of requests for the scenario

    • closed workload: number of simultaneous users
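The two workload types drive the performance model differently. As a rough illustration (a sketch, not from the slides, assuming an M/M/1 server for the open case and the interactive response-time law for the closed case, with made-up numbers):

```python
# Open vs. closed workload intensity, with simple single-queue formulas.
# All numeric values are illustrative assumptions.
service_time = 0.2                          # sec of service per request

# Open workload: requests arrive at rate lambda (req/sec).
arrival_rate = 4.0
rho = arrival_rate * service_time           # server utilization, must be < 1
open_response = service_time / (1.0 - rho)  # M/M/1 mean response time

# Closed workload: N simultaneous users, each thinking Z sec between requests.
N, Z = 10, 5.0
throughput = 1.8                            # req/sec, e.g. from a solver run
closed_response = N / throughput - Z        # interactive response-time law

print(round(open_response, 3), round(closed_response, 3))  # -> 1.0 0.556
```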


Software architecture and deployment

[Figure: UML architecture/deployment diagram. A Client Server collaboration contains the component instances client (1..n, <<PAresource>>, type DEclient) and server (1..k, <<PAresource>>, type Retrieve), plus an SDiskIO <<PAresource>>. DEclient and DEserver are deployed (<<GRMdeploy>>) on the <<PAhost>> processors ClientCPU and ServerCPU, which communicate over Ethernet; Sdisk is a <<PAresource>> I/O device on the server node.]

Scenario with performance annotations

[Figure: sequence diagram with lifelines Client, RetrieveT, and SDiskIOT (wait points wait_S, wait_D). The Client carries a <<PAclosedLoad>> workload with {PApopulation = $Nusers} and a response-time requirement {PArespTime = (('req','mean',(1,'sec')), ('pred','mean',$RespT))}. Each step is a <<PAstep>> with a parameterized CPU demand, e.g. request document {PAdemand=('msrd','mean',(220/$cpuS,'ms'))} and accept request {PAdemand=('msrd','mean',(1.30 + 130/$cpuS,'ms'))}; the remaining steps are read request, update logfile, write to logfile, parse request, get document, read from disk, send document, receive document, and recycle thread. PAextOp annotations model external operations: network transfers ('net1',1) and ('net2',$DocS), and disk accesses ('writeDisk',$RP) and ('readDisk',$DocS).]
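The demand annotations are parameterized by $-variables ($cpuS, $Nusers, $DocS, ...) that must be bound to concrete values before the model can be solved. A minimal sketch of that substitution step (the resolve helper and the binding values are assumptions for illustration, not part of the PUMA tooling):

```python
# Resolve $-parameterized PAdemand expressions (e.g. 220/$cpuS ms) by
# substituting concrete values; the helper and bindings are illustrative.
def resolve(expr: str, env: dict) -> float:
    """Evaluate a PAdemand arithmetic expression after replacing $vars."""
    for name, value in env.items():
        expr = expr.replace("$" + name, str(value))
    return eval(expr)  # acceptable for a sketch; a real tool would parse

bindings = {"cpuS": 2.0, "Nusers": 50}   # assumed variable values
print(resolve("220/$cpuS", bindings))    # 'request document' demand -> 110.0 ms
```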


Layered Queueing Network (LQN) model
http://www.sce.carleton.ca/rads/lqn/lqn-documentation

[Figure: example LQN. The ClientT task (entry clientE) runs on the Client CPU and calls the DB task (entries DBRead and DBWrite) on the DB CPU, which in turn calls the DB Disk task (entries DKRead and DKWrite) on the Disk device.]

    • Advantages of LQN modeling

      • models software tasks (rectangles) and hardware devices (circles)

      • represents nested services (a server is also a client to other servers)

      • software components have entries corresponding to different services

      • arcs represent service requests (synchronous and asynchronous)

      • multi-servers used to model components with internal concurrency

    • What can we get from the LQN solver?

      • Service time (mean, variance)

      • Waiting time

      • Probability of missing deadline

      • Throughput

      • Utilization
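To illustrate the kind of computation behind these measures, here is a minimal exact Mean Value Analysis routine for a flat, single-class closed network (an assumption chosen for brevity; the actual LQN solver handles layered models with nested services, which this sketch does not):

```python
# Exact Mean Value Analysis (MVA) for a flat, single-class closed network.
# Only illustrates the kind of outputs an LQN solver produces; real LQN
# solving is layered and iterative. Demand values are illustrative.
def mva(demands, N, Z=0.0):
    """Return (throughput, residence times, utilizations) for N customers.

    demands -- total service demand D_k (sec) at each queueing station
    Z       -- user think time (sec) of the closed workload
    """
    q = [0.0] * len(demands)                # mean queue lengths at n = 0
    x, r = 0.0, list(demands)
    for n in range(1, N + 1):
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # residence times
        x = n / (Z + sum(r))                               # throughput
        q = [x * rk for rk in r]                           # Little's law
    u = [x * d for d in demands]                           # utilizations
    return x, r, u

# 20 users, 1 s think time, CPU and disk demands of 5 ms and 30 ms:
X, R, U = mva(demands=[0.005, 0.030], N=20, Z=1.0)
print(round(X, 2), [round(v, 2) for v in U])
```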


UML -> LQN Transformations: Mapping the structure

[Figure: mapping rules (1)-(6), illustrated with a generic component Comp, a thread pool Thread (1..n), a processor XCPU, and a disk Ydisk. Active <<PAresource>> component instances map to LQN tasks (Comp -> task CompT; the active Thread instances, 1..n, -> multi-server task ThreadT); each instance's <<GRMdeploy>> deployment on a <<PAhost>> processor places the task on the corresponding LQN processor (XCPU), and <<PAresource>> I/O devices such as Ydisk map to LQN devices.]
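The structural rules can be read as a simple table-driven transformation. A toy sketch (the data layout and everything beyond the slide's Comp/Thread/XCPU/Ydisk names are assumptions):

```python
# Toy version of the structural mapping: <<PAresource>> component instances
# become LQN tasks, while <<PAhost>> processors and <<PAresource>> devices
# become LQN hardware devices. The data structures are illustrative.
components = [
    {"name": "Comp",   "host": "XCPU", "mult": "1..n"},   # active instance
    {"name": "Thread", "host": "XCPU", "mult": "1..n"},   # thread pool
]
processors = ["XCPU"]                                     # <<PAhost>>
io_devices = ["Ydisk"]                                    # <<PAresource>> device

tasks = [{"task": c["name"] + "T", "on": c["host"], "copies": c["mult"]}
         for c in components]
devices = processors + io_devices

print(tasks[0]["task"], devices)  # -> CompT ['XCPU', 'Ydisk']
```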


UML -> LQN Transformation: Mapping the Behavior

[Figure: a scenario and its LQN image. In the scenario, the User's Client sends a service request to the WebServer's Server and waits for the reply while the Server serves the request and replies; the Client then continues its own work, and the Server may optionally continue the service after replying. In the LQN model this becomes a Client task with entry e1 [ph1] on the Client CPU making a synchronous call to a Server task with entry e2 [ph1, ph2] on the Server CPU: work the Server does before the reply belongs to phase 1 (e2, ph1), work done after the reply to phase 2 (e2, ph2).]

Mapping software architecture and physical devices to LQN

a) Mapping software architecture to LQN tasks
[Figure: the client instances (1..n, <<PAresource>>, type DEclient), the server instances (1..k, <<PAresource>>, type Retrieve), and the SDiskIO <<PAresource>> map to the LQN tasks DEclientT, RetrieveT, and SDiskIOT.]

b) Mapping physical resources (processors and I/O devices) to LQN devices
[Figure: the <<GRMdeploy>> deployments of DEclient and DEserver on the <<PAhost>> processors ClientCPU and ServerCPU (connected by Ethernet), together with the Sdisk <<PAresource>>, map to the LQN devices Client CPU, Server CPU, and Sdisk beneath the corresponding tasks.]


Effect of communication network

[Figure: the Ethernet link between the <<PAhost>> processors ClientCPU and ServerCPU is represented in the LQN by the tasks net1 and net2 running on a dummy CPU, interposed on the call paths between DEclientT (on the Client CPU), RetrieveT (on the Server CPU), and SDiskIOT (with the Sdisk device).]


Groups of scenario steps to LQN entries

[Figure: the annotated scenario steps are grouped into LQN entries and phases. On DEclientT, entry clientE covers request document (phase 1, up to the wait point wait_r) and receive document (phase 2). On RetrieveT, entry retrieveE covers accept request, read request, parse request, get document, read from disk, and send document in phase 1, with update logfile and recycle thread after the reply in phase 2; the logfile and document accesses map to the write and read entries (each phase 1) of SDiskIOT. Resulting LQN: clientE [ph2] on DEclientT (Client CPU) calls net1E (task net1 on the dummy CPU, modeling the Ethernet), which calls retrieveE [ph1,ph2] on RetrieveT (server CPU); retrieveE in turn calls net2E and the read [ph1] and write [ph1] entries of SDiskIOT on Sdisk.]

