
Active Virtual Network Management Prediction



Presentation Transcript


  1. Active Virtual Network Management Prediction. Stephen F. Bush. DARPA demo performed in collaboration with Amit Kulkarni (GE CRD), Virginie Galtier, Yannick Carlinet, and Kevin L. Mills (NIST). TERENA Networking Conference, May 14-17, 2001.

  2. Active Network Benefits
  • Faster hardware more fully utilized
  • Enables a more flexible network
  • De-couples protocol from transport
  • Minimizes global agreement overhead
  • Enables on-the-fly experimentation
  • Enables faster deployment of new services

  3. Active Network Framework
  [Diagram: active node stack, with multiple AAs (including a CPU Model AA and an Active Audio AA) running on EE 1 and EE 2 (the Magician EE), over the NodeOS and hardware]
  • Active Application (AA): an active network application, e.g. AVNMP, AudioApp
  • Execution Environment (EE): analogous to a Unix shell for packet execution, e.g. Magician, ANTS
  • Node Operating System (NodeOS): operating system support for EEs
  (A small dispatch sketch of this layering follows.)
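  The AA / EE / NodeOS split can be pictured as a thin dispatch layer. The Python sketch below is only illustrative and does not reproduce Magician's actual API; class and method names such as ExecutionEnvironment and register_aa are hypothetical.

```python
# Illustrative sketch of the AA/EE layering; names are hypothetical, not Magician's API.

class ActiveApplication:
    """Base class for an Active Application (AA) running inside an EE."""
    def evaluate(self, packet: dict) -> None:
        raise NotImplementedError

class ExecutionEnvironment:
    """An EE behaves like a shell for packets: it looks up the AA a packet
    is addressed to and hands the packet over for execution."""
    def __init__(self, name: str):
        self.name = name
        self._aas: dict[str, ActiveApplication] = {}

    def register_aa(self, aa_name: str, aa: ActiveApplication) -> None:
        self._aas[aa_name] = aa

    def dispatch(self, packet: dict) -> None:
        aa = self._aas.get(packet["aa"])
        if aa is None:
            raise KeyError(f"no AA named {packet['aa']!r} in EE {self.name}")
        aa.evaluate(packet)

class AvnmpAA(ActiveApplication):
    def evaluate(self, packet: dict) -> None:
        print(f"AVNMP AA processing prediction payload: {packet['payload']}")

ee = ExecutionEnvironment("Magician")
ee.register_aa("avnmp", AvnmpAA())
ee.dispatch({"aa": "avnmp", "payload": "load model update"})
```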

  4. Active Network Encapsulation Protocol (ANEP)
  Allows encapsulation of active packets in any transport media.
  [Header layout: Version | Flags | Type ID | ANEP Header Length | ANEP Packet Length | Options | Payload]
  • Options
  • Source Identifier (option type 1): IPv4 address, 32 bits (scheme 1); IPv6 address, 128 bits (scheme 2); 802.3 address, 48 bits (scheme 3)
  • Destination Identifier (option type 2): same addressing schemes
  • Integrity Checksum (option type 3): 16-bit one's complement of the one's complement sum of the entire ANEP packet, starting with the ANEP Version field
  • N/N (Non-Negotiated) Authentication (option type 4): SPKI self-signed certificate (1) or X.509 self-signed certificate (2)
  • Payload: any data or code to be executed by an EE, e.g. ANTS, Magician, ASP, SmartPacket, or PLAN code
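  To make the header layout and the option-3 integrity checksum concrete, here is a hedged Python sketch. It assumes the usual ANEP field widths (8-bit Version and Flags, 16-bit Type ID, header length counted in 32-bit words, packet length in octets); the function names and the example type ID are illustrative rather than taken from the slide.

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's complement sum over data (padded to an even length)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def anep_checksum(packet: bytes) -> int:
    """One's complement of the one's complement sum of the entire packet,
    starting with the ANEP Version field (per the slide's description)."""
    return (~ones_complement_sum16(packet)) & 0xFFFF

def build_anep_packet(type_id: int, payload: bytes, options: bytes = b"") -> bytes:
    """Assemble a bare ANEP packet: Version, Flags, Type ID, Header Length
    (in 32-bit words), Packet Length (in octets), then options and payload."""
    version, flags = 1, 0
    header_len_words = (8 + len(options)) // 4     # options assumed 32-bit aligned
    packet_len = 8 + len(options) + len(payload)
    header = struct.pack("!BBHHH", version, flags, type_id,
                         header_len_words, packet_len)
    return header + options + payload

pkt = build_anep_packet(type_id=1, payload=b"active code for an EE")  # type ID illustrative
print(hex(anep_checksum(pkt)))
```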

  5. Benefits of Self-Prediction
  • Enables management of more complex systems such as active networks, leading towards self-healing and self-management
  • The optimal management polling interval is determined from the predicted rate of change and fault probability (see the sketch below)
  • Fault correction can occur before the system is impacted
  • Provides time to dynamically optimize the co-ordination of repair parts, service, and solution entities (such as software agents or human users)
  • Optimal resource allocation and planning
  • "What-if" scenarios are an integral part of the network
  • AVNMP-enhanced components protect themselves by taking action, such as migrating to "safe" hardware before disaster occurs
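  The polling-interval bullet can be made concrete with a toy rule: poll more often when the predicted rate of change or the fault probability is high. The formula and parameter names below are purely illustrative, not the AVNMP formulation.

```python
def polling_interval(base_s: float, predicted_rate: float,
                     fault_prob: float, sensitivity: float = 1.0) -> float:
    """Toy heuristic: shrink the base polling interval as the predicted
    rate of change (units/s) and the fault probability (0..1) grow.
    Illustrative only; AVNMP's actual rule is not given on the slide."""
    return base_s / (1.0 + sensitivity * predicted_rate * (1.0 + fault_prob))

# A stable, low-risk object is polled rarely; a fast-changing, fault-prone one often.
print(polling_interval(60.0, predicted_rate=0.01, fault_prob=0.05))  # ~59.4 s
print(polling_interval(60.0, predicted_rate=5.0, fault_prob=0.5))    # ~7.1 s
```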

  6. Injecting a Model into the Network
  Goal: prediction for management. A distributed, model-based prediction capability is placed within the system itself, and deployment makes optimal use of space and time.
  • A network management client issues an ordinary SNMP query against a managed object (getnext 1.3.6.1.x.x.x.x.t) and receives a response for a future time (getnextresponse 1.3.6.1.x.x.x.x.t+L)
  • Active packets carry the model (a Driving Process, DP, feeding Logical Processes, LPs); predicted values are held in a State Queue (SQ); a minimal State Queue sketch follows
  [Diagram: the same topology of active nodes AN-1, AN-4, AN-5 and links L-1 through L-4 shown twice, once as the real system evolving in time and once as the virtual system running ahead in virtual system space]
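  A minimal sketch of the State Queue idea, under the assumption that predicted values are simply stored keyed by OID and the virtual time they refer to, so that a query for time t+L can be answered now. The StateQueue class and its methods are hypothetical stand-ins for what would sit behind a real SNMP agent.

```python
import bisect

class StateQueue:
    """Holds predicted values per managed-object OID, ordered by the virtual
    time they refer to, so a query for a future time t+L can be answered now."""
    def __init__(self):
        self._per_oid: dict[str, list[tuple[float, float]]] = {}

    def insert(self, oid: str, at_time: float, value: float) -> None:
        entries = self._per_oid.setdefault(oid, [])
        bisect.insort(entries, (at_time, value))

    def get_next(self, oid: str, at_time: float):
        """Return the first predicted (time, value) at or after at_time,
        or None if the model has not predicted that far ahead."""
        entries = self._per_oid.get(oid, [])
        i = bisect.bisect_left(entries, (at_time, float("-inf")))
        return entries[i] if i < len(entries) else None

sq = StateQueue()
for t, load in [(10.0, 4200.0), (20.0, 5100.0), (30.0, 6050.0)]:
    sq.insert("1.3.6.1.x.x.x.x", t, load)        # predicted load at virtual time t
print(sq.get_next("1.3.6.1.x.x.x.x", 15.0))      # -> (20.0, 5100.0)
```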

  7. AVNMP Architecture
  [Diagram: an ABONE path from the sending node through the fastest and slowest intermediate nodes to the destination node; each node runs the Magician EE hosting injected Magician AAs, including the AVNMP AA with its Logical Process (LP) and Predictor (PP) and the Active Audio application, plus injected models (CPU model, routing model, and other potential models); AVNMP updates predicted MIB values, which are exposed via SNMP]

  8. Cyclic Prediction Refinement
  [Plots: predicted load (packets/second, 0 to 8000) versus Local Virtual Time (LVT, minutes) and versus wallclock time (minutes)]
  • Prediction ends when the preset look-ahead is reached
  • Previous predictions are refined as time progresses

  9. Accuracy-Performance Tradeoff
  [Plots: speedup, prediction error, out-of-tolerance messages, and look-ahead over the course of the experiment]
  The experiment demanded more accuracy over time by reducing the allowed error between predicted and actual values; however, this required more out-of-tolerance messages, and the tradeoff was a loss in look-ahead and in speedup.

  10. AVNMP Algorithm
  [Diagram: the AVNMP model as a Logical Process (LP) with a Predictor (PP) and a rollback state cache]
  • Prediction performance is continuously kept within tolerance via rollback
  • A Time Warp-like technique is used for maximum use of space and time in the virtual system
  • The rollback state cache holds future MIB values
  A simplified sketch of this tolerance-and-rollback loop follows.
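  Below is a simplified sketch of the loop described on this slide: a logical process predicts ahead to its look-ahead horizon, caches predicted MIB values, and rolls back and re-predicts when an actual observation differs from the cached prediction by more than the tolerance. The class and method names are hypothetical, and a real Time Warp-style implementation also involves anti-messages and inter-process coordination that this sketch omits.

```python
class AvnmpLogicalProcess:
    """Simplified AVNMP-style logical process: predicts ahead to a look-ahead
    horizon, caches predicted values, and rolls back when reality disagrees
    with the cache by more than the allowed tolerance."""
    def __init__(self, model, tolerance: float, lookahead: float, step: float):
        self.model = model            # callable: virtual_time -> predicted value
        self.tolerance = tolerance
        self.lookahead = lookahead
        self.step = step
        self.cache: dict[float, float] = {}   # virtual time -> predicted MIB value
        self.lvt = 0.0                        # local virtual time

    def predict_ahead(self, wallclock: float) -> None:
        while self.lvt < wallclock + self.lookahead:
            self.lvt += self.step
            self.cache[self.lvt] = self.model(self.lvt)

    def observe(self, wallclock: float, actual: float) -> bool:
        """Compare the cached prediction for 'wallclock' with the actual value;
        roll back and re-predict if out of tolerance. Returns True on rollback."""
        predicted = self.cache.get(wallclock, actual)
        if abs(predicted - actual) <= self.tolerance:
            return False
        # Rollback: discard cached state at and after this time, restart from reality.
        for t in [t for t in self.cache if t >= wallclock]:
            del self.cache[t]
        self.lvt = wallclock
        self.cache[wallclock] = actual
        self.predict_ahead(wallclock)
        return True

lp = AvnmpLogicalProcess(model=lambda t: 100.0 * t, tolerance=50.0,
                         lookahead=10.0, step=1.0)
lp.predict_ahead(wallclock=0.0)
print(lp.observe(wallclock=5.0, actual=620.0))   # out of tolerance -> True (rollback)
```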

  11. CPU and Load Applications
  [Diagram: a Driver (Driving Process, DP), Logical Processes (LPs), and Predictors (PPs) deployed along the path from the sending node through the fastest and slowest intermediate nodes to the destination node, colour-coded green, black, red, and yellow]
  Goal: predict resource use, including CPU, throughout an active network, and demonstrate the improvement in AVNMP's predictive power when the NIST CPU usage models are combined with it. With the NIST CPU usage model integrated, AVNMP requires fewer rollbacks, and so it can predict CPU usage further into the future (an illustrative contrast with naive scaling follows).
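  The slides contrast the NIST CPU usage models with naive scaling but do not spell out the models themselves, so the sketch below only gestures at the difference: naive scaling rescales a measured CPU time by clock speed, while a hypothetical per-operation estimate weights operation classes whose costs are calibrated on the target node. All names and numbers are illustrative.

```python
def naive_scaled_cpu(cpu_seconds_on_ref: float, ref_speed_mhz: float,
                     target_speed_mhz: float) -> float:
    """Naive scaling: assume CPU time simply scales with clock speed.
    The slides note that the NIST models improve on this."""
    return cpu_seconds_on_ref * ref_speed_mhz / target_speed_mhz

def per_operation_cpu(op_counts: dict[str, int],
                      per_op_cost_on_target: dict[str, float]) -> float:
    """Illustrative per-operation estimate: the cost of each operation class is
    calibrated on the target node, then weighted by how often the packet's
    code performs it. This only gestures at the NIST approach."""
    return sum(count * per_op_cost_on_target[op] for op, count in op_counts.items())

print(naive_scaled_cpu(0.020, ref_speed_mhz=300, target_speed_mhz=600))   # 0.010 s
print(per_operation_cpu({"crypto": 3, "forward": 10},
                        {"crypto": 0.004, "forward": 0.0005}))            # 0.017 s
```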

  12. CPU Application Results
  [Plots: TTL and CPU prediction results while predicting resource use, including CPU, throughout the active network]
  A better CPU prediction model overcomes the performance tradeoff limitations.

  13. Accomplishments
  • Demonstrated the power of AVNMP to predict resource usage, including CPU, throughout an active network
  • Showed that AVNMP can predict network-wide resource consumption
  • Compared the accuracy of AVNMP CPU usage predictions with and without the NIST CPU usage models
  • Illustrated the benefits when AVNMP provides more accurate predictions
  • Demonstrated the ability to detect and kill malicious or erroneous active packets
  • Illustrated the motivation behind CPU usage modeling
  • Showed the improvement of the NIST CPU usage models over naive scaling
  • Demonstrated management of CPU prediction and control of packets on a per-application basis by an EE (Magician is probably the first of its kind)
  • Developed a MIB for CPU and AVNMP management of an active node
  • Integrated SNMP agents and reporting in an EE
  • Provided user-customizable event reporting through multiple mechanisms: Event Logger and SNMP

  14. Denial-of-Service Attacks
  Can a combination of AVNMP load prediction and NIST CPU prediction be used to combat denial-of-service attacks? (A hedged sketch of such a check follows.)
  [Diagram: an attacker sends packets with large CPU demands and many small packets toward the target, alongside legitimate data from a legitimate user; the target runs the AVNMP model combining NIST CPU prediction and the AVNMP load prediction model]
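  One hedged way to turn this question into code: compare observed aggregate load and CPU utilization against AVNMP's predicted envelope and flag traffic that exceeds either prediction, since many small packets inflate load while a few expensive packets inflate CPU. The function, thresholds, and labels below are illustrative and not part of AVNMP.

```python
from typing import Optional

def dos_suspected(observed_pkts_per_s: float, predicted_pkts_per_s: float,
                  observed_cpu_util: float, predicted_cpu_util: float,
                  slack: float = 1.5) -> Optional[str]:
    """Return a hint about the suspected attack style, or None if the
    observations stay within 'slack' times the predicted envelope."""
    load_blown = observed_pkts_per_s > slack * predicted_pkts_per_s
    cpu_blown = observed_cpu_util > slack * predicted_cpu_util
    if load_blown and not cpu_blown:
        return "flood of many small packets"
    if cpu_blown and not load_blown:
        return "a few packets with large CPU demand"
    if load_blown and cpu_blown:
        return "combined load and CPU attack"
    return None

print(dos_suspected(9000, 4000, 0.35, 0.30))   # flood of many small packets
print(dos_suspected(4100, 4000, 0.95, 0.30))   # a few packets with large CPU demand
```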

  15. DARPA Fault-Tolerant Networks Project
  • Identify faults within a complex system of management objects
  • Scale in the number of objects and the number of futures
  • Remain robust in the presence of faults
  • Only necessary and sufficient repair capability should exist in time and space
  [Diagram: portions of the solution carry receptors that bind to faults, so the network inherently forms fault-corrective action; random (healthy) state is incompressible and attracts no action, while ordered (multiple-fault) state is compressible and attracts correction]
  (A compression-based sketch of the incompressible-versus-compressible intuition follows.)
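  The "healthy is incompressible, faulty is compressible" idea can be approximated with an off-the-shelf compressor standing in for Kolmogorov complexity. A minimal sketch, assuming a zlib compression ratio as the complexity estimate and an arbitrary threshold:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed size divided by original size: a ratio near 1.0 means the
    data is close to incompressible (high estimated Kolmogorov complexity)."""
    return len(zlib.compress(data, level=9)) / len(data)

def looks_faulty(samples: bytes, threshold: float = 0.8) -> bool:
    """Per the slide's intuition: healthy management state looks random and
    compresses poorly, while repeated fault patterns compress well."""
    return compressibility(samples) < threshold

healthy = os.urandom(4096)              # random-looking, roughly incompressible
faulty = b"TIMEOUT node AN-4\n" * 200   # repetitive fault reports, very compressible
print(looks_faulty(healthy), looks_faulty(faulty))   # expected: False True
```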

  16. New Theory of Networks Leads to ...
  [Diagram: legacy networks, where bits are the communication media, are characterized by Shannon entropy; active networks, where a fine-grained model carried as an active packet is the communication media, are characterized by Kolmogorov complexity]

  17. References
