Software Surveyor Measures of Success

David Wells, Object Services (OBJS)


Presentation Transcript


1. Software Surveyor Measures of Success
David Wells, Object Services (OBJS)
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

2. Kinds of Success
Software Surveyor probes, gauges, and infrastructure tools can be evaluated at several (increasingly meaningful) levels:
• Software Quality
• Probe & Gauge Coverage
• Gauge Precision
• Analysis Capability
• Task-Specific Evaluation
• Scenario-Based Evaluation

3. Software Quality
The quality of Software Surveyor probes, gauges, and ancillary tools can be evaluated through use by outside groups:
• Supporting software will be used externally by Columbia, WPI, BBN, and USC/ISI.
• Gauges will be demonstrated in the context of the GeoWorlds demo in May 2001.
• Gauges will be applied to typical bugs reported on the GeoWorlds Bug Reporting List.

4. Probe & Gauge Coverage
Probes and gauges can be evaluated by how well they perform their intended task.
• How completely and accurately can the gauges map an application’s changing configuration?
  • This is a function of the ability to place probes at component boundaries (which is in turn dependent on the ability to probe in various technologies, collect the required information at these points, and deal with security restrictions that might preclude detailed reporting).
  • In Y1, we will capture information only within the Java runtime; additional probing of DLLs and CORBA will be done in future years.
• Given that a complete configuration graph may be impossible to construct, how well can the gauges identify and address uncertainty in the graph?
• Is the level of completeness and accuracy that can be achieved for a configuration graph useful to an administrator or user?
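The slides do not include code, but the idea of a configuration graph with known and unknown regions can be made concrete with a minimal sketch. The Java class and field names below are hypothetical, and the per-edge confidence flag is just one possible way to represent uncertainty about connections the probes could not observe.

    import java.util.*;

    // Hypothetical sketch: a configuration graph whose nodes are probed components
    // and whose edges are observed connections. Each edge carries a confidence flag
    // so a gauge can report uncertainty when probing is incomplete (e.g., code
    // outside the Java runtime that cannot yet be instrumented).
    final class ConfigurationGraph {
        enum Confidence { OBSERVED, INFERRED, UNKNOWN }

        record Component(String id, String technology) {}           // e.g., "JVM", "CORBA", "DLL"
        record Connection(Component from, Component to, Confidence confidence) {}

        private final Map<String, Component> components = new HashMap<>();
        private final List<Connection> connections = new ArrayList<>();

        void addComponent(Component c) { components.put(c.id(), c); }

        // Assumes both endpoints were previously added as components.
        void addConnection(String fromId, String toId, Confidence conf) {
            connections.add(new Connection(components.get(fromId), components.get(toId), conf));
        }

        // How much of the graph is actually observed rather than inferred or unknown.
        double observedFraction() {
            if (connections.isEmpty()) return 0.0;
            long observed = connections.stream()
                    .filter(c -> c.confidence() == Confidence.OBSERVED).count();
            return (double) observed / connections.size();
        }
    }

A metric such as observedFraction() is one possible way to quantify the "completeness and accuracy" question posed above, though the slides do not prescribe any particular measure.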

5. Gauge Precision
The amount of detail that a gauge can provide is an important measure of its potential usefulness, since without knowing how and why a configuration choice was made, it is difficult to determine whether the choice is desirable or how to fix it.
• Connections between components within processes (fine grain, narrow scope) and between processes (coarse grain, wider scope).
• The process by which the connection was made:
  • identity of the entity(s) that created the connection (linker, HTTP, CORBA ORB, Trader, manual, ...)
  • arguments used in creating the connection
  • source for the arguments (function call, file, …)
  • how were “open point” arguments resolved? (i.e., to what values)
• Is the connection static or dynamic?
• When was the connection made & modified?
• Whether & how the connection has been used.
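To make this level of detail concrete, the sketch below shows one possible report structure a gauge could emit for each observed connection. It is not from the slides; the record and field names are illustrative assumptions that simply mirror the attributes listed above.

    import java.time.Instant;
    import java.util.List;

    // Hypothetical sketch: the provenance details a gauge might record per connection.
    record ConnectionReport(
            String fromComponent,
            String toComponent,
            String createdBy,            // linker, HTTP, CORBA ORB, Trader, manual, ...
            List<String> creationArgs,   // arguments used in creating the connection
            String argumentSource,       // function call, file, ...
            String openPointResolution,  // values the "open point" arguments resolved to
            boolean dynamic,             // static vs. dynamic connection
            Instant createdAt,           // when the connection was made
            Instant lastModifiedAt,      // when the connection was last modified
            long timesUsed) {}           // whether & how the connection has been used

Gauge precision could then be compared by counting how many of these fields a given probe technology is actually able to populate.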

6. Analysis Capability
Software Surveyor will provide analysis tools to compare configuration graphs and to match reified configurations to design specifications.
• Is it possible to match graphs so that corresponding components fill the same roles in both graphs? That is, can matching be done while preserving component roles as well as graph topology?
• Is the matching accurate?
• Can matching be performed when portions of the graphs are unknown?
• How fast is the matching as a function of graph size? Is it fast enough to be useful?
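As a worked illustration of role-preserving matching, the hypothetical sketch below pairs components of an observed configuration with components of a design specification only when their roles agree, rejects mappings that contradict known edges, and ignores edges the probes could not observe. The brute-force search is only meant to show the constraints involved; it is not the project's algorithm and would not scale to large graphs.

    import java.util.*;

    // Hypothetical sketch: naive backtracking matcher that maps observed components
    // onto specification components while preserving roles and known edge topology.
    final class RoleMatcher {
        record Node(String id, String role) {}
        record Edge(String from, String to, boolean known) {}

        // Returns a mapping from observed component ids to spec component ids, or null.
        static Map<String, String> match(List<Node> spec, List<Edge> specEdges,
                                         List<Node> actual, List<Edge> actualEdges) {
            Map<String, String> mapping = new HashMap<>();
            return search(0, spec, specEdges, actual, actualEdges, mapping) ? mapping : null;
        }

        private static boolean search(int i, List<Node> spec, List<Edge> specEdges,
                                      List<Node> actual, List<Edge> actualEdges,
                                      Map<String, String> mapping) {
            if (i == actual.size()) return true;               // every observed component placed
            Node a = actual.get(i);
            for (Node s : spec) {
                if (!s.role().equals(a.role()) || mapping.containsValue(s.id())) continue;
                mapping.put(a.id(), s.id());
                if (edgesConsistent(specEdges, actualEdges, mapping)
                        && search(i + 1, spec, specEdges, actual, actualEdges, mapping)) return true;
                mapping.remove(a.id());
            }
            return false;
        }

        // Every known observed edge whose endpoints are both mapped must exist in the spec.
        private static boolean edgesConsistent(List<Edge> specEdges, List<Edge> actualEdges,
                                               Map<String, String> mapping) {
            for (Edge e : actualEdges) {
                if (!e.known()) continue;                       // tolerate unprobed regions of the graph
                String f = mapping.get(e.from()), t = mapping.get(e.to());
                if (f == null || t == null) continue;
                boolean found = specEdges.stream()
                        .anyMatch(se -> se.from().equals(f) && se.to().equals(t));
                if (!found) return false;
            }
            return true;
        }
    }

The accuracy and speed questions above would then be answered empirically, by running such a matcher on graphs of increasing size and with increasing fractions of unknown edges.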

7. Task-Specific Evaluation
Software Surveyor gauges can be evaluated based on how the information they provide facilitates certain specific software maintenance and debugging tasks:
• Improved diagnostics & debugging for multi-technology distributed software. Goal = 75% reduction in time to identify configurations and activity patterns.
• Increased ability to evolve distributed software. Goal = provide 75% of the detailed configuration & usage status information needed by evolution planners.
• Low development & runtime overhead. Goal = automatic or GUI-enabled insertion & a 1% runtime penalty.
• Reduced component footprint. Goal = 10-90% reduction in the size of component footprints by identifying unused libraries or portions thereof (applicable only when such excess footprint exists).
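For the footprint goal, one way to estimate excess footprint is to compare the jars on an application's classpath against the jars that actually supplied loaded classes (for example, locations obtained by a probe via clazz.getProtectionDomain().getCodeSource().getLocation()). The sketch below is a hypothetical illustration of that bookkeeping, not the project's gauge.

    import java.io.File;
    import java.util.*;

    // Hypothetical sketch: estimate removable footprint as the fraction of classpath
    // bytes contributed by jars from which no class was ever loaded.
    final class FootprintGauge {
        static List<String> unusedJars(Set<String> classpathJars, Set<String> jarsOfLoadedClasses) {
            List<String> unused = new ArrayList<>(classpathJars);
            unused.removeAll(jarsOfLoadedClasses);
            return unused;
        }

        static double estimatedReduction(Set<String> classpathJars, Set<String> jarsOfLoadedClasses) {
            long total = classpathJars.stream().mapToLong(j -> new File(j).length()).sum();
            long unused = unusedJars(classpathJars, jarsOfLoadedClasses).stream()
                    .mapToLong(j -> new File(j).length()).sum();
            return total == 0 ? 0.0 : (double) unused / total;   // fraction of footprint that could be dropped
        }
    }

This whole-jar view is deliberately coarse; identifying unused portions of libraries, as the goal mentions, would require class- or method-level usage data from the probes.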

8. Scenario-Based Evaluation
Software Surveyor success will be measured by how well it, in combination with other DASADA gauges, can improve the lifecycle behavior of a complex, distributed application. The GeoWorlds intelligence-analysis application is already in use at PACOM, and improvements to its lifecycle behavior can be measured against historical data. Specifically:
• How efficiently GeoWorlds can be installed in different environments and its services deployed.
• How easily complex information management tasks can be scripted with assured semantic and syntactic interoperability.
• How reliably the scripts can be executed while maintaining desired quality.
• How dynamically the scripts can be evolved based on resource availability and requirement changes.
• How efficiently new services can be added to GeoWorlds while maintaining compatibility.
