
Presentation Transcript


  1. Marist College Enterprise Computing Research Lab: Undergraduate Projects. Howard Baker, Andrew Evans, Chris Cordisco, Junaid Kapadia. June 11, 2013

  2. IBM and Marist College Joint Studies Research
  Key Focus Areas:
  • Education for a Smarter Planet / Smarter Classrooms
  • Open source systems (Sakai enabled on IBM HW/SW, integration with existing ERP systems)
  • Cloud computing (K-12 SaaS hosting, virtualization, Project Greystone)
  • Analytic tools (Cognos, SPSS, OAAI project)
  • Virtual Computing Lab (VCL)
  • Smarter/dynamic infrastructure
  • Technology infrastructure (e.g. systems/data centers, OpenFlow, PaaS)
  • Virtualization (e.g. hybrid multi-core systems, cloud computing, server provisioning)
  Course Development (25+ courses):
  • z/OS, AIX/Power
  • Converged networking, SDN
  • Cloud/mobile app development
  • Business intelligence/analytics
  Collaborative Research Projects:
  • Cutting-edge technologies (e.g. Software Defined Networking)
  • Areas of mutual interest
  Sales:
  • Direct sales
  • Sales enablement (briefings/reference accounts)
  The exchange (funding in-kind/cash, IP/JDA, recruitment, product ecosystem, hardware, talent pool):
  IBM gets:
  • Identification of Intellectual Property (IP) and/or Joint Development Agreement (JDA) opportunities
  • Recruitment of top talent
  Marist gets:
  • IBM internships and full-time hires (many Marist grads at IBM)
  • SUR Grants and Faculty Awards
  • Collaborative research projects
  • Tools, software, libraries
  • Access to IBM hardware/software (z and p), which Marist extends to many other schools
  • Conference presentations, demos, workshops, courseware
  Industry and/or government partners: ADVA, NEC, CIENA, Brocade, NSF, Ellucian, Sakai Community, NY State, BigSwitch

  3. Student Intern Team
  • Ryan Flaherty - OpenFlow - Grad CS
  • Andrew Evans - Analytics - Senior CS
  • Chris Cordisco - NSF Intern - Grad CS
  • Kevin Pietrow - OpenFlow - Junior CS
  • Junaid Kapadia - ECRL Projects - Senior IT
  • Rebecca Murphy - Digital Archives - Junior CS
  • Mary Miller - OpenFlow - Sophomore CS
  • Devin Young - OpenFlow - Senior CS
  • Zachary Meath - OpenFlow - Sophomore CS

  4. MARIST ECRL zEnterprise
  • Junaid Kapadia
  • Christopher Cordisco

  5. zEnsemble Layout
  • Management network
  • Intranode management network
  • Intranode management network - extension
  • Intraensemble data network
  • FC-attached disk storage

  6. zBX Specifications
  • 2 PS701 Power blades
  • 64 GB of memory each
  • Eight 3.0 GHz processors; 64 "virtual processors" per blade
  • 300 GB internal HDD
  • 2 HX5 X blades
  • Eight 8 GB memory kits (64 GB total memory) each
  • Two 2.13 GHz processors with eight cores; 16 cores (virtual processors) total each
  • 100 GB SSD internal disk

  7. System z Breakdown (diagram)
  • z114 with three LPARs: LP1 and LP2 running z/VM, LP3 running z/OS
  • Guests include: SUSE Linux Cognos server, SUSE Linux Cognos database, extra Red Hat server, extra SUSE server, IBMCOP SUSE server, Linux router, SUSE Linux on z with DB2 (Ron Coleman)

  8. zBX Breakdown (diagram)
  • z/OS IEDN OSA servers (x4)
  • P.Blade 1.1 (PowerVM Hypervisor): NIM, R. Coleman P blade (Scaly), Matt Johnson AIX
  • P.Blade 1.2: Johns Hopkins Research, Windows Deployment Server, Cognos Framework Manager
  • X.Blade 1.04 (X Hypervisor): Eitel Lauria - OAAI, Tivoli Monitoring, zSentinel, Ron Coleman - Scaly, Scott Frank - Underwater Acoustics, Cognos Express Server, zBX OpenFlow Controller
  • X.Blade 1.05: VCL Management Node, SuSE PXE, Network File System, DayTrader

  9. Drupal on z114 – First Time Ever!

  10. Sakai on z114 – Mid-East Universities

  11. zSentinel By Christopher Cordisco

  12. Overview
  • The zEnterprise is a powerful system capable of running many applications concurrently.
  • zSentinel is a non-interactive web application that gives a simplified, real-time view of zEnterprise resource utilization on a single webpage.
  • It uses REST requests to gather data from the system and a Python web server to send the data to a browser.
  • The browser displays the data using a combination of traditional web elements and Google Charts.
  • zSentinel provides a visual layout of the servers running on each host, a tabular layout with details on each server, and an overview of the mainframe's processor utilization. Because it is not interactive, it is designed to run on an external monitor, giving an overview of the system at a single glance.

  13. Tree Map
  • The tree map gives a visual representation of each host on the system.
  • Each node represents a virtual server.
  • The size of a node corresponds to the number of processors allocated to the server.
  • The color of a node corresponds to the server's current resource utilization.
  • The view cycles through each host on the system at a predefined interval, showing the servers on each; the sketch after this list shows how such rows might be shaped.
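  As an illustration of the data shaping this implies, here is a minimal sketch assuming a gviz_api-style DataTable (the Google Charts wrapper for Python named on a later slide). The function name, dict keys, and sample servers are hypothetical, not the actual zSentinel code; Google's TreeMap chart expects rows of (node, parent, size, color), which map directly onto the size and color rules above.

    import gviz_api  # Google Charts wrapper for Python (google-visualization-python)

    # Columns for a TreeMap: node id, parent id, size value, color value.
    description = [("server", "string"), ("parent", "string"),
                   ("processors", "number"), ("utilization", "number")]

    def treemap_rows(host, servers):
        # servers: list of dicts with "name", "processors", "utilization" keys
        # (hypothetical shape). The host itself is the root node of the map.
        rows = [(host, None, 0, 0)]
        for s in servers:
            rows.append((s["name"], host, s["processors"], s["utilization"]))
        return rows

    table = gviz_api.DataTable(description)
    table.LoadData(treemap_rows("host1", [
        {"name": "linux01", "processors": 4, "utilization": 62.5},
        {"name": "linux02", "processors": 2, "utilization": 15.0},
    ]))
    print(table.ToJSon())  # JSON the browser-side TreeMap code can consume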

  14. Table & Gauges
  • The table gives a more detailed inspection of the hosts on the system and each server on a host.
  • It shows the status of each server, the resources allocated to it, and its resource utilization.
  • The gauges show the current processor usage of the Central Processing Complex, covering the shared and dedicated resources of both the IFLs and CPs.

  15. Behind The Scenes

  16. The Application
  • The application's web server is written in Python.
  • The server responds to simple HTTP requests, returning the resources necessary to load zSentinel in a web browser (see the sketch below).
  • The client uses JavaScript in conjunction with Google Charts to query the web server for the most recent information and display it on the zSentinel webpage.
  • The server answers those queries through a Google Charts wrapper for Python, which returns the data in the appropriate format to be displayed on the charts.
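  A minimal sketch of that request/response pattern using only Python 3's standard library. The file name, the /data path, and the latest_datatable_json helper are illustrative assumptions, not the actual zSentinel code.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def latest_datatable_json():
        # Stand-in for the code that polls the URM (next slide); it would
        # return the most recent cached metrics, serialized for Google Charts.
        return '{"status": "ok"}'

    class ZSentinelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                # Serve the static page that carries the Google Charts JavaScript.
                with open("zsentinel.html", "rb") as f:
                    body = f.read()
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(body)
            elif self.path.startswith("/data"):
                # Chart queries land here and get the latest metrics as JSON.
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(latest_datatable_json().encode("utf-8"))
            else:
                self.send_error(404)

    HTTPServer(("", 8080), ZSentinelHandler).serve_forever()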

  17. Unified Resource Manager
  • The Python application queries the Unified Resource Manager (URM) API for the most recent resource statistics of the zEnterprise using Python's built-in httplib. This polling is asynchronous with the client's requests.
  • After logging in, the URM returns a session ID, which is then sent with all other requests that query data (see the sketch below).
  • The URM responds to the requests with JSON objects.
  • Python parses the JSON objects into Python dictionaries, which are then sent to the client as a Google Charts DataTable object.
  • Metrics that need to be updated at shorter intervals, such as processor utilization, are returned in a comma-delimited string for efficiency.
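  A hedged sketch of that flow in Python 3 (http.client is the renamed httplib the slide mentions). The endpoint paths, the api-session field, and the X-API-Session header follow my reading of the zManager Web Services API and should be treated as assumptions; the host name and credentials are placeholders.

    import http.client
    import json

    conn = http.client.HTTPSConnection("urm.example.edu")  # placeholder host

    # Log in once; the URM hands back a session id that must accompany
    # every subsequent data request.
    conn.request("POST", "/api/sessions",
                 body=json.dumps({"userid": "monitor", "password": "secret"}),
                 headers={"Content-Type": "application/json"})
    session_id = json.loads(conn.getresponse().read())["api-session"]

    def get(uri):
        # One polled request: the JSON reply is parsed into a Python dict.
        conn.request("GET", uri, headers={"X-API-Session": session_id})
        return json.loads(conn.getresponse().read())

    # e.g. list the virtual servers, then fan out for per-server metrics.
    servers = get("/api/virtual-servers")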

  18. Google Charts
  • Data is handed to a Google Chart through a DataTable object.
  • A DataTable object contains all the information necessary to create any of the charts provided through the API.
  • A Google Charts Query, constructed in JavaScript, queries the Python server for the DataTable objects, updating the charts in real time; a server-side sketch follows.
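  On the server side, the gviz_api wrapper can answer a google.visualization.Query directly. A small sketch under that assumption; the handler name and sample rows are made up.

    import gviz_api

    table = gviz_api.DataTable([("server", "string"), ("cpu", "number")])
    table.LoadData([("linux01", 62.5), ("linux02", 15.0)])

    def handle_chart_query(tqx):
        # The JavaScript Query sends a tqx string such as "reqId:0";
        # ToResponse echoes the request id back so the browser can match
        # this response to its pending query.
        return table.ToResponse(tqx=tqx)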

  19. ParaBond By Dr. Ron Coleman & Christopher Cordisco

  20. Overview
  • ParaBond is an application written in Scala.
  • Its objective is to test the efficiency of concurrently analyzing financial portfolios stored in a database.
  • It runs on a minimum of two servers: one holds the database, the other runs the algorithm that analyzes the portfolios.

  21. Original Setup
  • Originally, ParaBond ran on several typical quad-core desktop computers.
  • One ran the computation; the others stored the database, which was sharded across several machines.
  • The database used was MongoDB, a non-relational database.
  • This setup performed very well, producing efficiency levels over 300%.

  22. New Setup
  • We wanted to discover how this application would run when placed on the zEnterprise.
  • A Linux server running on an X blade with 16 cores would run the computation.
  • This server would query a second server on the zEnterprise running the database on zLinux.
  • The database used here was DB2, since MongoDB would not run reliably on the mainframe.

  23. MongoDB to DB2
  • Converting the database to DB2 required writing new functions to build the database and query it.
  • The conversion to a relational database required an extra table as an associative entity between bonds and portfolios (sketched below).
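  ParaBond itself is written in Scala and the deck does not show the schema, so the following Python-hosted SQL is only a sketch of the idea; table and column names are assumptions. In MongoDB a portfolio document can embed its bond ids, while a relational design needs a junction (associative) table for the many-to-many link.

    # Hypothetical DDL for the associative entity between bonds and portfolios.
    DDL = """
    CREATE TABLE PORTFOLIO (ID INTEGER NOT NULL PRIMARY KEY);
    CREATE TABLE BOND      (ID INTEGER NOT NULL PRIMARY KEY, PRICE DOUBLE);
    CREATE TABLE PORTFOLIO_BOND (
        PORTFOLIO_ID INTEGER NOT NULL REFERENCES PORTFOLIO (ID),
        BOND_ID      INTEGER NOT NULL REFERENCES BOND (ID),
        PRIMARY KEY (PORTFOLIO_ID, BOND_ID)
    );
    """

    # Fetching one portfolio's bonds then becomes a join through the new table.
    QUERY = """
    SELECT B.ID, B.PRICE
    FROM BOND B
    JOIN PORTFOLIO_BOND PB ON PB.BOND_ID = B.ID
    WHERE PB.PORTFOLIO_ID = ?
    """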

  24. Results
  • Preliminary tests on the zEnterprise were fast, but we did not achieve efficiency levels above 100%.
  • We are still working on this project in an attempt to achieve efficiency levels equal to those of the original setup.

  25. Predictive Defect Analytics Marist/IBM Joint Studies

  26. Background
  • IBM z/OS operating system: a mature software product
  • Every potential defect recorded in a defect record management system
  • Multiple cycles of extensive testing (System Test)
  • Valid defect distribution (chart)
  • Data partitioning: Training 65%, Test 20%, Validate 15%, each with the same target distribution as the original data (see the sketch below)
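  The partitioning was done in SPSS Modeler; as a plain-Python illustration only, here is a stratified split that keeps each partition's target (valid/invalid) mix equal to the original data. All names are hypothetical.

    import random
    from collections import defaultdict

    def stratified_split(records, target_of, seed=42):
        # Group records by target class so each slice keeps the same mix.
        by_class = defaultdict(list)
        for r in records:
            by_class[target_of(r)].append(r)
        train, test, validate = [], [], []
        rng = random.Random(seed)
        for recs in by_class.values():
            rng.shuffle(recs)
            a = int(0.65 * len(recs))   # 65% training
            b = int(0.85 * len(recs))   # next 20% test, final 15% validate
            train += recs[:a]
            test += recs[a:b]
            validate += recs[b:]
        return train, test, validate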

  27. Overview
  Mission: automatically predict defect validity when a record is created, and send notifications of that outcome in real time, leveraging historic empirical test data to drive a smarter test process.
  Team (Marist Joint Studies): James Gilchrist, Michael Gildein, Chris Robbins, Andrew Evans

  28. Benefits
  • Increased test efficiency by reducing invalid and duplicate defects
  • Creation of a base infrastructure and process for future analytics work
  • Visualization reporting to determine future test focus areas
  • Automated basic defect analysis reporting, including release quality tracking

  29. Implementation
  • IBM Cognos: report and analyze data and results
  • IBM SPSS Modeler: predictive analysis to determine potential validity
  Prediction accuracies:
  • > 100 different algorithm variations executed
  • ~70% overall accuracy on average
  • ~90% confidence on average for true positives
  • Academic field research averages ~70% accuracy in defect predictions
  Increasing accuracy:
  • With more training data
  • With more data points
  • With a voting scheme combining multiple algorithms (sketched below)
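  As a sketch of the voting idea only (the real models live in SPSS Modeler), a majority vote over several models' valid/invalid predictions:

    from collections import Counter

    def vote(predictions):
        # predictions: one predicted label per model for a single defect record.
        return Counter(predictions).most_common(1)[0][0]

    # e.g. vote(["valid", "valid", "invalid"]) -> "valid"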

  30. Data Prep Flow
  • Merge pulled data
  • Remove unrelated defect records
  • Remove obviously erroneous records
  • Derive the target: a valid flag
  Removed fields:
  • Dates
  • Elapsed times
  • Manual text fields
  • Mostly blank fields
  • Fields with no unique values
  • Fields where one categorical value covers > 90% of records (removing these improved accuracy by roughly 5%)
  • Fields not applicable
  Remaining fields were reclassified properly through binning, handling:
  • Blanks and white space
  • Manually entered categories
  • Defaults
  • User error
  A sketch of the field-removal rules follows.
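  A hedged sketch of the field-removal rules above, using pandas for illustration (the team used SPSS Modeler). The > 90% single-category cutoff comes from the slide; the 50% blank threshold is an assumption.

    import pandas as pd

    def prune_fields(df):
        keep = []
        for col in df.columns:
            s = df[col]
            if s.isna().mean() > 0.5:            # mostly blank (assumed cutoff)
                continue
            if s.nunique(dropna=True) <= 1:      # no unique values
                continue
            top = s.value_counts(normalize=True, dropna=True)
            if len(top) and top.iloc[0] > 0.90:  # one category covers > 90%
                continue
            keep.append(col)
        return df[keep]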

  31. Physical Infrastructure (diagram)
  • System z host: z/VM with zLinux guests running IBM SPSS Modeler Server, IBM Cognos Server, and IBM DB2
  • Windows Server running the scripts and data crawlers
  • Clients: IBM Data Studio, IBM Cognos Metric Designer, IBM SPSS Modeler Client, IBM Cognos Framework Manager
  • Additional input data servers and defect record management servers

  32. SPSS Models

  33. Cognos Reports
  Automated, scheduled, and on-demand real-time reporting.
  Analysis:
  • Algorithm accuracies (release/product)
  • Confusion matrices (release/product)
  • Defect prediction densities (release/product)
  Management:
  • Non-closed defect tables (release/product/team)
  • Current defect state graphs (release/product/team)
  Project managers:
  • Open vs. closed defects over time (release/product/team)
  • Test end criteria report
  • Defect rate over time of release (release/product)
  Testers:
  • Hardware trigger vs. root cause (release/product)
  • Test area vs. root cause (release/product)
  • Lines of code changed (release/product)
  • Hardware trigger vs. component (release/product)

  34. Cognos Report Examples

  35. Cognos Report Examples

  36. Questions?
