Draft JGI NERSC User Survey Results
Katie Antypas and Kirsten Fagnan

Presentation Transcript


  1. Draft JGI NERSC User Survey Results Katie Antypas and Kirsten Fagnan

  2. NERSC 2012 User Survey
     [Chart: the 1-7 satisfaction scale, from 1 (very dissatisfied) through 4 (neutral) to 7 (very satisfied), with the minimum NERSC User Satisfaction Target marked]
     • NERSC survey open through January 2013
     • Responses from:
       • 444 Regular NERSC (MPP) users
       • 25 PDSF (High Energy Physics) users
       • 87 JGI users
     • ~90 questions on survey
     • Participants do not have to answer every question
     • Each question scored from 1-7, very dissatisfied to very satisfied
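
     The per-question results in the slides that follow are mean scores on this 1-7 scale. A minimal sketch of that averaging, assuming a hypothetical responses.csv of "question_id,score" lines (not the survey's actual data format):

         # Average each question's 1-7 ratings; responses.csv is a hypothetical
         # two-column file with one "question_id,score" line per answer.
         awk -F, '{ sum[$1] += $2; n[$1]++ }
                  END { for (q in sum) printf "%s\t%.2f\n", q, sum[q] / n[q] }' responses.csv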

  3. Top Scoring Areas
     [Chart: top-scoring questions plotted on the 1-7 satisfaction scale against the minimum NERSC User Satisfaction Target]
     Top scoring areas are in consulting, security and HPSS.

  4. Top Scores: JGI Users vs. MPP Users
     Strong overlap between JGI users' and MPP users' rankings, with file systems notably missing from the JGI high scores.

  5. Lowest Scoring Areas
     [Chart: lowest-scoring questions plotted on the 1-7 satisfaction scale against the minimum NERSC User Satisfaction Target]
     Lowest scoring areas are primarily I/O related.

  6. Lowest Scoring Areas: JGI vs. MPP Users
     Little overlap between the two groups' low-scoring areas, though complaints about NX are common to both sets of users.

  7. Year-to-Year Comparison: Disk Configuration and I/O Performance
     [Chart: yearly satisfaction scores on the 1-7 scale against the minimum NERSC User Satisfaction Target; sample sizes N=51, N=59, N=23, N=343]

  8. Year-to-Year Comparison: HPSS Satisfaction
     [Chart: yearly satisfaction scores; sample sizes N=33, N=23, N=16, N=209]

  9. Year-to-Year Comparison: Satisfaction with Batch Queue Structures
     [Chart: yearly satisfaction scores; sample sizes N=49, N=62, N=23, N=367]

  10. Year-to-Year Comparison: Ability to Run Interactively
      [Chart: yearly satisfaction scores; sample sizes N=32, N=51, N=22, N=265]

  11. Year-to-Year Comparison: Application Software
      [Chart: yearly satisfaction scores; sample sizes N=49, N=42, N=11, N=349]

  12. Year-to-Year Comparison: Satisfaction with Software Environment
      [Chart: yearly satisfaction scores; sample sizes N=53, N=44, N=19, N=384]

  13. Year-to-Year Comparison: Performance and Debugging Tools
      [Chart: yearly satisfaction scores; sample sizes N=36, N=31, N=11, N=270]

  14. Year-to-Year Comparison: Overall Satisfaction with NERSC
      [Chart: yearly satisfaction scores; sample sizes N=59, N=86, N=24, N=478]

  15. 23 scores are under NERSC’s User Satisfaction Target

  16. MPP users had 1 score below the NERSC User Satisfaction Target: Hopper batch wait time (4.90).

  17. Comments – What does NERSC do well?
      • Kirsten and Doug are absolutely fabulous. They are always willing to help and have great communication skills and go the extra mile.
      • Consulting/support gives fast responses.
      • The website is useful; the "genepool completed jobs" feature is very nice.
      • The staff we have serving JGI are great. The reliability of the hardware is tougher to handle.
      • NERSC does a great job at providing its users what they need. I work at the JGI, and I think NERSC was presented with a challenging task in providing us the necessary resources. I feel that they have done a great job, albeit imperfect, at accommodating our needs.
      • Access to training, troubleshooting and consulting.
      • NERSC is willing to very quickly roll out modifications to system configurations, queuing policies, etc., to deal with problems as they arise.
      • Puts in effort to listen and tries to improve.
      • Providing bulk CPU power to manage our data processing.
      • The email communications announcing changes, downtimes and updates are frequent and informative.
      • The support from the NERSC staff is great. Quick responses to tickets and emails are appreciated.
      • Excellent technical and applications support, outreach to users.
      • Doug and Kirsten are very responsive.
      • Maintains many software variants within Linux modules, e.g. "module load X" or "module load Y" helps manage software environments.
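
      The last comment refers to the environment-modules system; a minimal sketch of that workflow, with hypothetical package names and versions:

          # Managing software variants with environment modules; the package
          # names and versions below are hypothetical examples.
          module avail                              # list software available on the system
          module load blast/2.2.26                  # load one version
          module list                               # confirm what is loaded
          module swap blast/2.2.26 blast/2.2.28     # exchange it for another version
          module unload blast                       # remove it when finished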

  18. Comments – What else do you need from NERSC to accomplish your scientific goals?
      • Improve communications with customers, increase transparency in decision making regarding systems/policies/etc., and actively involve customers in decision making.
      • /projectb, genepool and the genepool queue, as well as network access for the gpints, all have to have better uptime. The nearly-daily outages in one or more components cause unacceptable delays to research.
      • Bioinformatics does not have one-size-fits-all compute needs. Every task is different and has different RAM, storage, I/O and CPU/core needs. In order to push our compute analysis forward, we need a highly flexible environment.
      • All I need is a few more gpint-type machines with the larger memory.
      • More memory per node and more cores per node. Or at least, more nodes with high memory and as many cores as possible.
      • Please increase /projectb stability (buy better hardware if necessary); bioinformatics analysis is very data-driven and produces many large files, the analysis of which can be time-consuming because not all analyses are easily parallelized on a shared-memory cluster (but run better on nodes with many cores).
      • Would like better stability and robustness of file systems.
      • Better real-time and post-job monitoring tools would be of great help.
      • As all computing resources have been centralized to NERSC, NERSC has shouldered an enormous amount of responsibility, and it is very important that NERSC staff have a real "business-like" service mentality. Every action should have planned testing and contingency.
      • Better uptime/reliability.
      • Some users are not using the system optimally and are hurting other users. There needs to be a mechanism to take notice, and also investment in imparting better (not necessarily the best) practices.
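
      Several of these requests concern per-job memory and core counts. A hedged sketch of how such needs are expressed to a Grid-Engine-style batch system like Genepool's (the resource name h_vmem, the parallel environment pe_slots and the script contents are illustrative assumptions, not confirmed site settings):

          #!/bin/bash
          # Hypothetical Grid-Engine-style job script requesting a high-memory node;
          # resource and parallel-environment names are site-dependent assumptions.
          #$ -N highmem_analysis
          #$ -l h_vmem=16G          # per-slot memory request
          #$ -pe pe_slots 8         # eight cores on a single node
          #$ -cwd                   # run in the submission directory

          module load samtools      # hypothetical software module
          ./run_analysis.sh         # placeholder for the actual pipeline step

      Submitted with qsub, a script like this lets the scheduler match each task's RAM and core profile to an appropriate node.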

  19. Training and Tutorials
      Feb. 10, 2012: "Getting started using the resources at NERSC"
      Feb. 17, 2012: "Archiving data and using the new /projectb file system"
      Feb. 24, 2012: "Best practices for using a fair share batch system"
      Mar. 2, 2012: "Introducing the Genepool webpage and review of batch system best practices"
      Apr. 11, 2012: "Using NX to create a virtual desktop"
      Apr. 17, 2012: "Why is my job not running?" (see the sketch after this list)
      May 9, 2012: "How to get help from NERSC"
      June 20, 2012: "Running jobs and system stability"
      June 27, 2012: "Using Globus Online to transfer data"
      July 9, 2012: "Running on other NERSC systems, Allocations"
      September 19, 2012: "NX training"
      September 21, 2012: "NERSC training on modules, checkpointing, data transfer, ServiceNow, profiling tools, security"
      January 11, 2013: "Introduction to the new hardware and TaskfarmerMQ"
      February 12, 2013: Maintenance day tutorial featuring NERSC information as well as advanced topic presentations by JGI staff (most successful training, with ~50 participants and great feedback)
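
      A hedged illustration of the kind of checks the "Why is my job not running?" session covered, using Grid-Engine-style commands (the job id is a placeholder, and the exact commands depend on the local scheduler):

          qstat -u $USER       # list my jobs and their states (qw = queued/waiting)
          qstat -j 123456      # full record for job 123456, whose "scheduling info"
                               # section explains why a pending job has not started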

  20. JGI/NERSC HPC Study Group
      Meets Fridays from 12-1pm in 100-149C. Led by Kirsten Fagnan, Seung-Jin Sul, Rob Egan and Douglas Jacobsen.
      We have lectured on computer architecture, shared and distributed memory parallelism, OpenMP, MPI, profiling with tools like gprof, performance analysis, benchmarking, profiling Perl and running perlcritic, workflow management with taskfarmermq, and InfiniBand vs. Ethernet.
      Current focus is peer review of software run at the JGI; the first analysis is being done on the metagenome pipeline run by the genome assembly group. Three pairs of study group participants are tackling different questions about the code. Questions, results and analysis are being tracked in version control.
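
      A minimal sketch of the gprof workflow covered in the lectures (myapp is a placeholder program name):

          gcc -pg -O2 -o myapp myapp.c         # build with profiling instrumentation
          ./myapp                              # run normally; writes gmon.out on exit
          gprof myapp gmon.out > profile.txt   # flat profile plus annotated call graph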
