Computing at HEPHY
Computing at HEPHY Natascha Hörmann Evaluation 2010
Overview
• Computing group at HEPHY
• The Vienna Grid computing center
• Future plans
Mission
• Provide infrastructure and support for general computing
• Operate and maintain a highly available and powerful Grid computing center
Computing Group
• Leader: G. Walzel
• Group: S. Fichtinger, U. Kwapil, D. Liko, N. Hörmann, B. Wimmer
• General computing: 3.0 FTE; Grid effort: 2.5 FTE
Tasks of the Computing group
• Run the HEPHY computer infrastructure
  • general tasks (network, mail, printers), Windows/Mac/Linux desktops, internet services, application server, file system services (AFS, NFS), …
• Interactive computing facility for Belle, Theory and CMS
Tasks of the Computing group
• Provide the infrastructure for worldwide distributed Grid computing as a Tier-2 center
  • computing power and storage for the CMS experiment
  • an essential computing resource for our physics analysis groups at HEPHY
  • resources also for other applications (e.g. non-HEP)
Resources at the HEPHY Grid computing center
• Available: 1000 CPUs, 500 TB of disk
  • Compute nodes: Sun servers, 2 x 4-core Intel Xeon, 2.5 GHz
  • Storage: Supermicro RAID, Disk Pool Manager (DPM)
  • Operating system: SLC5
• Presently operated: 730 CPUs, 320 TB of disk
• In 2010 the Academy decided to charge the costs for electricity to our institute (15 % of the institute's material budget); for budget reasons we operate only ⅔ of the equipment
• The current hardware is four years old and has to be replaced starting in 2012; funding is required but not yet secured
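A quick consistency check of the "⅔" figure, using only the numbers quoted above:

    730 / 1000 = 0.73 of the CPUs in operation
    320 / 500  = 0.64 of the disk space in operation

i.e. roughly two thirds of the installed capacity is powered on.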
The Grid Tier-2 @ HEPHY
• Tier-2 @ HEPHY provides 2 % of the overall CMS Tier-2 capacity
• Average availability is 98 %, among the best sites
• About 2,400 jobs/day on average
• [Chart: jobs per day at HEPHY over time, broken down into Analysis, MC simulation and Job-Robot; scale up to 6,000 jobs/day]
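As an illustration of how such a per-category daily job summary could be produced (not from the original slides: the accounting file name, its simple "date,category" CSV format, and the use of Python are assumptions for this sketch only):

    # Minimal sketch: average jobs/day per category from a hypothetical
    # accounting file whose lines look like "2010-06-14,Analysis".
    # The file name and format are illustrative, not the site's real tooling.
    from collections import defaultdict
    import csv

    def daily_job_summary(path="job_accounting.csv"):
        per_day = defaultdict(lambda: defaultdict(int))  # date -> category -> count
        with open(path, newline="") as f:
            for date, category in csv.reader(f):
                per_day[date][category] += 1
        n_days = len(per_day) or 1
        totals = defaultdict(int)
        for counts in per_day.values():
            for category, n in counts.items():
                totals[category] += n
        print(f"Average over {n_days} days: {sum(totals.values()) / n_days:.0f} jobs/day")
        for category, n in sorted(totals.items()):
            print(f"  {category}: {n / n_days:.0f} jobs/day")

    if __name__ == "__main__":
        daily_job_summary()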
Data transfer at the Tier-2 @ HEPHY
• Outgoing transfer: Vienna -> Tier-1/2s
• Incoming transfer: Tier-1/2s -> Vienna
• Average transfer of about 2 TByte/day incoming and 1 TByte/day outgoing data
• [Charts: transfer rates in MByte/s over time; averages of 22.9 MB/s incoming and 10.5 MB/s outgoing]
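A rough cross-check between the daily volumes and the plotted average rates, using only numbers from this slide:

    2 TByte/day / 86,400 s/day ≈ 23.1 MB/s   (quoted incoming average: 22.9 MB/s)
    1 TByte/day / 86,400 s/day ≈ 11.6 MB/s   (quoted outgoing average: 10.5 MB/s)

so the stated daily volumes and the average rates agree to within about 10 %.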
Grid infrastructure for scientific groups
• Current projects served by the Tier-2 @ HEPHY*:
  • High Energy Physics - CMS (SUSY, Quarkonia, b-tagging)
  • Theory (CP violation)
  • Medical application (hadron tumor therapy studies)
• Future projects:
  • Stefan-Meyer Institute (PANDA at FAIR)
  • High Energy Physics (Belle II)
* Federated Tier-2 together with the University of Innsbruck
Hadron tumor therapy studies using Grid@HEPHY
• Cooperation with the Medical University of Vienna in connection with the radiation therapy center MedAustron in Wiener Neustadt
• Simulation studies of the energy deposition in material by heavy charged particles such as protons or carbon/helium ions, looking especially at the fractional tail
• Results submitted to Z. Med. Phys.
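For context only (not on the original slide): the central quantity in such energy-deposition simulations is the mean stopping power of the traversed material, commonly described by the Bethe formula,

    -\left\langle \frac{dE}{dx} \right\rangle
      = \frac{4\pi}{m_e c^2}\,\frac{n z^2}{\beta^2}
        \left(\frac{e^2}{4\pi\varepsilon_0}\right)^{2}
        \left[\ln\!\left(\frac{2 m_e c^2 \beta^2}{I\,(1-\beta^2)}\right)-\beta^2\right],

where z is the charge number of the projectile, β its velocity in units of c, n the electron density and I the mean excitation potential of the material; its steep rise as the particle slows down produces the Bragg peak that makes protons and light ions attractive for tumor therapy.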
Advantages of running a Tier-2 @ HEPHY
• Grid computing site for CMS physics groups
  • selected as one of the computing clusters for SUSY (1 of 5) and b-tagging (1 of 4)
  • the relevant datasets of these groups are stored and the analysis jobs are executed at our site
• Infrastructure for our local physics analysis group
  • direct access to resources enhances successful contributions from HEPHY physics groups to the SUSY, b-tagging and Quarkonia analyses
  • enough storage is available to store our own analysis results
  • we control the usage of the computing power and storage in case of important findings or when resources are needed for conferences
HEPHY CMS Center
• Commissioned to run general shifts for the CMS experiment in Vienna (saves travel costs)
Future requirements
• LHC computing requirements for the coming years: until 2012 an increase of 60 % in CPU and 100 % in disk space is needed
• We assume replacement of the equipment with a constant budget (with the typical increase of performance every year)
• [Charts: expected CPU and disk needs in 2011 & 2012; source: ICHEP2010, 28 Jul. 2010, Paris, "Progress in Computing", Ian Bird]
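A rough plausibility check of the constant-budget assumption (the annual price/performance gain r used below is an assumed illustrative figure, not taken from the slide): replacing four-year-old hardware at the same cost buys roughly a factor r^4 more capacity, e.g.

    r = 1.2  ->  1.2^4 ≈ 2.1
    r = 1.3  ->  1.3^4 ≈ 2.9

which would be enough to cover the required +60 % in CPU and +100 % in disk without a budget increase.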
Future Austrian computing landscape
• Vienna Scientific Cluster (VSC)
  • will host our Tier-2 computing center through our connections to the Vienna University of Technology
  • the installation with new equipment is planned to start in 2012 at the VSC location (during the technical stop of the LHC), provided funding is secured
• Austrian Center for Scientific Computing (ACSC)
  • an initiative from several universities
  • a common framework for computing cooperation between institutes in Austria and abroad
  • HEPHY intends to be part of ACSC via the VSC
Summary & Outlook
• The HEPHY Grid computing center is running smoothly and allows us to participate in the data analysis at the forefront
  • important for our physics analysis
  • important for our position in the CMS collaboration
• Funding for the upgrade of the Grid Tier-2 by replacement of the hardware needs to be secured
• HEPHY plans to install the new equipment of the Grid computing center at the Vienna Scientific Cluster (VSC) in 2012
• HEPHY intends to participate in the Austrian Center for Scientific Computing (ACSC), which is important for our future computing interests