Global technology giant HP this morning revealed that a consortium of educational institutions in Victoria had selected its Converged Infrastructure stack to build a high-performance computing (HPC) system, to be named ‘Trifid’, that will aid in the processing of massive research problems.
The consortium includes the Victorian Partnership for Advanced Computing (VPAC), which describes itself as a ‘research service provider’ delivering services in the application of high-performance computing for research needs, as well as La Trobe University and RMIT University. In a statement issued by HP, VPAC said it required new technology to keep up with researchers’ demands, as well as to lower the total cost of ownership of its IT infrastructure.
To meet these requirements, HP said in its statement, the consortium identified the need for a new HPC system and selected HP Converged Infrastructure to power it. HP partner Frontline delivered the solution, which includes HP ProLiant Gen8 Servers and HP Networking solutions under a four-year hardware agreement. Trifid, the statement said, will revolutionise Australian research capabilities by compressing calculations that would take a human more than one million years to complete into a single second, allowing Australian universities to tackle some of the biggest challenges facing both science and industry.
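The "million years in a second" line is standard supercomputing marketing, but it roughly checks out against the system's rated 45.9 TFLOPS, if you grant the (purely illustrative) assumption that a human performs one calculation per second:

```python
# Back-of-envelope check of the "million human-years per second" claim.
# Assumes one calculation per second for the human -- an assumption for
# illustration, not a figure from HP's statement.
TRIFID_FLOPS = 45.9e12              # rated performance: 45.9 TFLOPS
SECONDS_PER_YEAR = 365.25 * 24 * 3600

human_years_per_second = TRIFID_FLOPS / SECONDS_PER_YEAR
print(f"{human_years_per_second:,.0f} human-years of work per second")
```

That works out to roughly 1.45 million human-years of single-operation-per-second work for each second of Trifid's time, comfortably clearing the claimed one million.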
“The competitive nature of research demands access to the world’s most advanced facilities. The ability to deliver results more quickly can be the difference between making a globally significant discovery and simply verifying the outcomes of someone else’s discovery,” said Dr. Ann Borda, chief executive of VPAC. “With Trifid, Victorian researchers now have the tools to revolutionise our understanding of social, scientific, engineering, and medical complexities through computational simulation and modelling.”
According to RMIT’s executive director of IT Services Brian Clark: “Trifid will provide our researchers with a ten-fold capability increase that enables them to better understand a wide range of phenomena which are important across disciplines such as materials, design, engineering, science, and medical applications.”
“Research at La Trobe aims to make a difference in pressing global problems. Linking disciplines and strong researchers is a key to us reaching our research goals and cross-disciplinary approaches often require the latest in computational power to explore and find the best solutions. Infrastructure like Trifid is vital in supporting the research that we undertake at the University,” said La Trobe’s deputy vice chancellor (Research), Professor Keith Nugent.
Trifid will deliver 45.9 TFLOPS of performance through 180 nodes of the latest Intel Sandy Bridge Enterprise processors and features an FDR (Fourteen Data Rate) InfiniBand interconnect. The solution includes HP ProLiant SL230s Gen8 High Performance Blade Servers, HP ProLiant SL250s Gen8 GPU Nodes, the HP ProLiant S6500 Chassis, HP ProLiant DL380p Gen8 Management Nodes, HP Intelligent Ready Racks, as well as HP 3800-48G-4SFP+ 10GbE Switches and HP 5406zl 10GbE Switches.
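Dividing the quoted aggregate figure evenly across the node count gives a sense of scale per machine, though this is a simplification: the GPU nodes in the mix would contribute far more than the CPU-only blades.

```python
# Average per-node throughput implied by the quoted figures.
# A naive even split -- GPU and CPU nodes will not contribute equally.
total_tflops = 45.9
nodes = 180

gflops_per_node = total_tflops * 1000 / nodes   # → 255 GFLOPS per node
print(f"average of {gflops_per_node:.0f} GFLOPS per node")
```

An average of around a quarter of a teraflop per node is plausible territory for dual-socket Sandy Bridge-era servers, which lends some credibility to the headline figure.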
The news comes as it appears that Australian organisations are developing an increasing appetite for high-performance computing solutions. In June 2012, for example, the Australian National University bought a supercomputer capable of 1.2 Petaflops of processing power from Japanese giant Fujitsu, in a deal which was expected at the time to create the largest supercomputer of its kind in the Southern Hemisphere.
And in April, just a few months before that, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia’s national science agency, revealed that it had chosen Australian company XENON Systems to upgrade its existing supercomputer platform.
It’s often thought that large enterprises do the most intense computer processing work, but if you look under the hood at campuses around Australia, you’ll find these kinds of supercomputers humming away behind the scenes. It’s really quite impressive when you consider the sheer amount of data being crunched in these systems.