What Kind Of Technology To Use For A Supercomputing Center.
The first two decades of computer development allowed computers to become smaller and faster. In fact, this was predicted by what is known as Moore’s Law, which came from the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. There is a limit, however, to how small you can make semiconductor devices before you reach the molecular level; after all, atoms and electrons have a physical size themselves.
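To see what that doubling rule implies, here is a minimal sketch. The 1971 starting point (the Intel 4004, roughly 2,300 transistors) is used purely for illustration, not taken from this article.

```python
# A minimal sketch of Moore's Law: transistor counts doubling every two years.
def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a transistor count, assuming one doubling every two years."""
    doublings = (year - start_year) // 2
    return start_count * 2 ** doublings

# Two decades after 1971 means ten doublings: 2300 * 2**10
print(projected_transistors(2300, 1971, 1991))  # → 2355200
```

Ten doublings is a factor of about a thousand, which is why those first two decades felt so dramatic.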
To push speeds further, engineers have turned to something called parallel computer design, or parallel processing. Parallel processing is a mode of computer operation in which a process is split into parts that execute simultaneously on different processors attached to the same computer.
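That definition maps directly onto code. A minimal sketch using Python's standard `multiprocessing` module: one job (summing squares) is split into chunks, each chunk runs on its own processor, and the partial results are combined at the end.

```python
# A minimal sketch of parallel processing: split one job into parts that run
# simultaneously on different processors, then combine the partial results.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one processor: sum the squares of its chunk."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # split the work four ways
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # parts execute in parallel
    total = sum(partials)                         # combine the results
    assert total == sum(x * x for x in data)
```

Real supercomputers do the same thing at vastly larger scale, spreading the parts across thousands of nodes rather than four local cores.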
A competing but still very exploratory computer design is also being developed that, if successful, will make today’s supercomputers seem like the abacus of old. That may be a slight exaggeration, but you get my point, right? Quantum computers are predicted to run 100 million times faster than the HP laptop I use for this website. That prospect, however, has not stopped researchers from further developing parallel processing supercomputers like the IBM SuperMUC.
Faster computers running artificial intelligence routines in the areas of deep learning and neural networks are needed to take research forward in areas such as space exploration, computer security, and much more.
Leadership at the Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities announced today that they have signed a contract with Intel and Lenovo to build SuperMUC-NG, the next generation of the center’s leading-edge supercomputers.
The older version was not as fast or powerful.
SuperMUC-NG will be capable of 26.7 petaflops at its theoretical peak, or more than 26 quadrillion calculations per second. This represents a five-fold increase in computing power over that of the current-generation SuperMUC at LRZ. According to LRZ Director Prof. Dr. Dieter Kranzlmüller, SuperMUC-NG will provide researchers from a variety of scientific fields with significantly more capabilities.
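A quick sanity check on those figures (a sketch, nothing more): one petaflop is 10^15 floating-point operations per second, so 26.7 petaflops is indeed more than 26 quadrillion calculations per second, and the stated five-fold jump implies the current SuperMUC peaks somewhere near 5.3 petaflops.

```python
# Sanity-checking the published figures.
PETA = 10 ** 15               # 1 petaflop = 10^15 floating-point ops per second
QUADRILLION = 10 ** 15        # U.S. short-scale quadrillion

peak_flops = 26.7 * PETA
assert peak_flops > 26 * QUADRILLION       # "more than 26 quadrillion" checks out

# A five-fold increase implies the current machine peaks near:
print(round(peak_flops / 5 / PETA, 2), "petaflops")  # ≈ 5.3 petaflops
```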
“Our new supercomputer, SuperMUC-NG, will provide more compute power for scientists but will also require more expertise,” he said.
Researchers will be able to tackle problems that are more complex, and to that end LRZ experts will assist them, providing an interface between the scientific community and computer science. “We are well-prepared to support scientists in achieving the next level of supercomputing, and as part of the project we will again extend our user support team,” he added.
LRZ, one of the three major German computing centres that constitute the Gauss Centre for Supercomputing (GCS), received funding for SuperMUC-NG’s acquisition from GCS as well as the German federal government and the state of Bavaria, totalling €96 million over the machine’s six-year lifecycle. GCS assumed half the cost with the two governments matching the other half.
The Building Of SuperMUC Supercomputer.
Like its predecessor, SuperMUC-NG will use warm-water cooling, helping LRZ to further reduce power consumption and its carbon footprint (it will also reuse the heat generated by the machine to help generate cold water). Hardware supplier Lenovo focused the cooling concept on sustainability.
Scott Tease, Executive Director of HPC and AI at the Lenovo Data Center Group, said:
“As an HPC hardware supplier, we concentrate on innovations related to performance, reliability, and sustainability. All three themes come together in the context of our collaboration with LRZ and Intel to build out SuperMUC-NG.”
The machine will consist of 6,400 compute nodes based on the Intel Xeon Scalable Processor, connected by Intel’s Omni-Path network using a so-called “fat tree” topology. The system will be outfitted with more than 700 terabytes of main memory and 70 petabytes of disk storage. Next-generation interconnects and greater storage capacity mean that LRZ will be better-suited than ever before to address the increasingly difficult challenges of data management.
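Those aggregate totals also give a back-of-envelope sense of each node. This is a rough sketch from the published numbers only (using decimal units), not an official per-node specification.

```python
# Back-of-envelope per-node figures from the published system totals.
nodes = 6_400
main_memory_tb = 700          # aggregate main memory, terabytes
disk_pb = 70                  # aggregate disk storage, petabytes

mem_per_node_gb = main_memory_tb * 1_000 / nodes
print(f"~{mem_per_node_gb:.0f} GB of main memory per node")          # ~109 GB

disk_to_memory = disk_pb * 1_000 / main_memory_tb
print(f"{disk_to_memory:.0f}x more disk storage than main memory")   # 100x
```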
“We are happy to contribute an essential part to this important project, and in turn support the work going on at the Leibniz Supercomputing Centre,” said Hannes Schwaderer, Head of Enterprise Sales at Intel Deutschland GmbH.
“Processing this data requires immense computational power. Next-generation Intel architecture will play an important role in helping to address data challenges across a broad spectrum of user needs.”
New Technology For Supercomputer Center.
The current-generation SuperMUC has made a major impact on many research areas. Its next generation will enable researchers to dramatically expand the scale and scope of their investigations.
For instance, a team led by Technical University of Munich Professor Dr. Michael Bader and Ludwig-Maximilians-Universität Munich researcher Dr. Alice-Agnes Gabriel used the current-generation SuperMUC to create the largest, longest-ever multiphysics simulation of an earthquake and its resulting tsunami. The team computationally recreated the 2004 Sumatra-Andaman Earthquake and was awarded best paper at SC17, the world’s premier supercomputing conference, held this year in Denver, Colorado, USA.
Bader indicated that next-generation machines would help his team run many more iterations of its simulations. By testing its models with larger numbers of input parameters, he anticipates being able to achieve a better understanding of how earthquakes and tsunamis develop. Ultimately, this could lead to real-time solutions for mitigating their risks.
“Currently, we’ve been doing one individual simulation at a time, trying to accurately guess the starting configurations for things like the initial stresses and forces, but all of these are uncertain,” he said.
“We would like to run our simulation with many different settings to see how slight changes in the fault system or other factors can impact the study, but such large parameter studies would require yet another layer of performance from a machine,” he continued.
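The kind of parameter study Bader describes is a natural fit for a parallel machine, because each setting can run independently. A minimal sketch of the idea, where `run_simulation` is a hypothetical stand-in for one expensive multiphysics run, not the team’s actual code:

```python
# A minimal sketch of a parameter study: run the same (hypothetical)
# simulation across many starting configurations in parallel.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(initial_stress, friction):
    """Hypothetical stand-in for one expensive multiphysics run."""
    return initial_stress * friction  # placeholder result

stresses = [0.5, 1.0, 1.5]    # illustrative initial-stress settings
frictions = [0.2, 0.4]        # illustrative fault-friction settings

if __name__ == "__main__":
    configs = list(product(stresses, frictions))     # every combination
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, *zip(*configs)))
    print(len(results))  # → 6 (one result per parameter combination)
```

Each added parameter multiplies the number of runs, which is exactly why such sweeps demand “yet another layer of performance” from the hardware.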
This team, and the geophysics community in general, is just one of many research domains that will benefit from SuperMUC-NG.