CERN
Data is CERN’s most precious commodity. Quantum StorNext is instrumental in collecting that data quickly and reliably, thereby enabling the scientific community to understand and exploit new ideas and discoveries.
Accelerating Scientific Discovery With StorNext
StorNext Helps CERN Accelerate Particle Research
When it comes to exploring the origins of the universe, managing massive volumes of data can be a galactic task. After much research and testing, CERN, the world’s leading particle physics laboratory, chose StorNext software for data management. The result: a system that allows enormous volumes of data to be accessed and shared in a high-performance computing environment.
Billions of Data Bits Every Second
At CERN, the European Organization for Nuclear Research headquartered in Switzerland, one of the experiments under way is known as ALICE (A Large Ion Collider Experiment). Devoted to researching the physics of matter by accelerating particles and causing them to collide, ALICE involves an international collaboration of more than 1,000 physicists, engineers, and technicians from 30 countries. Together they contribute to resolving one of the oldest challenges in fundamental physics: retracing the birth of matter.
Using detectors, ALICE collects the massive amounts of data generated by the particle collisions. Pierre Vande Vyvre, Project Leader for ALICE Data Acquisition, was tasked with designing an information management system capable of rock-solid acquisition, selection, transfer, storage, and handling of the billions of bits of data generated every second.
To further complicate matters, the ALICE experiment takes place three kilometers away from CERN’s main computer center, where the data resides on mass storage systems. Another challenge in the data acquisition process was that the Linux file system then in place could not share data between nodes quickly and easily.
Benefit: Fast, Effective Data Acquisition
The CERN team investigated several clustered file system (CFS) options. The main requirements for the CFS were maximum aggregate bandwidth; a minimal hardware footprint (the ALICE Data Acquisition room was fairly small); scalability to as many as 100 client nodes; and, most importantly, independence between the CFS and the underlying hardware.
After several weeks of thorough testing, the team selected StorNext. During the initial stage of implementation, the team began with one server, one client, and one disk array. The system has since grown to 180 4Gb/s Fibre Channel ports and 75 transient data storage arrays, with 105 nodes accessing data over Fibre Channel. The ALICE storage architecture also includes 90 StorNext distributed LAN clients accessing data over IP.
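To put those numbers in rough perspective (an illustrative back-of-the-envelope estimate, not a figure from CERN or Quantum): if each of the 180 Fibre Channel ports ran at its nominal 4 Gb/s line rate, the fabric's theoretical aggregate bandwidth would be

\[
180 \times 4\,\text{Gb/s} = 720\,\text{Gb/s} = 90\,\text{GB/s},
\]

an upper bound only; sustained throughput is necessarily lower once link encoding, protocol overhead, and disk performance are taken into account.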
One of StorNext’s key benefits comes from its Affinity feature, which allows the team to direct data to specific primary disks by writing to the relation point associated with each affinity. This means that all machines can operate at maximum performance at all times.
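The sketch below illustrates that pattern in Python. It is a minimal, hypothetical example, not CERN’s code: the stream names, directory paths, and affinity keys are invented, and it assumes an administrator has already tagged each directory with a StorNext affinity (for example with StorNext’s cvaffinity tool). What it demonstrates is that the application simply writes into the appropriate directory; the file system handles placement on the disks bound to that affinity.

```python
import os

# Hypothetical mapping from data streams to directories ("relation points")
# that an administrator has tagged with StorNext affinities beforehand
# (e.g. via `cvaffinity`). The names, paths, and keys are illustrative.
AFFINITY_DIRS = {
    "raw": "/stornext/alice/raw",      # tagged to fast primary disks
    "calib": "/stornext/alice/calib",  # tagged to a separate stripe group
}

def write_event(stream: str, event_id: int, payload: bytes) -> str:
    """Write one event into the directory tagged for its stream.

    Because the directory carries an affinity, the file system places the
    file on the disks bound to that affinity; the application only chooses
    where in the namespace to write.
    """
    directory = AFFINITY_DIRS[stream]
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"event_{event_id:08d}.dat")
    with open(path, "wb") as f:
        f.write(payload)
    return path

if __name__ == "__main__":
    # Example: a raw-detector event lands on the disks bound to the "raw"
    # affinity, with no placement logic in the application itself.
    print(write_event("raw", 1, b"\x00" * 1024))
```

Keeping placement decisions in the file system rather than in the application is what lets every data acquisition machine run at full speed regardless of which stream it produces.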
“StorNext is delivering the high-speed, shared workflow operations and large-scale, multi-tier archiving required by ALICE,” said Vande Vyvre.