Computing for the Large Hadron Collider

Interview: CERN's deputy IT head talks Higgs, Moore and the cloud.
Information technologists at the European Organisation for Nuclear Research (CERN) face a tough challenge in the wake of its discovery of a "particle consistent with the Higgs Boson" last month.
CERN has committed to keeping its Large Hadron Collider (LHC) online for several more months, as physicists conduct yet more experiments to determine precisely what the discovery means.
The commitment requires CERN's IT team to once again adjust planning around the lifecycle of IT infrastructure to meet an extended project deadline.
iTnews’ Brett Winterford sits down with CERN’s deputy head of IT, David Foster, to discuss how the organisation plans to stay ahead of the technology curve.
Check out Brett's photos of CERN's 'Tier Zero' data centre here.
What does the recent Higgs Boson discovery mean for IT?
What was discovered was a particle at an energy that seems to be compatible with what we understand the Higgs should be. All of this – apart from the fact that a particle exists at that energy – has to be verified.
Yes, it’s a discovery, something new and interesting that seems to be compatible with the Higgs, but the experiments slightly disagree as to what energy it resides in.
So we need to do a lot more measurements. It’s all about data. You need to collect a lot of event data to have a statistical chance of seeing the same outcome enough times to prove that what you’ve found is real.
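As a rough illustration of why more event data matters: in a simple counting experiment, the significance of an excess grows roughly as the square root of the amount of data collected. The sketch below uses the standard S/sqrt(B) approximation with purely invented numbers, not actual LHC figures.

    # Rough counting-experiment sketch: significance grows with the square
    # root of the amount of data collected. Numbers are purely illustrative.
    import math

    def significance(signal_events: float, background_events: float) -> float:
        """Approximate significance Z ~ S / sqrt(B) for a counting experiment."""
        return signal_events / math.sqrt(background_events)

    # Doubling the data roughly doubles both S and B, so Z grows by sqrt(2).
    for data_multiplier in (1, 2, 4):
        s, b = 100 * data_multiplier, 10_000 * data_multiplier
        print(f"{data_multiplier}x data -> significance {significance(s, b):.2f}")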
We’re going to keep the accelerator running three months longer than planned, into 2013, at the current energy levels.
The LHC was supposed to shut down in 2013 so we could upgrade the accelerator to the highest energy available. We are currently operating at 4 TeV per beam, which means a collision has a combined energy of 8 TeV. We originally intended to move to 7 TeV per beam (a total of 14 TeV per collision) by the end of 2014. But the success of the current experiments has convinced us to stay the course at the current energy levels.
The discovery of the new particle will require intense analysis work. It will certainly require at least what we have today in terms of computing, disk and network, and will continue the trend towards increased capacity in all these areas going forward.
Can you give us some idea of the current data processing requirements for the LHC experiments?
When the LHC is working, there are about 600 million collisions per second. But we only record here about one in 10^13 (ten trillion).
If you were to digitize all the information from a collision in a detector, it’s about a petabyte a second or a million gigabytes per second.
There is a lot of filtering of the data that occurs within the 25 nanoseconds between each bunch crossing (of protons). Each experiment operates its own trigger farm – each consisting of several thousand machines – that performs real-time filtering within the LHC. These trigger farms decide, for example: was this set of collisions interesting? Do I keep this data or not?
The non-interesting event data is discarded, while the interesting events go through a second filter, or trigger farm, of a few thousand more computers, also on-site at the experiment. [These computers] have a bit more time to do some initial reconstruction – looking at the data to decide if it’s interesting.
Out of all of this comes a data stream of a few hundred megabytes to a gigabyte per second that actually gets recorded in the CERN data centre, the facility we call ‘Tier Zero’.
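To put those figures together, here is a back-of-envelope sketch of the data reduction they imply. It is a rough illustration only, using the rates quoted above rather than any official CERN numbers.

    # Back-of-envelope data-reduction arithmetic using the figures quoted above.
    # A rough illustration only; real trigger rates vary by experiment and year.
    raw_rate_bytes_per_s = 1e15      # ~1 petabyte/s if every collision were digitised
    recorded_bytes_per_s = 1e9       # ~1 GB/s actually written at Tier Zero
    bunch_spacing_s = 25e-9          # 25 ns between bunch crossings for the trigger decision

    reduction_factor = raw_rate_bytes_per_s / recorded_bytes_per_s
    print(f"The trigger chain keeps roughly 1 byte in every {reduction_factor:.0e}")

    seconds_per_day = 24 * 3600
    print(f"~{recorded_bytes_per_s * seconds_per_day / 1e12:.0f} TB recorded per day at Tier Zero")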
How much of the data is stored and processed on site at Tier Zero?
For the first time in particle physics, not all the processing for the data generated by these experiments could be done on-site. It simply requires too much processing power. So what we have is the concept of a grid, where we tie together all the computing resources of collaborating institutes so that it essentially looks like one pool of infrastructure.
The filtered raw data comes into CERN’s Tier Zero data centre, but it is also sent out in almost real time to eleven other large ‘Tier One’ data centres. These large data centres provide storage and long-term curation (backups) of the data.
Other data products flow down to Tier Two centres. The Tier Twos provide CPU and disk for analysis and simulation. There are around 150 Tier Two data centres on the grid, made up of other labs, institutes and universities all over the globe.
So there is a continuous evolution of the data. You reconstruct the events, filter out noise, group data sets together and create new data sets.
There is also always at least a second or third copy of the raw data distributed on the grid.
We have a distributed back-up system. You don’t have just one copy of the ATLAS data at the CERN site; that data is distributed across various sites that support the experiment. There is always one copy at CERN and a number of other copies elsewhere on the grid.
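A minimal sketch of what such a replication policy amounts to is below. The class, site names and policy threshold are invented for illustration; this is not the actual grid data-management software.

    # Hypothetical replica catalogue enforcing "one copy at CERN, plus at
    # least N copies elsewhere on the grid". Site names are invented.
    from collections import defaultdict

    class ReplicaCatalogue:
        def __init__(self, min_offsite_copies: int = 2):
            self.min_offsite_copies = min_offsite_copies
            self.replicas = defaultdict(set)   # dataset name -> set of site names

        def register(self, dataset: str, site: str) -> None:
            self.replicas[dataset].add(site)

        def is_safely_replicated(self, dataset: str) -> bool:
            sites = self.replicas[dataset]
            offsite = {s for s in sites if s != "CERN-TIER0"}
            return "CERN-TIER0" in sites and len(offsite) >= self.min_offsite_copies

    catalogue = ReplicaCatalogue()
    for site in ("CERN-TIER0", "FNAL-TIER1", "RAL-TIER1"):
        catalogue.register("ATLAS-raw-2012A", site)
    print(catalogue.is_safely_replicated("ATLAS-raw-2012A"))   # True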
What defines this system as a ‘grid’?
If I’m a physicist and I have some processing I want to do on some data, I submit that job to a physical machine that knows about the whole infrastructure. It knows at what site that data has been stored or replicated to on the grid, and it sends my job to one of those sites. It will be executed there against the data, and the results sent back to me. So jobs are flying all over the place.
What makes it a grid is that the user doesn’t see which server or storage system is being used, but rather submits a job and the system takes care of the execution of the job somewhere on a worldwide basis.
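In spirit, that data-aware brokering looks something like the sketch below. It is a simplified illustration, not the real grid middleware; the site names, datasets and the random scheduling rule are made up.

    # Simplified data-aware job brokering: send the job to a site that already
    # holds the dataset, "run" it there, and return the result.
    import random

    SITE_DATASETS = {
        "FNAL-TIER1": {"CMS-reco-2012"},
        "RAL-TIER1": {"ATLAS-reco-2012", "CMS-reco-2012"},
        "KIT-TIER1": {"ATLAS-reco-2012"},
    }

    def submit_job(dataset: str, analysis):
        """Pick a site holding the dataset, run the analysis there, return the result."""
        candidates = [site for site, held in SITE_DATASETS.items() if dataset in held]
        if not candidates:
            raise RuntimeError(f"No site on the grid holds {dataset}")
        site = random.choice(candidates)   # a real broker also weighs load, queues, policy
        return site, analysis(dataset)     # executed "at" the chosen site

    site, result = submit_job("ATLAS-reco-2012", lambda ds: f"histograms from {ds}")
    print(f"Job ran at {site}: {result}")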
To what degree is the data processed in real-time?
It occurs at CERN in real time, and I guess you would call it pseudo-real time out to the Tier One facilities, and a cached batch process out to the Tier Twos.
Essentially, what the system allows is almost real time analysis of the data. And that is a new innovation. We are able to go from taking the raw data to actually writing the research papers in an extremely short period of time. Sometimes we are showing results at presentations within weeks or days of the raw data being captured. This was never possible in the past.
Grid computing has enabled the very fast delivery of results into the hands of the physics community. And that’s been very spectacular.
You mentioned that the LHC has been nearly 30 years in design and construction. How can you predict on day one what technology will be available once the accelerator and experiments have been realized?
Generally, when the initial planning is done for a physics idea, the technology is not necessarily available. What you are relying on is that the time it takes to design the accelerator and experiments is so long that, by the time you need the technology, it will be available.
So Moore’s law has been very important to us, and it has proven real. We have been relying on the doubling of processing speed every 18 months. And these days we are looking at parallelization using multi-cores to keep the trend going.
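As a quick illustration of what planning against that assumption looks like, the sketch below shows the arithmetic of an 18-month doubling period. It is illustrative only, not CERN's actual capacity model.

    # Planning arithmetic behind "processing power doubles every 18 months".
    # Purely illustrative; not CERN's capacity model.
    def capacity_multiplier(years: float, doubling_period_months: float = 18.0) -> float:
        return 2 ** (years * 12 / doubling_period_months)

    for years in (3, 10, 15):
        print(f"After {years} years: ~{capacity_multiplier(years):.0f}x the processing power")
    # 3 years -> ~4x, 10 years -> ~102x, 15 years -> ~1024x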
But it’s the evolution of networking that made the grid realisable.
In the late 90s we anticipated the networking we might have available to us for the LHC would be 622 Mbps. But what happened was the deregulation of the telecom industry in the late 90s and 2000s. That totally changed the landscape: instead of a few hundred Mbps, we found that we could have gigabits.
Without this step-change in networking, it wouldn’t make sense to move all this data around. It enabled a change of the business model for computing. We were quick to realize that and adapt the way we did computing in order to capitalize on that. That decade from 1998 to 2008 was formative to developing a new concept of collaboration on a global scale.
Was the particle physics community open to such collaboration?
This is one characteristic of the high-energy physics community – it is highly collaborative, highly distributed and highly mature in its methods. We were able to build the LHC computing grid both because the technology became available and because people had the will and desire to become part of that collaboration.
Everything in high-energy physics has been a major collaboration, and in that sense it is quite different to a lot of other scientific disciplines. For example, any experiment at the LHC tends to be a major collaboration in its own right. The number of people working on an experiment at the LHC - dedicated to a detector they are building and the results they are seeking - is as big as the total staff of CERN.
To what degree do these experiments also compete?
From an IT perspective, each experiment – be that ATLAS, ALICE, CMS and so on – designs and develops its own computing solution. That’s part of the competitive advantage of each experiment.
ATLAS and CMS, for example, are both looking to solve similar physics problems. They do not communicate during experiments, to avoid systematic errors and prevent bias creeping into the analysis. They do not reveal results to each other until the data is analysed, but then they issue joint press releases. So it’s a mix of collaboration and competition. It’s co-opetition.
Did CERN develop its own software for operating the global compute grid?
For the last ten to twelve years, there has been a continuous stream of EU investment in middleware production.
Very early on there were EU-funded projects to help develop the grid middleware. It started with the European Data Grid in around 2000. After that there were three more EU projects to develop middleware under a programme titled Enabling Grids for E-Science in Europe (EGEE).
Following that there was a separate middleware project that continues today – the European Middleware Initiative – which manages software development and collaboration around the middleware, and the EGI or European Grid Initiative, which handles the operational aspects of the grid such as account management and monitoring.
What this consistent funding has produced is a robust and organic system. At any time there may be sites not available, sites down for maintenance, network problems or new sites joining – all of this means the grid has to be very dynamic. It has to accommodate a resource pool that changes almost in real time. If a job fails, it can be rescheduled at a different place.
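That resilience boils down to something like the retry loop below. It is a toy sketch of the idea only – the sites, the simulated failures and the retry rule are invented, and the real workload-management systems are far more sophisticated.

    # Toy sketch of rescheduling a failed job at a different grid site.
    # Sites, failure simulation and retry rule are invented for illustration.
    import random

    SITES = ["RAL-TIER1", "KIT-TIER1", "FNAL-TIER1", "TRIUMF-TIER1"]

    def run_at(site: str) -> str:
        if random.random() < 0.3:              # pretend ~30% of sites are down or busy
            raise RuntimeError(f"{site} unavailable")
        return f"result produced at {site}"

    def run_with_rescheduling() -> str:
        for site in random.sample(SITES, k=len(SITES)):   # try sites in random order
            try:
                return run_at(site)
            except RuntimeError:
                continue                       # reschedule at the next candidate site
        raise RuntimeError("Job failed at every candidate site")

    print(run_with_rescheduling())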
Were commercial packages available back when this was developed?
We don't tend to use complete commercial solutions. Wherever a commercial solution emerges that is viable for us to use, we won't develop our own. But grid computing technology was, and still is, very new in commercial terms. So it has required some of our own investment.
Who owns the middleware?
Most of the grid middleware developed by EMI and used by the LHC is released under the Apache Software Licence v2.0 (ASLv2). The ASLv2 licence allows redistribution of the software as part of commercial products. Although the original redistributed code must retain the ASLv2 licence, any work derived from it can be licensed under different, possibly commercial, software licences. The choice of adopting ASLv2 was made explicitly to allow commercial companies to include the grid middleware in their products.
Currently no fees are paid for using the software and no patents have been filed to my knowledge.
Has this software been exploited for use by cloud computing providers?
It’s important first to note the distinction between cloud and grid computing.
Grid computing is the connection of resources owned by the community. In grid computing, we both have computers and we share those resources: we both have a big CapEx invested, and we both want to optimise use of that CapEx.
In cloud computing, computing resources are bought and sold over the network. A third party has spent CapEx on infrastructure and customers buy those resources on an OpEx basis.
The distinction is predominantly one of business model. From a technical point of view there is rather less of a difference – the underlying technology isn’t so different. Cloud computing is another interface to get a raw virtual machine or application. It’s an interface to get work done, where you can re-purpose processors to do different jobs.
You could say that the defining difference from a technical perspective is also that cloud computing can be rather more interactive, whereas grid computing is entirely batch in its current form.
The paradigm shift for both was advances in networking. We saw a potential for a cloud business model very early on. Once there was a big market of people with access to high performance networks, service providers could make computing available at massive economies of scale through huge data centres. That reduces OpEx to a minimum for the customer. Everyone recognizes this as a big way forward. You can do so much more when you treat computing as an Operational Expense.
Does CERN make use of the cloud, even with your large CapEx investment?
So far we are experimenting with cloud computing. There are some practical issues associated with cloud computing. One is that it is not in the interests of cloud computing providers to provide a homogeneous interface to cloud. They see a competitive advantage through the specialized interfaces they provide. Switching from one cloud provider to another is problematic.
Further, moving the amount of data we produce in and out of a cloud is prohibitively expensive. Current cloud pricing models are heavily weighted towards the cost of moving data in and out.
And finally, we have a very large capital expenditure. We have all the skills and the equipment, and we do things ourselves, so we are probably not going to pay a premium to do the same thing with a cloud provider. Beyond a certain scale, it must always be more expensive with a cloud provider.
All that being said, the fact that cloud is an OpEx play is an interesting model for new, smaller-scale science experiments. Instead of hiring sys admins, buying hardware, paying for a hosting environment and a network expert, perhaps you just buy the computing you need – it’s more convenient. Why would you invest your capital in your own infrastructure?
Whether it works for you is a question of application, scale and risk. It depends on the scale and complexity of the data and processing you need to do. You might be a four-person team, but you might also be doing an experiment generating petabytes of data. So yes, you might be a small outfit, but it’s probably cheaper to hire two more people and build your own infrastructure than to buy from a cloud.
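A crude way to frame that build-versus-buy comparison is sketched below. Every price in it is made up for illustration – the point is only the structure of the trade-off (fixed owned costs that amortise versus cloud costs that scale with usage), not the numbers.

    # Crude build-vs-buy comparison. All prices are invented for illustration;
    # the point is that owned infrastructure amortises while cloud cost scales
    # with usage, so there is a crossover scale.
    def owned_cost(core_years: float) -> float:
        staff = 2 * 80_000           # hypothetical: two extra engineers per year
        hardware = core_years * 150  # hypothetical amortised cost per core-year
        return staff + hardware

    def cloud_cost(core_years: float) -> float:
        return core_years * 400      # hypothetical on-demand price per core-year

    for core_years in (100, 1_000, 10_000):
        cheaper = "cloud" if cloud_cost(core_years) < owned_cost(core_years) else "owned"
        print(f"{core_years:>6} core-years: {cheaper} infrastructure is cheaper")
    # Small workloads favour the cloud; large ones favour owned kit (with these toy numbers).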
There will be more use cases as cloud providers push down their prices. There is clearly a premium at the moment.

Link: http://www.itnews.com.au/News/310769,computing-for-the-large-hadron-collider.aspx/0