The Networks That Run the Large Hadron Collider
One of the most interesting bits of science and technology news to come along in a while is the activation of the Large Hadron Collider (LHC) on the French-Swiss border near Geneva. For those of you who might have been distracted by financial or presidential election news these past few weeks, the LHC is a gargantuan device designed to accelerate various kinds of particles to nearly the speed of light and crash them into each other to gain more insight into the physics principles that govern the universe.
The project, which is being conducted by the European Organization for Nuclear Research (CERN), has 10,000-plus engineers and scientists working on it. If the LHC works the way the people who dreamed it up think it will, it should reveal a great deal more about some of the core elements of physics, such as the forces that hold the universe together, matter and antimatter, and dark matter and dark energy.
Some critics of the LHC, however, have warned that it could create tiny black holes that will expand until they consume the entire earth — and presumably much more.
Of course, whether to bring to light the mysteries of the universe or drown our world in darkness, the LHC actually has to work. The device was fired up in September, but was shut down just a couple of days later due to a defective connection between two of its magnets. It’s expected to get going again in the spring, but it will take several weeks for the particles to get up to their top speed in the 17-mile circular tunnel.
Mechanical failures notwithstanding, the LHC is a fascinating piece of work, not least because of what goes into constructing something so vast. Take the collection of networks that runs the machine and monitors its experiments. As one might imagine, it is incredibly complex. The LHC runs on a grid computing system — dubbed the LHC Computing Grid (LCG) — that involves a loose cluster of geographically distributed machines. This approach is appropriate, given that grid computing often is applied to so-called “grand challenge problems,” or conundrums that are extremely difficult to resolve and require exceptionally multifaceted, nonintuitive solutions. And as challenges go, they don’t come much bigger than the brass ring that the LHC team is grasping for.
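The scatter/gather idea at the heart of that grid model can be sketched in a few lines. This is only a toy illustration under my own assumptions — the site names are invented and the "analysis" is a stand-in; the real LCG middleware that schedules jobs across its member sites is vastly more elaborate.

```python
# Toy sketch of the scatter/gather pattern behind grid computing: a job is
# split into independent chunks, farmed out to (simulated) sites, and the
# partial results are merged. Site names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

SITES = ["site_a", "site_b", "site_c"]  # stand-ins for real grid sites

def analyze_chunk(site, events):
    """Pretend each site counts 'interesting' events in its chunk."""
    return sum(1 for e in events if e % 7 == 0)  # placeholder for real analysis

events = list(range(1000))  # fake event stream
# Partition the events round-robin, one chunk per site
chunks = [events[i::len(SITES)] for i in range(len(SITES))]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(analyze_chunk, SITES, chunks))

total = sum(partials)  # gather: same answer as analyzing everything centrally
print(total)
```

The point of the pattern is that each chunk is independent, so the work can run anywhere — which is what lets a "loose cluster of geographically distributed machines" behave like a single computer.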
The LCG has been in operation since 2002 and presently integrates thousands of computers spread out across 200-plus sites in more than 30 countries, according to its Web site. Whenever the LHC gets going, the LCG will produce about 15 million GB of data each year — about enough to fill up the equivalent of more than 20 million standard CDs. That data will in turn be analyzed by 100,000 processors.
The people involved with LHC see the Grid — as they lovingly call it — as eventually having an impact that goes well beyond this project. Ian Halliday, chief executive of the U.K.’s Particle Physics and Astronomy Research Council, predicted the LCG “will have a profound effect on the way society uses information technology, much as the World Wide Web did.”
Presuming, of course, that we aren’t enveloped by black holes first.
– Brian Summerfield, email@example.com