
Using gaming technology to understand the universe

Seneca student speeds up simulation software used by CERN

Section of Large Hadron Collider tunnel — Image: CERN

Work done at Seneca, using the hardware usually found in video game computers, could one day lead to a better understanding of our universe.

Bradly Hoover, a Computer Programming and Analysis graduate and currently a Software Development degree student in the School of Information & Communications Technology, has found a way to take advantage of fast gaming processors to significantly speed up calculations being done at CERN, the European Organization for Nuclear Research in Switzerland.

Bradly’s work focuses on a program called Travel that simulates the tracking of groups of sub-atomic particles. This information is used by CERN in the linear accelerators that supply the particle beams for the massive 17-mile-long Large Hadron Collider, which collides particles at nearly the speed of light to probe the elementary constituents of matter.


Professor Chris Szalwinski and Bradly Hoover

Seneca became involved when Bradly expressed a desire to try something beyond his classroom assignments. Professor Chris Szalwinski reached out to a contact at CERN to see if there were any projects Bradly could work on. Several programs that could benefit from updates were suggested, but one stood out.

Travel has traditionally been run on computers with central processing units (CPUs), and it could take a long time to finish the larger and more precise simulations. Chris saw the challenge presented by CERN as an opportunity to implement a programming technique that has been gaining in popularity: the use of gaming technology to do the kind of repetitive calculations required by Travel.

The latest video game computers process large amounts of data quickly in order to draw and animate complex scenes.

The drawing work is handled by a graphics processing unit (GPU) that executes its tasks in parallel — handling many sets of information simultaneously — which makes the GPU much faster than a CPU for certain kinds of work.
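The idea can be sketched in CUDA, the toolkit mentioned later in this article. In the illustrative example below (not code from Travel), work that a CPU would do one element at a time in a loop is instead done by many GPU threads at once, each handling one element:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread scales one array element.
// On a CPU, this would be a serial loop over all n elements.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;  // one million elements
    float *d;
    cudaMallocManaged(&d, n * sizeof(float));  // memory visible to CPU and GPU
    for (int i = 0; i < n; ++i)
        d[i] = 1.0f;

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("d[0] = %f\n", d[0]);
    cudaFree(d);
    return 0;
}
```

Because every element is independent, the GPU can process thousands of them simultaneously, which is exactly the property that makes repetitive simulation calculations a good fit for this hardware.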

It took the better part of a semester to re-write some modules of the program.

CMS detector at CERN

A section of the collider shown during maintenance suggests the complexity and scale of CERN's operations 
— Image: CERN

The first obstacle Bradly faced was Fortran, the computer language used to create Travel.

“It’s an older language, one not taught at Seneca,” he says. “So I had to get familiar with it.”

That was followed by weeks of planning and re-writing the compute-intensive parts of Travel to run on a GPU, using C++ and Nvidia’s CUDA® parallel computing platform.
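The general shape of such a port can be sketched, with the caveat that the particle structure and drift step below are illustrative assumptions and not Travel’s actual code: a loop over particles that a Fortran/CPU program would execute serially becomes a CUDA kernel in which each thread advances one particle.

```cuda
#include <cuda_runtime.h>

// Hypothetical particle state; Travel's real data layout is not shown here.
struct Particle {
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
};

// A simple free-drift step: each GPU thread advances one particle by dt.
// In the original Fortran version this kind of update would be a serial
// DO loop over every particle in the group.
__global__ void drift(Particle *p, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;
    p[i].x += p[i].vx * dt;
    p[i].y += p[i].vy * dt;
    p[i].z += p[i].vz * dt;
}

// Host-side launch: one thread per particle, 256 threads per block.
void drift_step(Particle *d_particles, float dt, int n)
{
    int blocks = (n + 255) / 256;
    drift<<<blocks, 256>>>(d_particles, dt, n);
}
```

Since each particle’s update depends only on its own state, the calculation parallelizes naturally, which is why this part of the program stood to gain the most from the GPU.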

The result was a performance gain of up to 72 times in certain instances. In other words, what would have taken months could now be accomplished in days.

Bradly’s work has been presented to CERN, is posted on its website, and his recommendations are being considered. Whether or not his re-write of Travel finds widespread use, or influences the next generation of programs used at CERN, he is glad to have worked with a leading-edge organization and a technology with such potential.

“CERN is the pinnacle of science and innovation, and to contribute to that as a student is amazing,” Bradly says.

Chris says this project reflects a broader shift toward software developers programming for multi-core CPUs and many-core GPUs. As a result, a number of courses taught in the School of Information & Communications Technology now include instruction in both.