High Speed Computing on the Oregon Coast
May 1, 2018

Last week, I had the unique opportunity to attend and speak at the Salishan Conference on High Speed Computing at the Salishan Lodge on Gleneden Beach, Oregon. The conference was founded in 1981, and this year’s theme was “Maximizing Return on Investment for HPC in a Changing Computing Landscape,” with the majority of attendees hailing from Los Alamos, Lawrence Livermore, and Sandia National Laboratories. From many of the talks, I got the distinct impression that ROI may have been an undercurrent of virtually all 27 past conferences. Given the nature of the computational simulations these labs undertake, it’s no wonder that many of the applications and much of the hardware are quite distinctive to this space, with somewhat limited applicability elsewhere. In fact, I learned from one of the attendees from D-Wave Systems that there are only three of their quantum computers in production at customer sites, and all of those customers also attended the conference.

Setting aside the hardware, though, they seem to have made some great strides on the software side, leveraging DOE budgets to develop open source software and even building communities around some of those projects. But here’s where it gets a bit sticky, as was evident in Dan Stanzione’s presentation, “A University HPC Center Perspective on HPC and Cloud Providers.” He concluded that while HPC centers and cloud providers potentially share some similar use cases, for the most part you wouldn’t use an HPC system to run your company’s email, nor would you use a cloud service provider’s HPC offering if you truly required the highly optimized, high-performance simulations that many of the attendees run on a daily basis.

A great example of the need to optimize (customize) software for the hardware to squeeze out every last drop of performance came from Andrew Connolly of the University of Washington in his talk, “Surveying the Sky with LSST: Software as the Instrument of the Next Decade.” Over the first ten years of its lifetime, this new-generation telescope will survey half of the sky in six optical colors, discovering 37 billion stars and galaxies and detecting about 10 million variable sources every night. The telescope will gather 15 terabytes of data per night and will release over 12 petabytes of queryable data annually.
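For a rough sense of scale, here is a quick back-of-the-envelope in Python based on the figures quoted above; the estimate of roughly 300 observing nights per year is my own assumption, not a number from the talk:

```python
# Back-of-the-envelope on the LSST data rates quoted above.
TB_PER_NIGHT = 15          # raw data gathered each night (from the talk)
OBSERVING_NIGHTS = 300     # assumed usable observing nights per year (my assumption)
PB_RELEASED_PER_YEAR = 12  # queryable data released annually (from the talk)

raw_pb_per_year = TB_PER_NIGHT * OBSERVING_NIGHTS / 1000  # ~4.5 PB of raw images
print(f"Raw images gathered per year: ~{raw_pb_per_year:.1f} PB")
print(f"Queryable data released per year: >{PB_RELEASED_PER_YEAR} PB")
# The released volume can exceed a single year's raw images, presumably because
# each release also includes processed data products and catalogs.
```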

So, what does all this have to do with object storage? Well, beyond serving as an economical back-end archive for all of the simulation data being generated (as is the case at Argonne National Laboratory), object storage was the subject of my talk on whether it could actually replace parallel file systems for read-intensive HPC workloads (which is, in fact, what is happening in phase 4 of the JASMIN project with our customer Rutherford Appleton Laboratory; more on that at another time). The talk seemed to resonate with much of the audience, and it spawned some internal debate on whether there could be a reduced need for POSIX front-ends to back-end object stores. This is, of course, a debate that will play out over time, and it is yet another example of the tension between rewriting applications to take advantage of the latest hardware (or software, for that matter) and running more simulations and analysis with the existing software. Big trade-offs to think about…which was the entire point of the conference.
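To make that debate a little more concrete, here is a minimal sketch of the two access patterns in Python, assuming a generic S3-compatible object interface; the endpoint, bucket, and key names are hypothetical placeholders and are not tied to any particular product or site:

```python
import boto3

# POSIX-style access: the application reads a path exposed by a parallel
# file system mount (e.g., Lustre or GPFS).
def read_posix(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

# Object-style access: the same bytes are fetched over HTTP from an
# S3-compatible object store, with no file-system layer in between.
# The endpoint URL below is a hypothetical placeholder.
def read_object(bucket: str, key: str) -> bytes:
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")
    obj = s3.get_object(Bucket=bucket, Key=key)
    return obj["Body"].read()

if __name__ == "__main__":
    # A read-intensive analysis job could use either call; the debate is
    # essentially whether the object path is good enough to make the POSIX
    # front-end unnecessary for these workloads.
    data = read_object("simulation-archive", "runs/2018/output-0001.dat")
    print(f"read {len(data)} bytes")
```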

I’d like to extend my appreciation to the organizers of the Salishan Conference for the opportunity to speak and to learn about the challenges still facing the individuals and teams in this important industry. If you have questions about the role of object storage in high-performance computing, I invite you to contact us.
