High-Performance Computing in Finance: A Customer’s Perspective [3/7]
13 Jun 2007
Excerpted from a paper I delivered on January 16, 2007 at the Microsoft High-Performance Computing in Financial Services event in New York.
High-Performance Rocket Science
So why then am I not here telling you today that high-performance computing is as common as regular expressions at my company or that we’ve got a roadmap to parallelize everything that would benefit from it? Why are we here today trying to persuade you to adopt a high-performance computing solution rather than you telling us you’re already using one? If there is truly that much parallelism potential in the financial industry, why aren’t we already parallel?
We could talk about costs and ROI and the IT headaches involved, but I really don’t think that’s it. Architectural changes either bubble up from the engineers or trickle down from management, and I believe that right now neither is pulling the trigger, nor is either sure that it can. There is fear. There is uncertainty. There is doubt.
A couple of years ago, when processor clock speeds and on-die caches were still growing all the time, all we had to do to get better performance was buy a new machine and redeploy our application. Most of the time, we didn’t even have to recompile. But, as Herb Sutter said, “the free lunch is over.” To get performance improvements, we now have to use concurrency, and concurrency requires engineering effort and introduces risk. On the face of it, despite various reports from vendors and pundits, high-performance computing seems hard. Really hard. But no one is totally sure.
I think that there is an inaccurate perception that high-performance computing is harder than it actually is. It has a reputation a lot like rocket science. When we say something is hard, “but it’s not rocket science”, we’re implying that rocket science is at the apex of hard. It conjures the image of extraordinary talent, technique, knowledge, know-how, raw genius, all brought to bear, in the face of great risk and pressure, on a task so awesome that it seemed before just a dream. It is a battle between adversity and science. It is Don Quixote in a lab coat. I think that’s the way high-performance computing comes across.
All of us here today hope to persuade you that high-performance computing is now easier and more attainable than ever. And, really, it is. Nonetheless, I am certain that you will leave today with some small trace of disgust and disappointment that it is still harder and more confusing to create a high-performance computing solution than you had hoped. Perhaps the tools target the wrong programming language or platform. Perhaps a solution will require you to upgrade your hardware or install a new operating system. Perhaps it requires a slew of different products and a bunch of duct tape and interoperability artifice just to get “hello, world” working. Regardless of the reason, it’s easy to get dismayed.
Our dream, after all, isn’t to put a man on the moon. We just want instantaneous results. Super-linear scalability. Orders of magnitude performance increases just by configuration, recompilation, or redeployment of our applications to new hardware. We want to spend less, get more, and feel more confident that our systems will shoulder ever more burgeoning loads without buckling under the weight. And we want easy. We want it so easy that we don’t have to worry, as often as we do, about the myriad ways we could be getting it wrong.
Today, we can buy processors with more cores, motherboards with more processors, and racks of more and more servers all connected by faster flavors of Ethernet, for less money than a one-year service contract on a Cray 90 supercomputer cost fifteen years ago. Yet we do not realize this dream. For most, recompiling a line-of-business application won’t extract any performance gains, and most engineering managers aren’t sure they have the parallel and distributed computing expertise on staff to make the hardware break a sweat. They worry that they will make their systems more complex and more expensive without making them faster. They worry that the existing products, tools, and samples will require too much adaptation, too much interpretation, too much experimentation to be either useful or cost-effective.
Too often, they’re right.