The Marc Jacobs Utilization Meter has been pegged for at least two weeks now on a combination of client work, internal projects, recruiting, and writing (hence the appearance of my blog having fallen down a well). It’s great to be busy, but I hate seeing the blog go stale.

In any event, I had an article published in GRIDtoday this morning entitled, “Grid in Financial Services: Past, Present, and Future”. Derrick Harris, the editor of GRIDtoday, reached out for an article after reading my multi-part series on “High Performance Computing: A Customer’s Perspective”. A big thanks to Derrick for giving me this opportunity.

Last week, Lab49 showed up in force at the Microsoft Financial Developers Conference. One of our founders and managing directors, Daniel Chait, trotted out some powerful Windows Presentation Foundation (WPF) data visualization demos for the financial services industry.

I got a chance to sneak in and out of several sessions, but overall the conference was ho-hum from a developer’s perspective. Not much new, not much technical. Not even much in the way of vendor swag. I mean, really, how many Microsoft-branded over-the-shoulder messenger bags can one person use? Yawn. Praise be for the vast aquifers of coffee and candy-coated apples.

I did learn at least one new thing, though: Platform Symphony now supports distributing tasks only as executables. According to a guy named Rene, a Platform Computing technical representative roaming the audience answering questions, the product used to support distributing tasks as libraries, but that feature has been jettisoned for the sake of simplicity and performance.

It seems rather archaic to me to have such limited choice in designing your distributed applications. The Digipede Network, for example, allows you to distribute executables, libraries, and even in-memory objects. Being locked into wrapping your logic in an executable (even when all you want to distribute is a function call) reminds me of the coding awkwardness of PVM and MPI.
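To make the contrast concrete, here’s a minimal C++ sketch against an entirely hypothetical grid interface (not Symphony’s or Digipede’s actual API). The same one-line pricing function either gets wrapped in a standalone process, with inputs marshalled through argv and the result through stdout, or it travels as a task object that the runtime invokes directly.

```cpp
#include <cstdlib>
#include <iostream>

// The logic we actually want to distribute: a single function call.
double price_option(double spot, double strike) {
    return spot > strike ? spot - strike : 0.0;  // toy intrinsic value
}

// Executable-only model: the function has to be wrapped in a whole process.
// The scheduler launches this binary once per task, marshalling inputs
// through argv and the result through stdout (or a file).
int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: price <spot> <strike>\n";
        return 1;
    }
    std::cout << price_option(std::atof(argv[1]), std::atof(argv[2])) << "\n";
    return 0;
}

// Library/object model: the same logic travels as a task object and the
// grid runtime handles serialization and invocation. ITask is hypothetical.
struct ITask {
    virtual ~ITask() = default;
    virtual void execute() = 0;
};

struct PriceTask : ITask {
    double spot = 0.0, strike = 0.0, result = 0.0;
    void execute() override { result = price_option(spot, strike); }
};
```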

Is Platform Computing old guard or just getting old?

My father has been in direct marketing for years now and in sales for even more years before that. He’s picked up a number of colorful phrases along the way (some so colorful that they can’t be repeated here), but he often shares this fine nugget:

When value exceeds price, a deal is made.

I’ve always found that to be a Zen koan-like phrase, especially given that neither value nor price are necessarily fixed, and you can manipulate either to make a deal. When you can’t budge the price, bump the value. And when real value isn’t elastic, you do as great salesmen do and create perceived value. An Ariel Atom, strictly speaking, is just a car, a mode of transportation of comparable value to a Dodge Neon. Four wheels, brakes, moderate gas mileage. So why does an Ariel Atom trade at four times the price of a Neon? Because, for some, the self-image of open-wheeled racing and thrilling performance has a value. And if that value (plus the real value of owning a car itself minus the premium demanded to avoid destitution) exceeds invoice, somebody’s pulling out a checkbook.

Anyway, I was reading Larry O’Brien’s post, “Map, Everything’s An Object, and Inline”, and I kept thinking about this relationship. The changes in modern hardware, the palpable lean of our tools toward parallelism — these factors are creating a considerable value proposition for pervasive concurrency. Multithreading is suddenly starting to look more “fat free” than “free fat”. Erlang is coming back from the dead. PlayStation 3 consoles are topping the teraflop charts at Folding@Home.

But when a coder introduces threading to a solution, are they creating real value or perceived value? Is it faster? Is it simpler? Is it better? Is it appropriate?

I love functional languages and their neat mapping onto parallel constructs. I love the idea that a program may someday be paired with a virtual machine capable of wise and balanced decisions to parallelize or distribute based on an accurate assessment of real costs and benefits. But I wonder if, as Larry suggests, the intermediate solutions of compiler switches, pragma statements, and metadata attributes might be nothing but enough rope to hang ourselves with.
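To make that worry concrete, here’s a small sketch using OpenMP as a stand-in for the pragma-based approach (my example, not Larry’s). The same pragma that trivially parallelizes an independent loop will also happily compile on an accumulation, where it silently introduces a data race unless you remember the reduction clause.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> x(n, 1.0), y(n, 0.0);

    // The happy case: independent iterations, trivially parallel.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = 2.0 * x[i];

    // The rope: the same pragma on an accumulation compiles and runs, but
    // without the reduction clause, 'sum' would be a silent data race.
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += y[i];

    std::printf("sum = %f\n", sum);
    return 0;
}
```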

One of the more interesting, semi-futurist ideas floated during the morning panel discussion at STREET#GRID 2007 yesterday was that job schedulers would begin to use the hardware monitoring capabilities of modern blade computers to influence task assignments. Kevin Pleiter, Emerging Business Solutions Executive for IBM’s Financial Services Sector, imagined a toolset that would allow job schedulers to take into account whether a particular blade or rack was running too hot, was disk-bound, was drawing too much power, and so on.
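Here’s a back-of-the-envelope C++ sketch of what hardware-aware placement might look like; the telemetry fields, thresholds, and names are invented for illustration and aren’t drawn from any IBM or vendor toolset.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct BladeTelemetry {
    std::string id;
    double temperature_c;   // chassis temperature
    double power_watts;     // current draw
    double disk_queue_len;  // proxy for being disk-bound
};

// Filter out blades that are too hot, too busy on disk, or drawing too
// much power, then prefer the coolest of whatever remains.
const BladeTelemetry* pick_blade(const std::vector<BladeTelemetry>& blades) {
    const BladeTelemetry* best = nullptr;
    for (const auto& b : blades) {
        if (b.temperature_c > 70.0) continue;   // running too hot
        if (b.disk_queue_len > 8.0) continue;   // disk-bound
        if (b.power_watts > 400.0) continue;    // drawing too much power
        if (!best || b.temperature_c < best->temperature_c) best = &b;
    }
    return best;  // nullptr means no blade is currently eligible
}

int main() {
    std::vector<BladeTelemetry> rack = {
        {"blade-01", 62.0, 310.0, 2.0},
        {"blade-02", 74.0, 280.0, 1.0},   // too hot
        {"blade-03", 58.0, 350.0, 12.0},  // disk-bound
    };
    if (const auto* b = pick_blade(rack))
        std::cout << "assign task to " << b->id << "\n";
    return 0;
}
```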


From ACM Queue:

PeakStream Founder and CTO Matthew Papakipos explains how the PeakStream Virtual Machine provides automatic parallelization of programs written in C/C++ so that developers can focus on their application logic — and not the intricate details of parallelizing the application — and ultimately improve the performance of HPC applications when running on multi-core processors.

Matt is a friend of a friend, and I spoke with him several months ago about his company and their premier product, the PeakStream Platform. PeakStream provides enabling technologies that allow developers to more quickly take advantage of general-purpose computation on graphics hardware (also known as GPGPU). Current NVIDIA and ATI graphics cards have incredible power to perform certain types of mathematically intensive calculations in a streaming, data-parallel fashion; however, taking advantage of that power requires deep programmer sophistication. While efforts such as HLSL, Cg, CUDA, and CTM have made GPGPU programming more accessible, a cursory look at the documentation for these technologies proves that it still isn’t anywhere near easy.
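For a sense of why this style of problem maps so well to graphics hardware, here’s the canonical shape of a streamable computation, written as plain C++ rather than any PeakStream or CUDA API. Every output element depends only on its corresponding inputs, so the loop can be fanned out across thousands of GPU threads.

```cpp
#include <cstddef>
#include <vector>

// SAXPY: y[i] = a * x[i] + y[i]. On a GPU, each i becomes (roughly) one
// lightweight thread; on the CPU it is just this loop. Platforms like
// PeakStream aim to let you write the array expression and leave the
// mapping onto hardware to the runtime.
void saxpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<double> x(8, 2.0), y(8, 1.0);
    saxpy(3.0, x, y);  // every element of y is now 7.0
    return 0;
}
```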
