About three weeks ago, I had the opportunity to sit down with Bill Bain of ScaleOut Software and the two Joes, Joe Cleaver and Joe Rubino, from Microsoft’s Financial Services Industry Evangelism team after I gave my presentation on distributed caches at Microsoft’s 6th Annual Financial Services Developer Conference. The two Joes recorded a podcast of our conversation.

Bill, Joe, and Joe, thanks for the opportunity to talk with you guys.

Dataflow is about creating a software architecture that models a problem on the functional relationship between variables rather than on the sequence of steps required to update those variables. It’s about shifting control of evaluation away from code you write toward code written by someone else. It’s about changing the timing of recalculation from recalculate now to recalculate when something has changed. Sure, it’s a distinction that may have more to do with emphasis and point of view than with paradigm, but it can be a liberating distinction for certain problems in financial modeling.

If you work in finance, chances are you’re already an expert in today’s preeminent dataflow modeling language: Microsoft Excel. Excel is the undisputed workhorse of financial applications, taught in every business school, run on every desk, wired into the infrastructure of nearly every bank, fund, or exchange in existence. The reason for Excel’s singularity in the black hole of finance is its ability to emancipate modeling from code (and thus developers) and to empower analysts and business types alike to create models as interactive documents. Make no mistake — writing workbooks is still very much software development. But Excel’s emphasis on data rather than code, on relationships rather than instructions, fits the work this industry does and the people who do it.

Briefly, when you model in Excel, you specify a cell’s output by filling it with either a constant value or a function. Functions are written in a lightweight language that allows function arguments to be either constant values or references to another cell’s output. In the typical workbook, cells may reference cells that in turn reference other cells, and so on, resulting in an arbitrarily sophisticated model that can span multiple worksheets and workbooks. The point, though, is that rather than specifying your model as a sequence of steps that get executed when you say go, you describe your model’s core data relationships to Excel, and Excel figures out how and when it should be executed.
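To make the contrast concrete, here is a minimal sketch of that dataflow idea: cells holding either constants or formulas, where changing one cell triggers recalculation of everything that depends on it. Python is used purely for illustration, and every class and method name here is invented for this sketch.

```python
# Minimal illustrative dataflow "cells" in the spirit of a spreadsheet.
# All names are invented for this sketch.

class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula = formula        # function of the input cells' values
        self.inputs = list(inputs)    # cells this cell depends on
        self.dependents = []          # cells that depend on this cell
        for cell in self.inputs:
            cell.dependents.append(self)
        self._value = value
        if formula is not None:
            self._recalculate()

    @property
    def value(self):
        return self._value

    def set(self, value):
        # Like typing a constant into a cell: update it, then let the
        # engine decide what else must be recalculated.
        self._value = value
        for dep in self.dependents:
            dep._recalculate()

    def _recalculate(self):
        self._value = self.formula(*(c.value for c in self.inputs))
        for dep in self.dependents:
            dep._recalculate()

# Usage: price and quantity cells feed a notional cell.
price = Cell(value=10.0)
qty = Cell(value=3.0)
notional = Cell(formula=lambda p, q: p * q, inputs=(price, qty))
print(notional.value)   # 30.0
price.set(12.0)         # only now does notional recalculate
print(notional.value)   # 36.0
```

The caller never says "recalculate notional"; it only declares the relationship, and the engine decides when evaluation happens — the same shift in control that Excel makes.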

Example: An Equities Market Simulation

Let’s say that we are writing a simulation for an equities (stock) market. Such a simulation could be used for testing a trading strategy or studying economic scenarios. The market comprises many equities, and each equity has many properties, some that change slowly over time (such as ticker symbol or inception date), and some that change frequently (such as last price or volume). Some properties may be functions of other properties of the same equity (such as high, low, or closing price), while others may be functions of properties on other equities (such as with haircuts, derivatives, or baskets).

As a starting point, we introduce a simulation clock. Each time the clock advances, the price of all equities gets updated. To update prices, we use a random walk driven by initial conditions (such as initial price S0, drift r, and volatility σ), a normally distributed random variable z, and a recurrence equation over n intervals of t years: 

S_n = S_{n−1} · exp(r·t − 0.5·σ²·t + z·σ·√t)

Note: This equation provides a lognormal random walk [1,2], which means that instead of getting the next price by adding small random price changes to the previous price, we’re multiplying small random percentages against the previous price. This makes sense for things like prices since a) they can’t be negative, and b) the size of any price change is proportional to the magnitude of the current price. In other words, penny stocks tend to move up and down by fractions of a penny, while stocks trading at much higher prices tend to move up and down in dollars.
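In code, the recurrence above is a short loop. Here is an illustrative sketch for a single simulated equity; the parameter names (S0, r, sigma, t) follow the equation, and everything else is invented for this example:

```python
# Illustrative lognormal random walk for one simulated equity.
import math
import random

def simulate_prices(S0, r, sigma, t, n, seed=None):
    """Return n+1 prices following
    S_n = S_{n-1} * exp(r*t - 0.5*sigma^2*t + z*sigma*sqrt(t))."""
    rng = random.Random(seed)
    prices = [S0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)  # standard normal draw
        growth = math.exp(r * t - 0.5 * sigma ** 2 * t + z * sigma * math.sqrt(t))
        prices.append(prices[-1] * growth)
    return prices

# One year of daily steps: S0 = 100, 5% drift, 20% volatility.
path = simulate_prices(S0=100.0, r=0.05, sigma=0.2, t=1 / 252, n=252, seed=42)
# Prices stay strictly positive, as a lognormal walk guarantees.
assert all(p > 0 for p in path)
```

Note that this loop recalculates every price on every tick regardless of what changed, which is exactly the sequential style the dataflow approach is meant to replace.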

In Excel, you could model this market by plopping the value of the clock into a cell, setting up other cells to contain initial conditions, and then have a slew of other cells initialized with functions that reference the clock and initial conditions cells and that calculate a new price using the above equation for each virtual equity. And then hit F9.

But how would you write this in code? Would you just update the clock and then exhaustively recalculate all of the prices? If you had to incorporate equity derivatives or baskets, would your architecture break? How would you allow non-programming end-users to declaratively design their own simulation markets and the instruments within?

Recently, one of our financial services clients at Lab49 has been trying to solve a similar problem in .NET, and I had been suggesting to them that the problem is analogous to how Microsoft Windows Presentation Foundation (WPF) handles the flow of data from controller to model to view. Dependency properties, which form the basis of data binding in WPF applications, implement a dataflow model similar to Excel, and what I had in mind at first was a solution inspired by WPF. But the more I discussed this analogy with the client, the more I realized that we didn’t just have to use WPF as inspiration; we could actually use WPF.

In this series, I’ll dive further into creating the equities market simulation and look at how to use WPF data binding to create a dataflow implementation. Note that there are several considerations to this approach, and, under the category of just because you can doesn’t mean you should, we’ll evaluate whether or not this method has legs.

[to be continued]

The Marc Jacobs Utilization Meter has been pegged for at least two weeks now on a combination of client work, internal projects, recruiting, and writing (hence the appearance of my blog having fallen down a well). It’s great to be busy, but I hate seeing the blog go stale.

In any event, I had an article published in GRIDtoday this morning entitled, “Grid in Financial Services: Past, Present, and Future”. Derrick Harris, the editor of GRIDtoday, reached out for an article after reading my multi-part series on “High Performance Computing: A Customer’s Perspective”. A big thanks to Derrick for giving me this opportunity.

Excerpted from a paper I delivered on January 16, 2007 at the Microsoft High-Performance Computing in Financial Services event in New York.

In Closing

It’s a very exciting time to be a proponent of high-performance computing in finance. Right now, it’s still a rather rugged task, and evangelizing such rough solutions can sometimes leave sour impressions, but it’s getting easier all the time. With all the new products and vendors entering the market, I’m convinced we’ll be scaling out with ease in the coming years. In the meantime, though, we have to be vigilant in ensuring that vendors understand our business and our developers, and that they bring to market the tools and guidance that allow us to keep prioritizing business first and technology second.


Help Me Help You Help Me

To drive home the point, our trading and portfolio generation systems at Bridgewater have been parallelized and distributed for some time, based on a series of proprietary technologies that a) were not that great, b) lacked many features, and c) probably shouldn’t have been written in the first place. Along the way, we used DCOM, COM+, and .NET Remoting. We wrote custom job schedulers and custom deployment processes. We leveraged virtualization, disk imaging, multicast networks, message queues, even Microsoft Application Center. Each time, we managed to stack up the available Lego pieces and make a nice little tower out of it. But, typical of enterprise development projects that supply infrastructure rather than specific line-of-business value, they lacked for amenities. The APIs were never sufficiently developed or documented, the monitoring and administration tools often required black art skills, and the user interfaces, if present at all, bordered on sadistic.

Each time we revisited this situation, we knew that we shouldn’t be writing this stuff. We shouldn’t have had to. We knew that it wasn’t our core expertise and that we would never devote enough developer resources to give it professional polish. In reality, there are always just too many real projects to work on. The problem was that, until recently, there just weren’t any off-the-shelf packages for developing distributed applications on the Microsoft platform. For various reasons, we weren’t going to start invading our IT infrastructure with Linux just to use half-baked open-source solutions. So we rolled our own. Again. And again.

Then, a little over a year and a half ago, I read a news brief about the Digipede Network in the now-defunct Software Developer Magazine. It advertised a commercial grid computing solution built entirely on Microsoft .NET and running on Microsoft Windows. We downloaded the eval and read the APIs. After a brief meeting among lead engineers, we decided to do a test port, just a dip of the toe to see how much effort it would take to switch to a commercial solution.

Let me tell you. The whole port to Digipede, not just the acid test but the whole port, took one developer (me!) exactly three days from start to finish. After just an additional two weeks of procurement, deployment, and testing, we went live. And it has been working great.

That’s the kind of help we need. We need books and articles aimed at the broader market, not at the ACM or IEEE, that show just how easy it can be. We need our vendors to wrap up the hard stuff and leave us the samples, tools, and guidance so that we can just mark up our business logic and plug it into a high-performance computing infrastructure in a week.

Introducing the Windows Execution Foundation

Microsoft .NET Framework 3.0, despite the unfortunate naming confusion, brings with it a tantalizing mix of technologies that are just waiting to be composed into a high-performance computing framework for .NET. Building on the power of the Windows Communication Foundation and the Windows Workflow Foundation, we could solve the four big technical vacuums in financial high-performance computing:

  1. Job Deployment
  2. Job Security
  3. Pool Management
  4. Scalable I/O

Such a framework, let’s call it a Windows Execution Foundation, would have several features:

  1. Declarative Parallelism
  2. Distributed Concurrency and Coordination Constructs
  3. Distributed Shared Memory and Object Caches
  4. Lightweight File Swarming
  5. Lightweight Message Bus
  6. On-demand Pool Construction and Node Addressing

Armed with this kind of technology, the financial industry could focus on the business, not the technology. We could achieve high-performance computing without having to understand every relevant implementation detail. We could wrestle back control of our own scalability story from our IT departments and solve our scalability problems with software.



Know Thy Developer

To make us customers and help us drive high-performance computing through our infrastructure, you have to understand that our engineers prioritize business first and technology second. It’s a mandate. The technology serves the business goal, not the reverse. We attract and retain brilliant developer talent. We shower them with education and learning opportunities. At the end of the day, though, we are grooming them to be generalists, not specialists. We care more that they understand their menu of options and know how to choose a solution appropriate to the problem than that they become experts on the inner workings of any one technology. If we demand any specialized knowledge at all, it’s usually of finance and economics, not distributed computing, algorithms, program analysis, or detailed performance optimization. I know they can learn those things, but in practice they can’t be their emphasis. Instead, they need to spend that mindshare ensuring that we’re doing right by our customers every step of the way.

Yet this emphasis gives rise to some rather pathological situations. Last week I led a code review of an important class in our data infrastructure. It was about 300 lines long, and it was written by one of our most senior and productive engineers. The class takes in a large matrix of Excel-like data extraction and manipulation formulas, evaluates each formula, and passes back the same matrix with each formula overwritten by its result. It’s used widely throughout the company by both end-users and automated processes.

In his attempt to improve the performance of this class on multiprocessor hosts, the engineer decided to parallelize evaluation of the formulas. He constructed a list of threads, appended onto it one thread per matrix element, and then used a semaphore to ensure that only n threads were running at a time. In other words, if you passed in a matrix with 10,000 cells, he’d create 10,000 threads, only eight of which (by default) would be runnable. He had locks in all the right places, test cases, a strong public interface, and copious comments. It even worked. But his poorly thought-out design would bring a server-class system to its knees in seconds, and he didn’t know why. After I showed him how to rewrite it in a more conventional producer/consumer pattern with a fixed number of threads, calls to this class that used to take ten minutes now take less than ten seconds apiece.
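The conventional shape of that fix — a fixed pool of workers draining a shared queue of cells, rather than one thread per cell — can be sketched as follows. Python is used for illustration, and all names are invented for this sketch rather than taken from the actual system:

```python
# Fixed-size worker-pool evaluation of a matrix of "formulas",
# here represented as zero-argument callables for illustration.
import queue
import threading

def evaluate_matrix(formulas, num_workers=8):
    """Evaluate each callable in the matrix using a fixed pool of
    worker threads instead of one thread per cell."""
    tasks = queue.Queue()
    results = [[None] * len(row) for row in formulas]
    # Producer: enqueue every cell up front.
    for i, row in enumerate(formulas):
        for j, formula in enumerate(row):
            tasks.put((i, j, formula))

    # Consumers: each worker pulls cells until the queue is empty.
    def worker():
        while True:
            try:
                i, j, formula = tasks.get_nowait()
            except queue.Empty:
                return
            results[i][j] = formula()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Usage: a 100x100 matrix means 10,000 evaluations but only 8 threads.
matrix = [[(lambda a=i, b=j: a * b) for j in range(100)] for i in range(100)]
out = evaluate_matrix(matrix)
assert out[3][7] == 21
```

The thread count now scales with the hardware, not with the input, so a 10,000-cell matrix no longer means 10,000 thread stacks and a scheduler meltdown.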

Now this guy is smart. He’s a great coder. He is excellent at picking appropriate technology for a given problem. In fact, he even designed and implemented most of the data infrastructure this class was part of. But when it comes to threading, he’s a rank beginner. He just didn’t know of a better way to do it.

That is the paradox of our engineers. They’re wicked smart. They are as capable as anyone of pulling off a MacGyver with duct tape and baling wire and making a workable system out of non-integrated pieces. But our developers need frameworks, patterns, and comprehensive tools for parallelizing and distributing their business logic. Without them, they’ll start making it up on their own. We all know that’s not where they should be placing their efforts. With all the other things they need to do, they won’t do it with excellence, and they won’t think through all the things they don’t know.



What’s Hard, What Isn’t

Part of the problem is that there is an unclear separation between what is hard and what isn’t, and the information out there isn’t helping much at all. Implementing MPI or distributed synchronization objects or job scheduling algorithms is reasonably hard and should probably be left to experts. But naively distributing a command-line executable or a method on a serializable class is cake.

Currently, it’s too easy to get distracted by the watchworks behind a high-performance computing solution. Just as you don’t need to solder a north bridge or lay out a motherboard PCB to write a blog entry, you don’t need to drive yourself mad generating different portfolios on different machines. What you need is guidance: guidance to know which problems are proudly parallel, which units of parallelization are appropriate, which data should be shared versus copied, which configurations create bottlenecks, how to get your code and data out to the compute nodes, and when to keep things simple versus when to roll up your sleeves and get dirty. In many ways, distributed computing is simpler than multithreading, since you’ve got better insulation between your processes and have to be more explicit about moving state around.

At Bridgewater, we’ve actually created a number of intern projects lately out of distributing existing .NET systems using a product called the Digipede Network from Digipede Technologies. I wouldn’t trust a single one of these interns to write high-quality multithreaded code or design caching strategies or implement distributed matrix multiplication using MPI, but on the other hand they’re able to roll out incredible distributed applications that work great with less than a week’s exposure to Digipede. They can do this because the problems are well-suited to the solution, the product is appropriately targeted to our company’s platform, and the appropriate samples, tools, and guidance exist to get the job done.

A Horrible Clang

If anything, that’s the reason why high-performance computing has such a bad rap. Until recently, high-performance computing meant Unix and clusters and fancy interconnects, all allied with a masochistic appreciation for open source, thesis projects, and outdated PostScript documentation. The samples, tools, and guidance we’ve inherited have not been aimed at us, the predominant population of enterprise developers on Windows shifting wholesale from COM to .NET and from VB to C#. MPI and OpenMP don’t target this audience. They target the hardcore C++ set. As much as I personally love C++ (almost as much as Python), it’s counterproductive for me to introduce it to my organization just to take advantage of vectorizing compilers, OpenMP, and MPI. I’d sooner settle for NullReferenceExceptions and reference semantics than GPFs and copy constructors any day of the week. Products like Digipede are a step in the right direction, but overall the message of high-performance computing on Windows is muddled, aimed at a narrow market that may not be interested.

At Ellington, my previous firm, which specialized in mortgage-backed securities, we had a 256-node high-performance cluster built on Linux, GCC, and MPI. It would be a plum for Compute Cluster Server. Yet I can’t think of one reason why I, as a principal software engineer, would recommend it to them. There is no point. They have too much invested in their current infrastructure, and there aren’t enough clear-cut savings and advantages to warrant the costs and resultant risks. Perhaps if they were starting fresh or integrating some third-party analytics package that only offered support for C++ on Windows, it would make sense. But that isn’t the case. Target their nascent .NET trading desk analytics with something other than MPI, though, and maybe you’ve got a customer.

The Excel Services for SharePoint 2007 story is another mixed bag from a high-performance computing perspective. It’s a fabulous product for centrally sharing and managing workbooks at the enterprise level. I can guarantee that it will play a major part in Bridgewater’s Excel-heavy infrastructure. However, from the perspective of integrating your quantitative analysts into your engineering process, it’s a miss.

Another Microsoft product, Microsoft Expression Blend (formerly Expression Interactive Designer or “Sparkle”), demonstrates a great way to directly integrate non-engineering contributors (such as illustrators and UI designers) into the Microsoft Visual Studio 2005 development process. The project artifacts they create are full-fledged members of the solution. Engineers and illustrators work in parallel, and the solution is constantly updated.

We need the analog for our analysts. They need to work in their development environment of choice, Microsoft Excel, and we need to have their work immediately accessible as compiled libraries. The UI is irrelevant; it’s the math and models we want. The UI is just the vehicle for our analysts to develop and test their methods. I don’t want my engineers shoehorning distributed computing code into an analyst’s spreadsheet. I want analysts to compile their spreadsheets and have our engineers reference them as class libraries that can be inserted into our broader high-performance computing infrastructure.

Call it Microsoft Visual Excel.

Imagine if an analyst could declare one or more worksheets as a class and highlight particular cells as class properties or as function inputs and return values, with method bodies filled in by Excel worksheet functions and by calls to methods or macros written in the .NET programming language of your choice. Imagine a PowerShell-like metaphor, where everything is an object. And now imagine that you can compile the whole thing into an assembly that can be directly referenced by a .NET project. That would be a better building block for our high-performance computing applications than Excel Services, as it better addresses our engineers and our engineering process.