Microsoft CHESS

9 Nov 2009

I was talking with some folks from Microsoft Technical Computing Group last week, and they turned me on to Microsoft CHESS. Extremely cool.

Their description:

CHESS is a tool for finding and reproducing Heisenbugs in concurrent programs. CHESS repeatedly runs a concurrent test ensuring that every run takes a different interleaving. If an interleaving results in an error, CHESS can reproduce the interleaving for improved debugging. CHESS is available for both managed and native programs.

http://research.microsoft.com/en-us/projects/chess/

http://msdn.microsoft.com/en-us/devlabs/cc950526.aspx

http://blogs.msdn.com/chess/

I’ve just confirmed that I will be in Las Vegas for MIX09 this year. If any of you plan to be there as well, drop me a note.

At PDC2008

26 Oct 2008

To everyone attending the Microsoft Professional Developers Conference in LA this year: I’ll be there from tonight through Wednesday midday. I have to fly back to NYC on Wednesday because Lab49 has been invited to participate in the Microsoft PhizzPop Design Challenge in New York, which starts on Thursday.

If you’re around, feel free to say hi. I’ll be hawking some of the work we’ve done for Microsoft specifically for this year’s PDC, including a really cool WPF-based parallel portfolio tracking demo that uses F#, Parallel Extensions, and Microsoft HPC Server 2008 Cluster SOA, as well as two official Microsoft whitepapers for the Microsoft Parallel Computing Initiative (including Taking Parallelism Mainstream, a whitepaper you’ll find on your PDC hard drive).

For PDC, I’ll be keeping an eye on Twitter. See you there!

About three weeks ago, I had the opportunity to sit down with Bill Bain of ScaleOut Software and the two Joes, Joe Cleaver and Joe Rubino, from Microsoft’s Financial Services Industry Evangelism team after I gave my presentation on distributed caches at Microsoft’s 6th Annual Financial Services Developer Conference. The two Joes recorded a podcast of our conversation.

Bill, Joe, and Joe, thanks for the opportunity to talk with you guys.

Dataflow is about creating a software architecture that models a problem on the functional relationship between variables rather than on the sequence of steps required to update those variables. It’s about shifting control of evaluation away from code you write toward code written by someone else. It’s about changing the timing of recalculation from recalculate now to recalculate when something has changed. Sure, it’s a distinction that may have more to do with emphasis and point of view than with paradigm, but it can be a liberating distinction for certain problems in financial modeling.

If you work in finance, chances are you may already be expert in today’s preeminent dataflow modeling language: Microsoft Excel. Excel is the undisputed workhorse of financial applications, taught in every business school, run on every desk, wired into the infrastructure of nearly every bank, fund, or exchange in existence. The reason for Excel’s singularity in the black hole of finance is its ability to emancipate modeling from code (and thus developers) and empower analysts and business types alike to create models as interactive documents. Make no mistake — writing workbooks is still very much software development. But Excel’s emphasis on data rather than code, relationships rather than instructions, is something that fits with the work this industry does and the people that do it.

Briefly, when you model in Excel, you specify a cell’s output by filling it with either a constant value or a function. Functions are written in a lightweight language that allows function arguments to be either constant values or references to another cell’s output. In the typical workbook, cells may reference cells that in turn reference other cells, and so on, resulting in an arbitrarily sophisticated model that can span multiple worksheets and workbooks. The point, though, is that rather than specifying your model as a sequence of steps that get executed when you say go, you describe your model’s core data relationships to Excel, and Excel figures out how and when to evaluate them.
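
To make that concrete, here’s a toy sketch in C# (since .NET is where this series is headed) of the cell model just described. The names are mine, not Excel’s: a cell holds either a constant or a formula over other cells, and asking for its value walks those relationships rather than executing a fixed script.

using System;

// A toy, illustrative sketch of the cell model described above. A Cell holds
// either a constant or a formula over other cells, and reading its Value walks
// the dependency graph. (Real Excel also caches results and recalculates only
// the cells that are dirty; that bookkeeping is omitted here.)
class Cell
{
    readonly Func<double> formula;

    public Cell(double constant)      { formula = () => constant; }
    public Cell(Func<double> formula) { this.formula = formula; }

    public double Value { get { return formula(); } }
}

static class Workbook
{
    static void Main()
    {
        var a1 = new Cell(100.0);                            // =100
        var a2 = new Cell(0.05);                             // =0.05
        var b1 = new Cell(() => a1.Value * (1 + a2.Value));  // =A1*(1+A2)

        Console.WriteLine(b1.Value);                         // prints 105
    }
}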

Example: An Equities Market Simulation

Let’s say that we are writing a simulation of an equities (stock) market. Such a simulation could be used for testing a trading strategy or studying economic scenarios. The market comprises many equities, and each equity has many properties, some that change slowly over time (such as ticker symbol or inception date) and some that change frequently (such as last price or volume). Some properties may be functions of other properties of the same equity (such as high, low, or closing price), while others may be functions of properties on other equities (as with haircuts, derivatives, or baskets).

As a starting point, we introduce a simulation clock. Each time the clock advances, the price of all equities gets updated. To update prices, we use a random walk driven by initial conditions (such as initial price S0, drift r, and volatility σ), a normally distributed random variable z, and a recurrence equation over n intervals of t years: 

S_n = S_{n-1} \cdot \exp\left( r t - 0.5 \sigma^2 t + z \sigma \sqrt{t} \right)

Note: This equation produces a lognormal random walk [1,2], which means that instead of getting the next price by adding small random price changes to the previous price, we’re applying small random percentage changes to the previous price. This makes sense for things like prices since a) they can’t be negative, and b) the size of any price change is proportional to the magnitude of the current price. In other words, penny stocks tend to move up and down by fractions of a penny, while stocks trading at much higher prices tend to move up and down in dollars.
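
For reference, here’s a quick-and-dirty sketch of the recurrence in C#. The class and method names and the parameter values are illustrative, with z drawn from a standard normal via the Box-Muller transform:

using System;

// A minimal sketch of the recurrence above: each step multiplies the previous
// price by exp(r*t - 0.5*sigma^2*t + z*sigma*sqrt(t)), with z ~ N(0, 1).
static class RandomWalk
{
    static readonly Random Rng = new Random();

    // Box-Muller: turn two uniform samples into one standard normal draw.
    static double NextStandardNormal()
    {
        double u1 = 1.0 - Rng.NextDouble();   // in (0, 1], avoids log(0)
        double u2 = Rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }

    static double NextPrice(double previous, double r, double sigma, double t)
    {
        double z = NextStandardNormal();
        return previous * Math.Exp(r * t - 0.5 * sigma * sigma * t + z * sigma * Math.Sqrt(t));
    }

    static void Main()
    {
        double s = 100.0;              // S0: initial price
        double r = 0.05;               // drift
        double sigma = 0.20;           // volatility
        double t = 1.0 / 252;          // interval: one trading day, in years

        for (int n = 1; n <= 5; n++)
        {
            s = NextPrice(s, r, sigma, t);
            Console.WriteLine("S_{0} = {1:F2}", n, s);
        }
    }
}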

In Excel, you could model this market by plopping the value of the clock into a cell, setting up other cells to contain the initial conditions, and then filling a slew of other cells with formulas that reference the clock and initial-condition cells and calculate a new price for each virtual equity using the equation above. And then hit F9.

But how would you write this in code? Would you just update the clock and then exhaustively recalculate all of the prices? If you had to incorporate equity derivatives or baskets, would your architecture break? How would you allow non-programming end-users to declaratively design their own simulation markets and the instruments within?

Recently, one of our financial services clients at Lab49 had been trying to solve a similar problem in .NET, and I had been suggesting to them that the problem is analogous to how Microsoft Windows Presentation Foundation (WPF) handles the flow of data from controller to model to view. Dependency properties, which form the basis of data binding in WPF applications, implement a dataflow model similar to Excel, and what I had in mind at first was a solution inspired by WPF. But the more I discussed this analogy with the client, the more I realized that we didn’t just have to use WPF as inspiration; we could actually use WPF.
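
To give a flavor of what I mean, here’s a minimal console-style sketch, not the client’s implementation; the Equity and Basket names and the fixed 100-share position are purely illustrative, and it assumes references to WindowsBase, PresentationCore, and PresentationFramework. A basket’s value is bound to an equity’s last price, so pushing a new price into the equity re-evaluates the basket automatically, the same recalculate-on-change behavior an Excel cell gets from referencing another cell.

using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

// Illustrative sketch: WPF dependency properties and data binding used as a
// dataflow engine, with no UI involved.
class Equity : DependencyObject
{
    public static readonly DependencyProperty LastProperty =
        DependencyProperty.Register("Last", typeof(double), typeof(Equity),
            new PropertyMetadata(0.0));

    public double Last
    {
        get { return (double)GetValue(LastProperty); }
        set { SetValue(LastProperty, value); }
    }
}

class Basket : DependencyObject
{
    public static readonly DependencyProperty ValueProperty =
        DependencyProperty.Register("Value", typeof(double), typeof(Basket),
            new PropertyMetadata(0.0,
                (d, e) => Console.WriteLine("Basket revalued: {0}", e.NewValue)));

    public double Value
    {
        get { return (double)GetValue(ValueProperty); }
        set { SetValue(ValueProperty, value); }
    }
}

// Converts the underlying price into a basket value (here, a fixed 100 shares).
class PositionConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (double)value * 100;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}

static class Program
{
    [STAThread]
    static void Main()
    {
        var ibm = new Equity();
        var basket = new Basket();

        // Declare the relationship once; WPF keeps it up to date from then on.
        BindingOperations.SetBinding(basket, Basket.ValueProperty,
            new Binding("Last") { Source = ibm, Converter = new PositionConverter() });

        ibm.Last = 100.25;   // prints "Basket revalued: 10025"
        ibm.Last = 100.50;   // prints "Basket revalued: 10050"
    }
}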

In this series, I’ll dive further into creating the equities market simulation and look at how to use WPF data binding to create a dataflow implementation. Note that there are several trade-offs to this approach, and, under the category of just because you can doesn’t mean you should, we’ll evaluate whether or not this method has legs.

[to be continued]

It seems like just yesterday that 10Mbit 10BASE-T Ethernet networks were the norm, and the workstation wonks I worked with years ago at US Navy CINCPACFLT in Pearl Harbor, Hawaii, jockeyed to have high-speed ATM fiber run to their offices. Sure, this was the age when dual bonded ISDN lines represented the state of the art in home Internet connectivity, but who really needed that much bandwidth? What did we have to transfer? Email? Usenet posts? Gopher pages?

Then, slowly over the course of the next 5-10 years, network vendors upgraded their parts at negligible marginal cost, and 100Mbit began to pervade enterprise networks of all sizes, democratizing fast file transfers and streaming multimedia. 100Mbit seemed pretty fast. The Next Big Thing, Gigabit Ethernet, seemed a pipe so big it couldn’t be saturated and certainly not worth the exorbitant prices the hardware was going for at the time.

This week, I’ve been helping out one of our Lab49 teams working at a big investment bank run performance tests on a large distributed cache deployed on a 96-node farm, all connected over InfiniBand. Interestingly, when we ran these brutal performance tests on a homegrown Gigabit blade setup with eight nodes, it was pretty trivial to saturate the network while the CPU idled along at about 3% utilization. On the 96-node InfiniBand system, though, the same tests pegged the CPUs while the network drank mint juleps on the veranda during the first warm day of spring.

The sad thing about this situation is that Gigabit is only now getting enough uptake in enterprise NOCs to make it worthwhile to upgrade the NICs out at the clients. The last mile, just like with 100Mbit, is taking a while. Despite the fact that Gigabit just hasn’t been with us that long, it’s pretty clear to me that, with the snowballing of HPC, CEP, real-time messaging, and P2P network services (not to mention HD audio and video), Gigabit will be led out to pasture before it ever really gets a chance to race.

We may all still be using Gigabit at the desktop for years to come (or not, if 802.11n and its offspring steal the show), but between InfiniBand and all the activity we’ve been seeing in 10Gbit (as was evident at the SC07 conference in Reno, NV last November), Gigabit just isn’t “it” anymore. Just like pet rocks were in the late ’70s, Gigabit is a temporary salve for a social ill that ultimately requires a more vital solution. I may not yet know clearly what that solution is, but I know it ain’t Gigabit.

As mentioned in a previous post, I spent two days last week at the 6th Annual Microsoft Financial Services Developer Conference, and I have to say that it was a great event.

On Wednesday, I gave my talk on distributed caches:

The room was packed, folks were asking great questions, and the feedback I got was very positive. For folks who are already knee-deep in high-performance computing and distributed caches, the presentation may not have offered much that they didn’t already know (except perhaps for the later sections on the performance tests we ran in the lab and on advanced techniques like object segmentation). But given that Microsoft had put a clear emphasis on HPC at this conference and that many developers in attendance were relatively new to the subject, the presentation seemed to strike a fair balance between background and practice.

The funny thing is that when I set about writing this presentation, my first draft had over eighty slides for a sixty-minute talk. Though I’m a fast talker, I’m not able to retire content-rich slides at a 45-second pace and still keep an audience. Getting acquainted with distributed caches from a developer’s point of view involves a lot of content, but due to the constraints of the presentation, a lot had to end up on the cutting-room floor. Ultimately, I trimmed the deck to fifty-one slides (including such dross as title slides and section headers), but I got through the entire deck with time left over to take questions. In the minutes before my talk, I was concerned that the most interesting, most crunchy, most impressive material had been lost during revision. But by the time the talk was over, I realized that what was left after revision was what this audience really wanted to hear.

During the talk I took two straw polls. The first question was: how many people in the audience had practical exposure to distributed caches? I would guess that about 15% raised their hands. The second set of questions revolved around which distributed cache products people were familiar with and/or had actually used. Interestingly, GigaSpaces was the runaway winner in both awareness and actual exposure, with about 10% raising their hands. ScaleOut StateServer was a strong second. GemStone GemFire, Oracle Coherence, and Alachisoft NCache clustered far behind. IBM ObjectGrid drew crickets.

Anyway, after my presentation on Wednesday, I caught up with friends and colleagues from our partner companies and sampled the “Lab49 Red”, a drink served at the Wednesday cocktail reception sponsored by Lab49 in our capacity as Platinum Sponsor.

On Thursday, I sat in on a couple of sessions. The first, hosted by Rich Ciapala from the Microsoft HPC Server 2008 team, demonstrated the Microsoft HPC++ CompFin Lab, a framework for computational finance that Lab49 has been building for Microsoft. It’s a great piece of work, and it deserves a blog post of its own. The second session I went to was Stephen Toub’s presentation on the Task Parallel Library, PLINQ, and related technologies from the Parallel Computing Group at Microsoft. Though I had seen most of the content before, I was taken aback by the 24-core (!) test machine they got from Intel to run their demos on. That’s four sockets, six cores per socket. The Task Manager Performance tab looked awesome.

The rest of the time I talked to a few reporters and grilled our partners – Digipede, Microsoft, Platform, ScaleOut, and others – for the inside skinny on what’s around the corner for their products.

Overall, I have to say that I really enjoyed this conference. This is my second year in attendance, and this year was much more interesting than last. There are few other conferences so focused on giving developers in financial services concrete and practical information on how to become more adept at using Microsoft technology to solve their particular brand of problems. Microsoft’s product teams were in heavy attendance, and the vendors present seemed both relevant and engaged.

I look forward to next year!
