Microsoft CHESS

9 Nov 2009

I was talking with some folks from the Microsoft Technical Computing Group last week, and they turned me on to Microsoft CHESS. Extremely cool.

Their description:

CHESS is a tool for finding and reproducing Heisenbugs in concurrent programs. CHESS repeatedly runs a concurrent test ensuring that every run takes a different interleaving. If an interleaving results in an error, CHESS can reproduce the interleaving for improved debugging. CHESS is available for both managed and native programs.
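The core idea (systematically running a concurrent test under every possible thread interleaving and checking an invariant) can be sketched in a few lines of Python. This is not CHESS itself, which instruments real managed and native Windows programs; it is just a toy model of the classic "lost update" race that this style of tool is designed to catch, and all names below are made up for illustration:

```python
from itertools import permutations

# Two threads each perform a non-atomic increment on a shared
# counter: read the current value, then write back value + 1.
THREADS = ("A", "B")

def interleavings():
    """Yield every schedule of the threads' read/write steps that
    preserves each thread's own read-before-write program order."""
    events = [(t, s) for t in THREADS for s in ("read", "write")]
    for order in permutations(events):
        if all(order.index((t, "read")) < order.index((t, "write"))
               for t in THREADS):
            yield order

def run(schedule):
    """Deterministically execute one interleaving; return the
    final value of the shared counter."""
    counter = 0
    local = {}
    for thread, step in schedule:
        if step == "read":
            local[thread] = counter
        else:
            counter = local[thread] + 1
    return counter

# Explore every interleaving, CHESS-style, and collect the ones
# that violate the invariant (two increments should yield 2).
buggy = [s for s in interleavings() if run(s) != 2]
```

Of the six valid schedules, four lose an update because both threads read the counter before either writes it back. A stress test might hit one of those schedules once in a million runs; enumerating schedules finds them every time, and the failing schedule doubles as a reproduction recipe.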

http://research.microsoft.com/en-us/projects/chess/

http://msdn.microsoft.com/en-us/devlabs/cc950526.aspx

http://blogs.msdn.com/chess/

I’ve just confirmed that I will be in Las Vegas for MIX09 this year. If any of you plan to be there as well, drop me a note.

At PDC2008

26 Oct 2008

To everyone attending the Microsoft Professional Developers Conference this year in LA: I’ll be there from tonight until mid-day Wednesday. I have to fly back to NYC on Wednesday because Lab49 has been invited to participate in the Microsoft PhizzPop Design Challenge in New York, which starts on Thursday.

If you’re around, feel free to say hi. I’ll be hawking some of the work we’ve done for Microsoft specifically for this year’s PDC, including a really cool WPF-based parallel portfolio-tracking demo that uses F#, Parallel Extensions, and Microsoft HPC Server 2008 Cluster SOA, as well as two official Microsoft whitepapers for the Microsoft Parallel Computing Initiative (including Taking Parallelism Mainstream, a whitepaper you’ll find on your PDC hard drive).

For PDC, I’ll be keeping an eye on Twitter. See you there!

As mentioned in a previous post, I spent two days last week at the 6th Annual Microsoft Financial Services Developer Conference, and I have to say that it was a great event.

On Wednesday, I gave my talk on distributed caches:

The room was packed, folks were asking great questions, and the feedback I got was very positive. For folks who are already knee-deep in high-performance computing and distributed caches, the presentation may not offer much that isn’t already familiar (except perhaps the late sections on the performance tests we ran in the lab and on advanced techniques like object segmentation). But given that Microsoft had given this conference a clear emphasis on HPC and that many developers in attendance were relatively new to the subject, the presentation seemed to strike a fair balance between background and practice.

The funny thing is that when I set about writing this presentation, my first draft had over eighty slides for a sixty-minute talk. Though I’m a fast talker, I’m not able to retire content-rich slides at a 45-second pace and still keep an audience. Getting acquainted with distributed caches from a developer’s point of view involves a lot of content, but due to the constraints of the presentation, a lot had to end up on the cutting-room floor. Ultimately, I trimmed the deck to fifty-one slides (including such dross as title slides and section headers), and I got through the entire deck with time left to take questions. In the minutes before my talk, I was concerned that the most interesting, most crunchy, most impressive material had been lost during revision. But by the time the talk was over, I realized that what was left after revision was exactly what this audience wanted to hear.

During the talk I took two straw polls. The first question was: how many people in the audience had practical exposure to distributed caches? I would guess that about 15% raised their hands. The second set of questions revolved around which distributed cache products people were familiar with and/or had actually used. Interestingly, GigaSpaces was the runaway winner in both awareness and actual exposure, with about 10% raising their hands. ScaleOut StateServer was a strong second. GemStone GemFire, Oracle Coherence, and Alachisoft NCache clustered far behind. IBM ObjectGrid drew crickets.

Anyway, after my presentation on Wednesday, I caught up with friends and colleagues from our partner companies and sampled the “Lab49 Red”, a cocktail served at the Wednesday reception sponsored by Lab49 in our capacity as Platinum Sponsor.

On Thursday, I sat in on a couple of sessions. The first, hosted by Rich Ciapala from the Microsoft HPC Server 2008 team, demonstrated the Microsoft HPC++ CompFin Lab, a framework for computational finance that Lab49 has been building for Microsoft. It’s a great piece of work, and it deserves a blog post of its own. The second session I went to was Stephen Toub’s presentation on the Task Parallel Library, PLINQ, and related technologies from the Parallel Computing Group at Microsoft. Though I had seen most of the content before, I was taken aback by the 24-core (!) test machine they got from Intel to run their demos on. That’s four sockets, six cores per socket. The Task Manager Performance tab looked awesome.

The rest of the time I talked to a few reporters and grilled our partners – Digipede, Microsoft, Platform, ScaleOut, and others – for the inside skinny on what’s around the corner for their products.

Overall, I have to say that I really enjoyed this conference. This is my second year in attendance, and this year was much more interesting than last. There are few other conferences so focused on giving developers in financial services concrete and practical information on how to become more adept at using Microsoft technology to solve their particular brand of problems. Microsoft’s product teams were in heavy attendance, and the vendors present seemed both relevant and engaged.

I look forward to next year!

Just wanted to give a heads-up that I’ll be speaking at the 6th Annual Microsoft Financial Services Developer Conference next week in New York. Here’s the abstract:

Distributed Caches: A Developer’s Guide to Unleashing Your Data in High-Performance Applications

With the advent of high-performance computing, application developers have begun to deliver computation on a massive scale by distributing it across multiple processors and machines. While distributed computing has given performance-critical applications more processor headroom, it has also shifted attention to previously latent bottlenecks: the storage, replication, and transmission of application data. The scalable power of compute grids is ultimately bound to the data grids that feed them. Without data caching and dissemination tools like distributed caches, it can be difficult to fully leverage a high-performance computing solution like Microsoft Windows HPC Server 2008 in data-intensive financial services applications.

In this talk, Lab49 will present a developer-centric guide to distributed caches, including what services they provide, how they function, how they affect application performance, and how they can be integrated into an application architecture. Lab49 will present test data showing the impact of several different caching strategies (such as compression and data segmentation) on distributed cache performance and will also demonstrate a methodology for testing distributed cache functionality. Lab49 will also discuss other application considerations as well, such as object naming conventions, notifications, and read-through/write-through to an underlying persistent data store.
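For readers new to the read-through/write-through pattern mentioned in the abstract, here is a minimal sketch in Python. The class and method names are hypothetical illustrations, not any vendor’s API; a real distributed cache layers networking, replication, eviction, and notifications on top of this basic idea:

```python
class ReadThroughCache:
    """Toy read-through/write-through cache keyed by object name."""

    def __init__(self, load, save):
        self._load = load    # reads a key from the persistent store
        self._save = save    # writes a key to the persistent store
        self._data = {}      # in-memory cache entries
        self.misses = 0

    def get(self, key):
        if key not in self._data:              # cache miss:
            self.misses += 1
            self._data[key] = self._load(key)  # read through to the store
        return self._data[key]

    def put(self, key, value):
        self._save(key, value)                 # write through to the store
        self._data[key] = value                # keep the cache coherent

# Usage: a plain dict stands in for the underlying database.
store = {"px:IBM": 104.5}
cache = ReadThroughCache(store.__getitem__, store.__setitem__)
```

Write-through keeps the cache and the backing store coherent at the cost of write latency; batched write-behind is the usual alternative when writes dominate.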

Lab49 will also be showing off some cool work we’ve done for the Microsoft Windows HPC Server 2008 product team in the area of computational finance. Ssssh.

Lastly, we’ll be announcing the winner of the Lab49 WPF in Finance Innovation Contest. Somebody is going to take home some really cool prizes. Note that if you have been working on your submission (or you are still toying with the idea), the deadline for submissions has been extended to March 10, 2008. Just enough time to add some extra XAML bling.

Look forward to seeing you there!

This morning I received a gentle but welcome nudge from fellow concurrency author Michael Suess in the form of a flattering pingback from his blog, Thinking Parallel. I had actually stumbled on and subscribed to his blog back in January 2007 when I first began thinking of writing a book, and I’ve been an avid reader ever since. As Michael points out, it’s a relatively small community of blogs dealing with concurrency and parallelism from a software developer’s point of view.

His pingback subtly reminded me that I have committed one of the cardinal sins of blogging: don’t develop an audience and then suddenly go dark. Excusably or not, life outside the blogosphere simply took the front seat this past month. Between an LCD failure on my portable writer’s garret (an IBM Lenovo X60 Tablet) that took two and a half weeks to resolve, and then relocating back to Manhattan after four years of exile in Connecticut, the blog suddenly became like a tollbooth attendant with a new haircut: it simply went unnoticed.

As Mark Twain said, “The reports of my death are greatly exaggerated.” The book is moving along, Lab49 is humming, and my passion for bringing distributed computing to the masses remains undaunted. Stay tuned until next week after I’ve settled into my new digs in the East Village/Lower East Side. In the meantime, check out all the blogs on Michael’s post. I’m a subscriber to them all.

Well, not the Microsoft bashing one.