About three weeks ago, I had the opportunity to sit down with Bill Bain of ScaleOut Software and the two Joes, Joe Cleaver and Joe Rubino, from Microsoft’s Financial Services Industry Evangelism team after I gave my presentation on distributed caches at Microsoft’s 6th Annual Financial Services Developer Conference. The two Joes recorded a podcast of our conversation.

Bill, Joe, and Joe, thanks for the opportunity to talk with you guys.

Dataflow is about creating a software architecture that models a problem on the functional relationship between variables rather than on the sequence of steps required to update those variables. It’s about shifting control of evaluation away from code you write toward code written by someone else. It’s about changing the timing of recalculation from recalculate now to recalculate when something has changed. Sure, it’s a distinction that may have more to do with emphasis and point of view than with paradigm, but it can be a liberating distinction for certain problems in financial modeling.

If you work in finance, chances are you're already an expert in today's preeminent dataflow modeling language: Microsoft Excel. Excel is the undisputed workhorse of financial applications, taught in every business school, run on every desk, wired into the infrastructure of nearly every bank, fund, or exchange in existence. The reason for Excel's singularity in the black hole of finance is its ability to emancipate modeling from code (and thus developers) and empower analysts and business types alike to create models as interactive documents. Make no mistake — writing workbooks is still very much software development. But Excel's emphasis on data rather than code, relationships rather than instructions, is something that fits with the work this industry does and the people that do it.

Briefly, when you model in Excel, you specify a cell’s output by filling it with either a constant value or a function. Functions are written in a lightweight language that allows function arguments to be either constant values or references to another cell’s output. In the typical workbook, cells may reference cells that in turn reference other cells, and so on, resulting in an arbitrarily sophisticated model that can span multiple worksheets and workbooks. The point though is that, rather than specifying your model as a sequence of steps that get executed when you say go, here you describe your model’s core data relationships to Excel, and Excel figures out how and when it should be executed.
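
To make that dependency-driven evaluation concrete outside of Excel, here's a minimal sketch (Python, purely illustrative; the Cell class and its methods are my own invention, not part of Excel or any particular library) of cells that hold either a constant or a function of other cells, where the engine, not the caller, decides when to recalculate:

```python
# A toy dataflow "spreadsheet": a cell is either a constant or a function of
# other cells. Setting a cell's value invalidates its dependents, and reads
# recalculate lazily.

class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula = formula          # callable taking input values, or None
        self.inputs = list(inputs)      # cells this cell depends on
        self.dependents = []            # cells that depend on this cell
        self._value = value
        self._dirty = formula is not None
        for cell in self.inputs:
            cell.dependents.append(self)

    def set(self, value):
        """Change a constant cell and mark everything downstream stale."""
        self._value = value
        self._invalidate_dependents()

    def get(self):
        """Recalculate only if an upstream value has changed."""
        if self._dirty:
            self._value = self.formula(*(c.get() for c in self.inputs))
            self._dirty = False
        return self._value

    def _invalidate_dependents(self):
        for cell in self.dependents:
            if not cell._dirty:
                cell._dirty = True
                cell._invalidate_dependents()

# "Worksheet": price depends on the clock and drift, just like cell references.
clock = Cell(value=0)
drift = Cell(value=0.05)
price = Cell(formula=lambda t, r: 100.0 * (1 + r) ** t, inputs=(clock, drift))

print(price.get())   # 100.0
clock.set(2)         # advancing the clock marks price stale
print(price.get())   # about 110.25, recalculated on demand
```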

Example: An Equities Market Simulation

Let’s say that we are writing a simulation for an equities (stock) market. Such a simulation could be used for testing a trading strategy or studying economic scenarios. The market comprises many equities, and each equity has many properties: some change slowly over time (such as ticker symbol or inception date), while others change frequently (such as last price or volume). Some properties may be functions of other properties of the same equity (such as high, low, or closing price), while others may be functions of properties on other equities (as with haircuts, derivatives, or baskets).

As a starting point, we introduce a simulation clock. Each time the clock advances, the price of all equities gets updated. To update prices, we use a random walk driven by initial conditions (such as initial price S0, drift r, and volatility σ), a normally distributed random variable z, and a recurrence equation over n intervals of t years: 

S_{n} = S_{n-1} \cdot \exp(r t - 0.5 \sigma^2 t + \mathbf{z} \sigma \sqrt{t} )

Note: This equation produces a lognormal random walk [1,2], which means that instead of getting the next price by adding small random price changes to the previous price, we’re multiplying small random percentages against the previous price. This makes sense for things like prices since a) they can’t be negative, and b) the size of any price change is proportional to the magnitude of the current price. In other words, penny stocks tend to move up and down by fractions of a penny, while stocks trading at much higher prices tend to move up and down in dollars.
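
To see the recurrence in action, here's a small sketch (Python; the ticker symbols and parameter values are arbitrary, chosen purely for illustration) that advances a simulated clock a few ticks and applies the update to each equity's price:

```python
import math
import random

def step_price(prev_price, r, sigma, t, z):
    """One tick of the lognormal random walk:
    S_n = S_{n-1} * exp(r*t - 0.5*sigma^2*t + z*sigma*sqrt(t))."""
    return prev_price * math.exp(r * t - 0.5 * sigma * sigma * t
                                 + z * sigma * math.sqrt(t))

# Arbitrary initial conditions for a toy two-equity market.
equities = {"ABC": 100.0, "XYZ": 2.50}
r, sigma = 0.05, 0.20          # drift and volatility (annualized)
t = 1.0 / 252.0                # one trading day, in years

for tick in range(5):          # advance the simulation clock five times
    for symbol in equities:
        z = random.gauss(0.0, 1.0)   # standard normal draw
        equities[symbol] = step_price(equities[symbol], r, sigma, t, z)
    print(tick, equities)
```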

In Excel, you could model this market by plopping the value of the clock into a cell, setting up other cells to contain initial conditions, and then have a slew of other cells initialized with functions that reference the clock and initial conditions cells and that calculate a new price using the above equation for each virtual equity. And then hit F9.

But how would you write this in code? Would you just update the clock and then exhaustively recalculate all of the prices? If you had to incorporate equity derivatives or baskets, would your architecture break? How would you allow non-programming end-users to declaratively design their own simulation markets and the instruments within?

Recently, one of our financial services clients at Lab49 has been trying to solve a similar problem in .NET, and I had been suggesting to them that the problem is analogous to how Microsoft Windows Presentation Foundation (WPF) handles the flow of data from controller to model to view. Dependency properties, which form the basis of data binding in WPF applications, implement a dataflow model similar to Excel, and what I had in mind at first was a solution inspired by WPF. But the more I discussed this analogy with the client, the more I realized that we didn’t just have to use WPF as inspiration; we could actually use WPF.

In this series, I’ll dive further into creating the equities market simulation and look at how to use WPF data binding to create a dataflow implementation. Note that there are several considerations to this approach, and, under the category of just because you can doesn’t mean you should, we’ll evaluate whether or not this method has legs.

[to be continued]

It seems like just yesterday that 10Mbit 10BASE-T Ethernet networks were the norm, and the workstation wonks I worked with years ago at US Navy CINCPACFLT in Pearl Harbor, Hawaii, jockeyed to have high-speed ATM fiber run to their offices. Sure, this was the age when dual bonded ISDN lines represented the state of the art in home Internet connectivity, but who really needed that much bandwidth? What did we have to transfer? Email? Usenet posts? Gopher pages?

Then, slowly over the course of the next 5-10 years, network vendors upgraded their parts at negligible marginal cost, and 100Mbit began to pervade enterprise networks of all sizes, democratizing fast file transfers and streaming multimedia. 100Mbit seemed pretty fast. The Next Big Thing, Gigabit Ethernet, seemed a pipe so big it couldn’t be saturated and certainly not worth the exorbitant prices the hardware was going for at the time.

This week, I’ve been helping out one of our Lab49 teams working at a big investment bank to run performance tests on a large distributed cache deployed on a 96-node farm, all connected over Infiniband. Interestingly, when we ran these brutal performance tests on a homegrown Gigabit blade setup with eight nodes, it was pretty trivial to saturate the network while the CPU idled along at about 3% utilization. Running them on the 96-node Infiniband system, though, the same tests pegged the CPUs while the network drank mint juleps on the veranda during the first warm day of spring.

The sad thing about this situation is that Gigabit is only now getting sufficient uptake in enterprise NOCs that it is worthwhile to upgrade the NICs out at the clients. The last mile, just like with 100Mbit, is taking a while. Despite the fact that Gigabit just hasn’t been with us that long, it’s pretty clear to me that, with the snowballing of HPC, CEP, real-time messaging, and P2P network services (not to mention HD audio and video), Gigabit will be led out to pasture before it ever really gets a chance to race.

We may all still be using Gigabit at the desktop for years to come (or not, if 802.11n and its offspring steal the show), but between Infiniband and all the activity we’ve been seeing now in 10Gbit (as evident during the SC07 conference in Reno, NV last November), Gigabit just isn’t “it” anymore. Just like pet rocks were in the late ’70s, Gigabit is a temporary salve for a social ill that ultimately requires a more vital solution. I may not yet know clearly what it is, but I know it ain’t Gigabit.

The Marc Jacobs Utilization Meter has been pegged for at least two weeks now on a combination of client work, internal projects, recruiting, and writing (hence the appearance of my blog having fallen down a well). It’s great to be busy, but I hate seeing the blog go stale.

In any event, I had an article published in GRIDtoday this morning entitled, “Grid in Financial Services: Past, Present, and Future”. Derrick Harris, the editor of GRIDtoday, reached out for an article after reading my multi-part series on “High Performance Computing: A Customer’s Perspective”. A big thanks to Derrick for giving me this opportunity.

Over the past few months at Lab49, we’ve thrown ourselves into complex event processing (CEP) — aka event stream processing (ESP) — and have been formulating exactly how and when it fits into the larger, more comprehensive technology stack found in global financial services institutions. We’ve formed a number of interesting vendor partnerships, attended product training, sampled, compared, and teased apart many of the popular products, and we’ve created several CEP-based demo applications that have been shown at recent events like SIFMA.

Along the way, we’ve all learned a lot about CEP, and the more I learn, the more I dig it. The more I put CEP into practice, the more I foresee its ultimate dominance as an architectural design pattern for everyday development.

What’s fascinating to me about CEP isn’t that it’s a new idea, despite how it may be touted by vendors. Regardless of the hype, CEP isn’t the most revolutionary technology you’ve never heard of. What’s fascinating is that, from a decades-old primordial soup of ideas, research, and trial and error that, in and beyond academia, has been trying to create architectural models around complex data problems with real-time constraints, enough best practices and design patterns have emerged to evolve an ecosystem of market entrants, seemingly all at once.

It’s not the first time that a bundle of quality design patterns took concrete form as a technology. Object pooling, lifetime management, transaction enlistment, and crash domains begat COM+, Microsoft Component Services, and J2EE application servers. Logging levels, external configuration, and adaptable logging sinks begat log4j, syslogd, and the Logging Application Block from the Microsoft Enterprise Library. Unit testing and test-driven development begat JUnit and its children.

These transformations have been crucial. Once developers accepted these patterns and solutions as sufficiently solved and commoditized, they were saved considerable time and attention. Freed of coding logging libraries and unit testing frameworks for the umpteenth time, developers could focus more on the business problem being solved rather than the infrastructure details required to solve it.

But these transformations didn’t really upset the gross architecture of applications. They may have changed some of the design decisions and simplified the implementations, but they didn’t fundamentally change the abstraction you would use to model a problem and architect a solution.

CEP, on the other hand, does.

Instead of storing and indexing miles of cumulative data in a persistent store to service complex queries in batch/polling fashion, CEP inverts the whole shebang, storing and indexing the complex queries themselves and then streaming data across them without storing a lick. The transformation of a business problem from tables, rows, and polling intervals into events, filters, triggers, and real-time reactions is not only quite enabling, it changes the very way you think about how business problems can be solved and which problems may have viable solutions.
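
As a rough sketch of that inversion (Python, purely illustrative; the function names here are mine, not any CEP vendor's API), imagine registering standing queries up front and then pushing each event across them as it arrives, persisting nothing:

```python
# A toy continuous-query engine: the queries are registered up front; events
# stream through and trigger reactions immediately, with nothing persisted
# for later batch querying.

standing_queries = []

def register(predicate, action):
    """Store the query, not the data."""
    standing_queries.append((predicate, action))

def on_event(event):
    """Push each incoming event across every standing query."""
    for predicate, action in standing_queries:
        if predicate(event):
            action(event)

# Standing query: react to any large IBM trade the moment it arrives.
register(lambda e: e["symbol"] == "IBM" and e["size"] >= 10_000,
         lambda e: print("block trade alert:", e))

# Simulated event stream.
for event in [{"symbol": "IBM", "size": 500,    "price": 120.10},
              {"symbol": "IBM", "size": 25_000, "price": 120.15},
              {"symbol": "MSFT", "size": 1_000, "price": 29.90}]:
    on_event(event)
```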

Over the next few weeks, I’ll delve a bit more into CEP and how it relates to technologies you might be more familiar with. In the meantime, check out some of the in-depth blog entries other folks from Lab49 have been writing about CEP.

Steve Tally of Purdue University has written a wonderful overview of the critical issue facing high-performance computing today: performance gains are no longer tied to transistor counts but to new concurrent hardware architectures, and the programmer base is lagging behind in the skills necessary to drive those architectures.

The fastest of the fastest computers — supercomputers used at national research centers, research universities and major corporations — will soon gain even more performance by taking advantage of multicore computing.

Despite the promise of almost unimagined computing power, however, even computing experts wonder whether this time the hardware developers have raced too far ahead of many programmers’ ability to create software.

From ACM TechNews.

For years now, the Tastes Great, Less Filling argument pervading the high-performance computing debate has been Scaling Up, Scaling Out. I’m not sure who penned it originally, but I remember first hearing it back in 1999 while Microsoft was trying to position Windows as a compelling enterprise server platform in the face of multi-way Big Iron from Sun, HP, and others, especially during a period where dual processor Intel configurations were exorbitantly expensive. The Scaling Out story warned of the high costs, low ROI of Scaling Up and beckoned with perfectly elastic scalability. Scaling Up promised better performance potential for particularly ravenous applications and a less ungainly programming and administration model.

Even though, just like that old Miller Lite commercial, the argument ultimately had much more to do with politics and economics than it did with high-performance computing, it still continues to have legs. Over the last five years, just as Scaling Out had seemingly all but usurped the spotlight, the silent multicore revolution has helped Scaling Up elbow its way back to the stage. Scaled-out grids are secretly scaling back up in the natural course of IT departments undergoing periodic server upgrades. The current pricing structure of server hardware is compelling enterprises to specify 64-bit multi-core systems, even though a majority of enterprises have only 32-bit serial applications to run on them. Enterprise grids are getting both bigger and beefier. And though the argument between Scaling Up and Scaling Out lives on and remains an amusing debate for well-heeled parallelogians and gridmakers, the real issues affecting high-performance scalability are seeping out elsewhere.

Earlier this week, I was speaking with Jeff Wierer, Senior Product Manager on the Microsoft High-Performance Computing team. He had recently been in London for a Compute Cluster Server User Group meeting, and the scalability issue on everyone’s mind was physical infrastructure. Specifically, electrical power.

As many of you may already know, London is moving up fast as a world financial capital. Lab49’s London office has been rained upon with fascinating projects from key financial services customers demanding algorithmic trading, computational finance, real-time data visualization, and high-performance computing. London’s financial institutions have been bulking up their computing horsepower and looking ahead to rich times. Unfortunately, there’s one little snag.

Jeff told me that some of his customers in London have heard anecdotal reports that the National Grid in the UK is either unable or unwilling to provide the additional physical infrastructure required to support the concentration of power demand coming from the burgeoning financial center in the city. Short of ripping out and starting over, the National Grid may be stuck advising its customers to be happy with the power they got (despite ominous blackouts).

While I haven’t been able to find independent confirmation of this yet, it seems entirely possible given the recent power shortages on the East Coast, the West Coast, and in Europe at large, as well as the privatization of European electrical distribution, which limited the amount of capital investment available to upgrade power infrastructure. (There are no bigger pockets than the pockets of Big Government…)

In the face of massive metropolitan and regional power crises, the argument between Scaling Up and Scaling Out is irrelevant (and rather pedantic, in my opinion). The real issue is between Scaling In and Scaling Abroad. And like Tastes Great, Less Filling, it really isn’t a debate at all because both are ultimately necessary ingredients of truly scalable architectures.

Scaling In is about putting hardware on a power diet. Let’s just assume that our cores are running at 100%, 100% of the time. Fun power-management ideas like Intel SpeedStep, hard drive spindown, or monitor sleeping that reduce power consumption for workstations and laptops aren’t going to bail out institutional data centers. That will require new, more efficient processors and cooling systems, better rack and blade designs, distributed flash-based memories, and a slew of as-yet-unimagined inventions that software engineers couldn’t help with even if they wanted to.

Scaling Abroad is about scaling out to multiple geographical locations. This is a notion commonly attributed to high availability, business continuity, and disaster recovery, but it’s clearly also a scalability issue now, particularly at the scale of the largest grids (for example, Amazon, Google, and Microsoft). If Google housed all of its computing power in Mountain View, the Governator would come down and force Larry Page and Sergey Brin to spelunk through Baja in search of the blown fuse that fritzed California.

At one point during my tenure at Ellington Management Group, air conditioning and server airflow were the biggest obstacles to growing our cluster. As our server density grew from workstations to rackmount servers, from single-processor machines to dual-proc with Hyper-Threading, our server room became a sauna. At first, we bought a bunch of digital thermometers, stuck them around the server room with double-sided tape, and made rounds periodically to check that there were no dangerous hot spots. After we missed a couple of rounds over a weekend and carbonized some silicon, we hooked up SNMP monitors to the built-in motherboard and case thermometers and bought a slew of fans and portable air conditioning units. We ultimately did the right thing and upgraded the HVAC, but not without having to do reconstructive surgery on our office space, since our office had neither adequate amperage nor the ducting and venting to handle an appropriately sized cooler.

Seemed like a tough problem at the time.

Seems quaint now.