STREET#GRID 2007: I’m Sorry, I Have a Headache

17 Apr 2007

One of the more interesting, semi-futurist ideas floated during the morning panel discussion at STREET#GRID 2007 yesterday was that job schedulers would begin to use the hardware-monitoring capabilities of modern blade computers to influence task assignments. Kevin Pleiter, IBM's Emerging Business Solutions Executive for the Financial Services Sector, imagined a toolset that would let job schedulers take into account whether a particular blade or rack was running too hot, was disk-bound, was drawing too much power, and so on.

Today you can already get at a lot of interesting system information through technologies like SNMP and WMI. But without coupling that information to more accurate models of how your distributed applications use compute resources such as CPU, bandwidth, and disk, it's nearly impossible for job schedulers to make better decisions about task assignments. For example, if you have a task that is CPU-only, what does it matter if the target resource is disk-bound? What if your task pounds the network interface for its first five seconds but is quiet after that?

Creating accurate models of task resource usage is so far well outside the capability of any distributed computing product on the market. It would be nice, though.
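
As a rough sketch of what such model-aware assignment could look like, here is a toy matcher that weighs each task's expected resource profile against live node telemetry. All of the names, weights, and thresholds below are invented for illustration; nothing here describes an actual product.

    # Hypothetical sketch: match a task's expected resource profile against
    # live node telemetry before assignment. Names and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class TaskProfile:
        cpu: float        # expected CPU demand, 0..1
        disk_io: float    # expected disk pressure, 0..1
        net_burst: float  # expected network burst, 0..1

    @dataclass
    class NodeTelemetry:      # the kind of data SNMP/WMI or blade sensors expose
        cpu_load: float       # current CPU utilization, 0..1
        disk_queue: float     # normalized disk queue depth, 0..1
        net_load: float       # current NIC utilization, 0..1
        temp_c: float         # chassis temperature in Celsius

    def assignment_cost(task, node):
        """Penalize a node only for the resources this task actually needs."""
        cost = (task.cpu * node.cpu_load
                + task.disk_io * node.disk_queue
                + task.net_burst * node.net_load)
        if node.temp_c > 70:  # arbitrary "running too hot" threshold
            cost += 1.0
        return cost

    def pick_node(task, nodes):
        return min(nodes, key=lambda name: assignment_cost(task, nodes[name]))

    # A CPU-only task should not care that a node is disk-bound:
    cpu_task = TaskProfile(cpu=1.0, disk_io=0.0, net_burst=0.0)
    nodes = {
        "blade-01": NodeTelemetry(cpu_load=0.9, disk_queue=0.1, net_load=0.2, temp_c=55),
        "blade-02": NodeTelemetry(cpu_load=0.2, disk_queue=0.9, net_load=0.3, temp_c=60),
    }
    print(pick_node(cpu_task, nodes))  # -> "blade-02", despite its busy disk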

3 Responses to “STREET#GRID 2007: I’m Sorry, I Have a Headache”

  1. Daniel Chait Says:

    Marc –

    Creating accurate models of task resource usage is so far well outside the capability of any distributed computing product on the market.

    True – for the general case – and that's a problem a grid product vendor has when trying to be as general as possible. But as a developer building a particular application, I can quite easily model many of my processes as CPU-bound (simulation, rendering, etc.) or IO-bound (tick data publication) and, without too much sophistication, reap some benefits.
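
    A rough sketch of that kind of coarse tagging (names invented, not taken from any particular grid product) might be as simple as:

        # Tag each task type up front and route it to the least-loaded node of
        # the matching pool. Crude, but it captures the CPU-bound / IO-bound split.
        TASK_CLASS = {
            "monte_carlo_sim": "cpu_bound",
            "scene_render":    "cpu_bound",
            "tick_publish":    "io_bound",
        }

        def route(task_name, cpu_pool, io_pool):
            """Return the node with the most headroom for this class of task."""
            pool = cpu_pool if TASK_CLASS[task_name] == "cpu_bound" else io_pool
            # pool maps node name -> current load of the resource that matters, 0..1
            return min(pool, key=pool.get)

        cpu_pool = {"blade-01": 0.85, "blade-02": 0.10}
        io_pool  = {"blade-03": 0.40, "blade-04": 0.70}
        print(route("scene_render", cpu_pool, io_pool))  # -> "blade-02"
        print(route("tick_publish", cpu_pool, io_pool))  # -> "blade-03"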


  2. Indeed, most of today’s “job schedulers” don’t separate the workload-management layer from the resource-management one.

    This technology is already available and has been proven for years in other verticals managing other resource types (e.g. licenses, concurrent access to storage or databases, and so on). It is implemented as a fairly generic protocol that enriches the resource allocation semantics.

    Hence, whenever the job workload manager requests an allocation from the resource broker, it can simply use the additional semantics to refine its criteria (energy consumption snapshot, CPU temperature, and so on).

    So it is not science fiction – the technology exists. Now it’s up to the vendors to integrate within this framework, promote their “green” awareness, and provide an end-to-end solution stack to customers.

    Cheers,
    Gilles, Green-enabled

  3. Marc Jacobs Says:

    To be clear, I realize that current products can maintain an inventory of resource attributes, such as persistent configuration (OS, RAM, installed packages, subnet address) and transitory statistics (temp, free disk/mem, cpu/net load), and then search their resources against desired criteria. To the extent that you can properly phrase the question to the job scheduler (e.g. “I need a resource that runs Linux 2.6 with 4GB RAM, cool and wide-open net bandwidth”), you can get pretty inventive with task assignments.
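
    A hypothetical sketch of that kind of query (attribute names and values invented, not any product's API):

        # Search an inventory of persistent configuration plus transitory stats
        # against desired criteria. Purely illustrative.
        inventory = {
            "blade-07": {"os": "linux-2.6", "ram_gb": 4, "temp_c": 48, "net_load": 0.05},
            "blade-08": {"os": "linux-2.6", "ram_gb": 8, "temp_c": 81, "net_load": 0.10},
            "blade-09": {"os": "win2003",   "ram_gb": 4, "temp_c": 50, "net_load": 0.02},
        }

        def find_resources(inv, os, min_ram_gb, max_temp_c, max_net_load):
            return [name for name, a in inv.items()
                    if a["os"] == os and a["ram_gb"] >= min_ram_gb
                    and a["temp_c"] <= max_temp_c and a["net_load"] <= max_net_load]

        # "Linux 2.6 with 4GB RAM, cool and wide-open net bandwidth":
        print(find_resources(inventory, "linux-2.6", 4, 60, 0.20))  # -> ['blade-07']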

    What’s more challenging with current technology, though, is a) framing the question in the first place, and b) understanding the time-dependent nature of the question.

    With regards to framing the question, sure, I can run a profiler over a history of runs and produce some averaged pictures of the resource demands of my tasks. Many of the distributed computing tasks I’ve worked on in finance, though, have had their performance picture severely impacted by the input data. On some days they consume fewer resources (fewer iterations until convergence, fewer rows to aggregate, no special events that exercise exception branches); on other days they consume more. Building either a model that predicts resource consumption as a function of the input data, or a system that can react and reschedule tasks based on how the flock of activities is responding to the input data at this particular instant, is harder.
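
    (As a hedged illustration of the first of those, one could imagine fitting a simple predictor from features of the input data to observed resource usage; the features and run history below are entirely made up.)

        # Sketch only: predict a task's CPU-seconds from features of its input
        # data, using a least-squares fit over a history of past runs.
        import numpy as np

        # Each row: (rows to aggregate, count of "special event" records)
        features = np.array([[1_000_000, 0],
                             [2_500_000, 3],
                             [1_200_000, 1],
                             [4_000_000, 7]], dtype=float)
        cpu_seconds = np.array([110.0, 310.0, 140.0, 520.0])  # observed history

        X = np.column_stack([features, np.ones(len(features))])  # add intercept
        coeffs, *_ = np.linalg.lstsq(X, cpu_seconds, rcond=None)

        def predict_cpu_seconds(rows, special_events):
            return float(np.array([rows, special_events, 1.0]) @ coeffs)

        # Today's input file looks bigger and busier than usual:
        print(predict_cpu_seconds(3_000_000, 5))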

    With regards to understanding the time component, adding time to job scheduling begins to shift the problem away from supply/demand matching (relatively simple) toward predictive bin packing (i.e., I’ve got several sets of tasks with distinct CPU, net, and disk curves, and I want to schedule the tasks across machines so that resource utilization is maximized over a time horizon, not just at task start). I’m not sure how hard that is, but it’s definitely harder than what products do now.
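
    (A toy sketch of the packing flavor of that problem, using made-up per-step CPU curves and a greedy first-fit that only checks feasibility over the horizon, not the full optimization described above:)

        # Pack tasks with time-varying CPU curves onto machines so that no
        # machine is overcommitted at any step of the horizon. Greedy first-fit;
        # a real predictive bin packer would have to do much better than this.
        HORIZON = 4
        tasks = {
            "price_sim":   [0.8, 0.7, 0.1, 0.1],  # CPU fraction needed per step
            "tick_burst":  [0.8, 0.1, 0.1, 0.1],
            "aggregation": [0.1, 0.2, 0.8, 0.8],
        }
        machines = {"blade-01": [0.0] * HORIZON, "blade-02": [0.0] * HORIZON}

        def fits(curve, load):
            return all(c + l <= 1.0 for c, l in zip(curve, load))

        assignment = {}
        for name, curve in tasks.items():
            for machine, load in machines.items():
                if fits(curve, load):
                    assignment[name] = machine
                    machines[machine] = [c + l for c, l in zip(curve, load)]
                    break

        print(assignment)
        # price_sim -> blade-01; tick_burst -> blade-02 (it clashes with
        # price_sim at step 0); aggregation -> blade-01 (its peak arrives
        # only after price_sim's has passed).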

