On October 1, Stratavia released version 4 of Data Palette. I have been looking forward to this event for the following reasons:
1. Version 4 extends Data Palette’s automation capabilities beyond the database to cover the entire IT lifecycle. This extension centers on three core areas:
- provisioning, including server, storage and application provisioning;
- service request and change management automation (e.g., operating system patches, application rollouts, system upgrades, user account maintenance, data migration, etc.); and
- alert remediation and problem avoidance.
2. It expands Data Palette’s predictive analytics to storage management and capacity planning.
3. Further, it allows these predictions to be combined with root-cause data and event-correlation rule sets to enable decision automation in fast-changing, heterogeneous environments.
I realize this release of Data Palette casts a rather wide net in the data center automation realm. However, this is not mere bravado from Stratavia, and it is certainly not mere coolness (“coolness” being a bunch of widgets that make for good market-speak but don’t quite find their way into customer environments). If anything, it reflects the fact that data center automation is just not doable without a broad spectrum of capabilities. For an automation strategy to succeed, it simply needs to cast a comprehensive net, connecting all the diverse areas of functionality via a shared backbone (the latter being especially key).
Let’s look at a couple of contrasts to understand this better. Take Opsware, for example. They started out with basic server provisioning and then realized there’s more to automation. So they gobbled up a few more vendors (Rendition Networks for network provisioning, Creekpath Systems for storage provisioning and iConclude for run book automation) that couldn’t quite hack it on their own. Then, before Opsware could expand any further, they ended up being bought by HP.
Another example is BMC – a relatively late entrant to the IT automation space. After wallowing in enterprise monitoring and incident management toolsets for years, BMC made headlines a few months ago when it acquired run book automation vendor RealOps. It followed that up with the recent purchase of network automation provider Emprisa. I’m sure we will continue to see more purchases from HP, BMC and others as they continue to fill out the automation offerings in their portfolios (and try to stick them all together with bubble-gum, prayers and wads of marketing dollars!).
Buying a bunch of siloed (read, disjointed) tools does not an automation solution make! Why? Because there is no shared intelligence across all these tools. There is no singular policy engine commonly referenced by all of them. There are no central data collection, event detection and correlation capabilities. When you define policies or rules in one tool, you often need to repeat the same definition in different ways in the other tools, significantly increasing maintenance overhead. When a policy or rule changes, you need to go to N different places (N being all the locations where it is defined separately) – assuming it is even feasible to recall exactly which areas need to be updated. If you have more than a couple of dozen automation routines in place, good luck trying to reconcile what these things do, what areas they touch and what policies are duplicated across them.
In the case of Data Palette, this issue is addressed by its central Expert Engine architecture. The engine allows shared use of policy and metadata definitions: they are defined once and can be referenced across multiple automation routines and event rule sets. In other words, all the different areas of functionality and touch-points speak the same language.
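To make the idea concrete, here is a minimal sketch of a shared policy registry: one definition, looked up by both an event rule and an automation routine, so a change is made exactly once. The names (PolicyRegistry, ThresholdPolicy, etc.) are purely illustrative and are not Data Palette’s actual interfaces.

```python
from dataclasses import dataclass

# Illustrative only: a single registry of policy definitions that every
# automation routine and event rule set references, instead of each tool
# keeping its own copy of the same policy.

@dataclass
class ThresholdPolicy:
    metric: str
    yellow: float
    red: float

class PolicyRegistry:
    """One place where policies are defined and updated."""
    def __init__(self):
        self._policies = {}

    def define(self, name, policy):
        self._policies[name] = policy

    def get(self, name):
        return self._policies[name]

# Defined once...
registry = PolicyRegistry()
registry.define("tablespace_usage", ThresholdPolicy("pct_used", yellow=80, red=90))

# ...referenced by an event rule set...
def evaluate_alert(registry, metric_value):
    policy = registry.get("tablespace_usage")
    if metric_value >= policy.red:
        return "red"
    if metric_value >= policy.yellow:
        return "yellow"
    return "ok"

# ...and by an automation routine, with no second copy to keep in sync.
def remediation_needed(registry, metric_value):
    return evaluate_alert(registry, metric_value) == "red"

print(evaluate_alert(registry, 85))      # yellow
print(remediation_needed(registry, 95))  # True
```

When the red threshold moves from 90 to 95, it changes in the registry once; both the alerting rule and the remediation check pick it up automatically.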
Stratavia has dubbed its 4.0 release the industry’s “most intelligent” data center automation platform. Obviously, intelligence can be a subjective term. I actually see it as the most comprehensive data center automation platform of its kind (look Ma, no bubble-gum…).
2 comments:
Interesting article. While you make a good point about shared intelligence across toolsets, I would like some anecdotal evidence or examples of how a single policy can be applied across tools. Since the tools all have different functionality/scope, why would such a thing even be required?
Am I missing the obvious?
Let me provide a few quick examples of policies that need to be defined in one place but used across multiple areas or functions:
- Maintenance Windows: Monitoring needs to be disabled (paused) during maintenance windows, but maintenance-related automation routines need to run during this same window. So both monitoring and automation functions need to be aware of these windows, especially when they change, so they can adapt accordingly. You don't want to define maintenance windows separately within the monitoring tool(s) and within the automation tool(s). There's got to be one central (shared) definition. And when the maintenance window changes, that single definition is the only one that needs updating.
Other examples are action-triggering thresholds (what are your yellow/red thresholds, and what should happen when any of them is exceeded...), notification policies (who should be contacted when monitors need to fire an alert? Chances are, the same set of folks needs to be notified when an automation routine fails...) and escalation policies. And how should these things change during different shifts, or when existing employees leave and new ones come onboard?
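Here's a rough sketch of what that looks like in practice: one shared maintenance-window and notification definition consumed by both the monitoring side and the automation side. The structure and names are hypothetical, not Data Palette's actual API.

```python
from datetime import datetime, time

# Hypothetical sketch: one shared definition of a maintenance window and a
# notification policy, used by both monitoring and automation logic. Change
# the window or the contact list here, and both sides pick it up.

MAINTENANCE_WINDOW = {"day": "Sunday", "start": time(2, 0), "end": time(6, 0)}
NOTIFY_ON_FAILURE = ["dba-oncall@example.com", "ops-manager@example.com"]

def in_maintenance_window(now: datetime) -> bool:
    """True if 'now' falls inside the shared maintenance window."""
    return (now.strftime("%A") == MAINTENANCE_WINDOW["day"]
            and MAINTENANCE_WINDOW["start"] <= now.time() < MAINTENANCE_WINDOW["end"])

def notify(recipients, message):
    """Stub for whatever notification mechanism is in place (email, pager, etc.)."""
    for r in recipients:
        print(f"notify {r}: {message}")

# Monitoring side: pause alerting during the window.
def should_alert(now: datetime, check_failed: bool) -> bool:
    return check_failed and not in_maintenance_window(now)

# Automation side: run maintenance jobs only inside the same window, and
# notify the same people the monitors would if the job fails.
def run_patch_job(now: datetime, do_patch) -> None:
    if not in_maintenance_window(now):
        return
    try:
        do_patch()
    except Exception as exc:
        notify(NOTIFY_ON_FAILURE, f"Patch job failed: {exc}")

# Example: Sunday 03:00 is inside the window, so alerts are suppressed
# and the patch job is allowed to run.
sunday_3am = datetime(2007, 10, 7, 3, 0)
print(should_alert(sunday_3am, check_failed=True))   # False
run_patch_job(sunday_3am, do_patch=lambda: None)
```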
These are some quick examples, but there are several more.
In short, automation is not something you do in isolation with a bunch of siloed tools and scripts. A change in one area can have ripple effects both upstream and downstream. Having a central platform with a shared policy engine actually makes automation doable.