A recent article on Cloud Computing quoting IBM’s Ric Telford caught my attention. He boldly predicts that the term Cloud Computing will become obsolete over the next 5 years while the underlying methodology becomes standard practice for IT. While that level of IT maturation still seems a few years out, I find internal or private clouds to be increasingly prevalent in certain pockets of industries such as financial services, energy and managed services. On the other hand, IT departments in many other industries such as manufacturing, retail and telecom continue to struggle to enable such capabilities. The struggles have not necessarily emanated from a lack of desire, budget or technology leadership, but from cultural challenges such as the following:
(a) Perceived shift in power dynamics. Internal cloud and self-service delivery models make it unclear where traditional IT Operations stops and where end-users (service consumers) start driving the delivery process. Many IT Operations personnel fear that end-users will need to be granted administrator privileges to enable self-service and that it will be difficult to control end-user activity, that end-users will not adhere to well-defined IT standards, and that the requisite training and oversight will dramatically increase Operations’ workload.
(b) The lines between IT silos become blurry. When an end-user group asks for an IT service, that service is traditionally broken down into specific requests in the form of tickets and assigned to specific silos such as Sys Admins, Storage Admins, DBAs, etc. The requests are then individually delivered, often with lots of back-and-forth coordination, with the final service being received and validated by the end-user team. While a cloud-based delivery model aims to improve such outdated modes of functioning, it is also susceptible to severe internal bottlenecks and confusion regarding the right unit of delivery and who needs to own it: the application admins at the top of the stack, or the sys admins at the bottom. Meanwhile, the average service recipient does not care about the individual silos that make up the request; they speak in the "language" of the application, whereas traditional IT Ops speaks in the language of servers, storage, databases and other infrastructure elements. This lack of a common denominator for requesting and delivering self-service capabilities stalls the deployment of internal cloud capabilities in many organizations.
In other words, the true barrier to enabling a private cloud is not technology, but cultural apprehensions that old-world IT bridges by throwing more bodies and silos at the problem – repetitive meetings and water-cooler conversations among different IT personnel and application teams provide a framework for delivery. It’s a lot of wasted effort, but the old-world process does deliver the services (albeit in days, weeks or months!).
CIOs that ultimately succeed in having IT groups embrace internal cloud and self-service models solve these bottlenecks in a way that’s amenable to the overall organizational culture. Some CIOs take baby steps by retaining the existing IT structure and policies and offering specific infrastructure components, such as servers and databases, in self-service mode to project teams. In other words, they attempt to transform IT by offering self-service capabilities on a silo-by-silo basis. However, that is akin to a restaurant delivering specific pre-cooked ingredients to customers and expecting them to assemble those ingredients into the requisite meal. Obviously such “self-service” (if it can still be labeled that) IT delivery models require end-users to be fairly sophisticated in asking for the right resources and able to assemble them all together themselves. This also assumes that end-users are disciplined enough to ask for these ingredients in the right quantities (i.e., they are neither wasteful nor overly cautious). These assumptions rarely hold true, causing the success of such internal cloud strategies to be stunted at best.
A better approach might be to take the time upfront to define a common denominator (across IT silos) and corresponding units for self-service – via a macro-level reference architecture. Take any candidate end-user request or task that can be delivered in self-service mode: e.g., application provisioning, code releases, creating data copies or backups, restoring snapshots, creating users, resetting passwords, resolving incidents, etc. All of these require a universal definition of what’s to be delivered to the end-user. All underlying nuances such as IT silos, their lines of control and associated complexities need to be abstracted from the end-user. The more comprehensively IT is hidden behind the curtain, the better the private cloud implementation.
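To make the idea concrete, here is a minimal sketch of what such a catalog of self-service units could look like, phrased in application language with the silo-level steps kept out of the requester's view. The task names and step names are purely illustrative assumptions on my part, not a prescribed implementation:

    # Hypothetical self-service catalog: each entry is phrased in application
    # language; the IT-silo steps behind it stay hidden from the requester.
    SELF_SERVICE_CATALOG = {
        "provision_application": {
            "description": "Deliver a complete, ready-to-use copy of an application",
            "hidden_steps": ["allocate_vm", "attach_storage", "install_middleware",
                             "create_db_instance", "configure_load_balancer"],
        },
        "refresh_data_copy": {
            "description": "Refresh a non-production environment from production data",
            "hidden_steps": ["snapshot_prod_db", "restore_to_target", "mask_sensitive_data"],
        },
        "reset_password": {
            "description": "Reset an application user's password",
            "hidden_steps": ["verify_identity", "update_directory_entry", "notify_user"],
        },
    }

    def fulfill(task_name):
        """End-users ask for the task by name; the silo-level steps are resolved internally."""
        return SELF_SERVICE_CATALOG[task_name]["hidden_steps"]

    print(fulfill("provision_application"))  # resolved internally by the delivery workflow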
Based on my work with multiple IT organizations across different industries, I find that the most pragmatic unit for self-service comes from the top-most layer of the IT stack – the application. However, utilizing application-specific units is easier said than done since, in most large organizations (the kind that can really benefit from a private cloud), IT operations doesn’t speak the aforementioned “application language”. Even if they do, there are several applications in use and most applications have multiple parts and pieces (i.e., they are all “composite applications”). In order to make application-specific units more palatable to all IT silos, the underlying descriptors of the units have to reference physical infrastructure attributes and map them to their corresponding logical (app-specific) definitions within the reference architecture.
Most applications developed over the last decade are 3-tiered: they have a web server tier, an application server tier and a database tier, along with associated sub-components such as a load balancer tied to a farm of web servers. Hence this popular 3-tier “application template” acts as a robust common denominator for IT. By building self-service capabilities around such a template, most applications within the enterprise are addressed. Of course, certain applications employ just one or two tiers: e.g., server-based batch applications, client/server reporting applications and so on. As long as the tiers used by these disparate applications are a subset of the chosen application template, they are adequately covered. Along the same lines, some applications employ unique middleware components such as a transaction processing monitor, a message bus, queues, SMTP services, etc. Such applications would fall outside the 3-tier template and would need to be dealt with separately. However, with a bit of deeper investigation, I usually find that such applications are vastly outnumbered by the standard application stack that employs some or all of the 3 tiers named above. Hence such exceptions do not (in most cases) make an appreciable difference to the notion of using a single master application template in the reference architecture.
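As a rough illustration (the field names and software choices below are my own assumptions, not a mandated design), a master 3-tier template could be expressed along these lines, with 1- and 2-tier applications simply drawing on a subset of its tiers:

    from dataclasses import dataclass, field

    @dataclass
    class TierSpec:
        name: str            # "web", "app" or "db"
        software: str        # middleware or DBMS running on the tier
        min_instances: int = 1

    @dataclass
    class ApplicationTemplate:
        name: str
        tiers: list = field(default_factory=list)

    # The single master template covering the standard 3-tier stack.
    MASTER_TEMPLATE = ApplicationTemplate(
        name="standard-3-tier",
        tiers=[
            TierSpec("web", software="Apache Tomcat"),
            TierSpec("app", software="WebLogic"),
            TierSpec("db",  software="Oracle DBMS"),
        ],
    )

    # A 2-tier reporting application is still covered: its tiers are a subset
    # of the master template's tiers.
    reporting_app = ApplicationTemplate(
        name="client-server-reporting",
        tiers=[t for t in MASTER_TEMPLATE.tiers if t.name in ("app", "db")],
    )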
By applying the 80-20 rule, one can keep the application template relatively straightforward in the initial iterations of the private cloud implementation and focus on those applications and tasks to be delivered in self-service mode that will have the biggest impact. I have seen this approach work well even in larger organizations with multiple customers or lines of businesses (LOBs) because the underlying definition (of the makeup) of the application remains consistent across these disparate LOBs.
Once the master application template is defined, each unique application type can be assigned a profile that describes the tiers it employs. For instance, AppProfile1 (or simply, Profile1) can refer to applications that always have a web server, application server and a database instance, whereas Profile2 can refer to applications that utilize a 2-tier model (just an app server and a database), and so on. Using concepts of polymorphism, all these profiles should refer to the master application template such that any changes to the base template are immediately reflected across all profiles. During future iterations, additional templates and profiles can be defined to refer to applications that may have additional tiers such as a message bus. These definitions can be laid out on paper (and later implemented using an automation tool), or ideally, set up within an automation platform such as Stratavia’s Data Palette from the very beginning.
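One simple way to capture that behavior (sketched here with hypothetical names, not as Data Palette's actual implementation) is to have each profile look up the shared master template at run-time rather than copy it, so that a change to the base definition is visible in every profile immediately:

    # The shared base definition; editing it once affects every profile below.
    MASTER_TEMPLATE = {
        "web": {"software": "Apache Tomcat"},
        "app": {"software": "WebLogic"},
        "db":  {"software": "Oracle DBMS"},
    }

    class AppProfile:
        """Base profile: resolves its tiers against the master template at run-time."""
        tier_names = ("web", "app", "db")

        def tiers(self):
            # Look up the master template on every call, so edits propagate.
            return {name: MASTER_TEMPLATE[name] for name in self.tier_names}

    class Profile1(AppProfile):
        """Full 3-tier applications: web server + app server + database."""
        tier_names = ("web", "app", "db")

    class Profile2(AppProfile):
        """2-tier applications: just an app server and a database."""
        tier_names = ("app", "db")

    MASTER_TEMPLATE["web"]["software"] = "Microsoft IIS"  # change the base once...
    print(Profile1().tiers()["web"])                      # ...and Profile1 reflects it immediately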
Regardless of the application profiles (the fewer the better in the initial phase), the deployment needs to adhere to one key point: keep the individual components for each tier hidden from the end-user while giving IT teams full control over those components. By “components”, I’m referring to the following:
(i) infrastructure layers: i.e., servers (virtual or physical), storage (SAN, NAS, etc., depending on IT standards and budget sizes), networking capacity, etc.; and
(ii) software layers: the web server, application server and database instance. Compared to physical infrastructure like servers and storage, each of these software layers often encompasses a “best of breed” configuration across a broader variety of vendors, proprietary platforms and open source frameworks (such as Apache Tomcat, Microsoft IIS, WebLogic, WebSphere, Oracle DBMS, SQL Server, SAP Sybase, VMware SpringSource, etc.), and hence tends to be much more complex and time-consuming to provision, configure and manage on an ongoing basis. This is why the overall success of the deployment calls for limiting the number of application templates and profiles during the initial phase.
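To give a feel for what sits behind a single tier, here is a simplified, purely illustrative description of a web tier's components (the sizes, platform choice and baseline name are assumptions); none of this detail is exposed to the end-user:

    # Illustrative web-tier makeup: the infrastructure layer plus the software
    # layer chosen per the organization's IT standards. End-users see only the
    # application tier, never these individual pieces.
    WEB_TIER_COMPONENTS = {
        "infrastructure": {
            "server":  {"type": "virtual", "vcpu": 2, "memory_gb": 8},
            "storage": {"kind": "NAS", "size_gb": 50},
            "network": {"vlan": "qa-web", "load_balanced": True},
        },
        "software": {
            "web_server": "Apache Tomcat",   # could equally be Microsoft IIS per standards
            "version": "7.0",
            "config_baseline": "corp-web-hardening-v2",
        },
    }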
Each application profile should describe what quantities of which specific components are needed by each tier - for initial provisioning as well as subsequent (incremental) provisioning. For instance, a ‘web server’ tier in the Sales Force Automation (SFA) suite may comprise a virtual machine with X amount of CPU, Y amount of disk space and a standard network configuration running Apache Tomcat. When an authorized end-user (e.g., a QA Manager) asks for the SFA application to be delivered to a new QA environment, the above web server unit gets delivered along with the corresponding application server and database instance – all of them pre-configured in the right units and ready for the end-user to access! However, when an authorized end-user asks for more web servers to boost performance in the production SFA environment, what gets delivered is N units of only the web server tier, with specific pre-defined configuration to tie them to the corresponding (previously existing) application server and/or database. All of these mappings may be held in a CMDB (ideally!) or in some kind of operations management database and referenced at run-time by the automation workflows that are responsible for service delivery.
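Sticking with the SFA example, the sketch below (with made-up resource quantities and a plain dictionary standing in for the CMDB) contrasts an initial request, which delivers every tier of the profile, with an incremental request, which delivers only additional web servers wired to the existing tiers:

    # Illustrative profile: quantities per tier for initial provisioning.
    SFA_PROFILE = {
        "web": {"software": "Apache Tomcat", "vcpu": 2, "disk_gb": 50,  "count": 1},
        "app": {"software": "WebLogic",      "vcpu": 4, "disk_gb": 100, "count": 1},
        "db":  {"software": "Oracle DBMS",   "vcpu": 8, "disk_gb": 500, "count": 1},
    }

    CMDB = {}  # stand-in for the CMDB / operations database holding the mappings

    def provision_environment(app, env):
        """Initial provisioning: deliver every tier of the profile, pre-wired together."""
        CMDB[(app, env)] = {tier: dict(spec) for tier, spec in SFA_PROFILE.items()}

    def add_web_servers(app, env, n):
        """Incremental provisioning: add N web servers tied to the existing app/db tiers."""
        CMDB[(app, env)]["web"]["count"] += n

    provision_environment("SFA", "QA1")    # QA Manager's request: all three tiers delivered
    provision_environment("SFA", "PROD")
    add_web_servers("SFA", "PROD", 2)      # scale-out: only the web tier grows
    print(CMDB[("SFA", "PROD")]["web"]["count"])  # -> 3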
The application templates and profiles with quantitative resource descriptions allow end users to receive services in an application-friendly language, while allowing IT Ops to provision and control individual pieces of server, network, disk space, database, etc., using their existing IT standards and vocabulary. All underlying components blend together into a single pre-defined unit, enabling the well-oiled execution of a self-service delivery model within the private cloud implementation.
Using application provisioning as an example, the graphic below depicts how various application profiles comprising multiple software and infrastructure layers can be defined.
Regarding the other previously stated bottleneck (i.e., the perceived shift in power dynamics): while IT Ops personnel are now behind the curtain, that doesn’t mean they are any less influential. Using automation products such as Data Palette, they are able to exert control both in the initial stages of service definition (along with the Operations Engineering or Architecture groups) and during the ongoing service delivery process, by monitoring and tuning the underlying automation workflows (again, in conjunction with Operations Engineering) as depicted in the graphic below. In other words, they define and control what automation gets deployed and the exact sequence of steps for delivering a particular component of an application in the environments they manage. Specific product capabilities such as multi-tenancy, granular privileges and role-based access control allow them to grant end-users the ability to see, request and perform specific activities in self-service mode without also granting administrative privileges that could potentially be misused.
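The access-control idea can be sketched generically (this is not Data Palette's actual API; the role and workflow names are hypothetical): end-user roles may request specific self-service workflows, while only Ops roles can edit workflows or hold administrative access:

    # Hypothetical role definitions: end-users get self-service rights only.
    ROLE_PERMISSIONS = {
        "qa_manager":   {"request_workflow": {"provision_sfa_env", "refresh_sfa_data"},
                         "admin_access": False},
        "ops_engineer": {"request_workflow": {"*"}, "edit_workflow": {"*"},
                         "admin_access": True},
    }

    def can_request(role, workflow):
        allowed = ROLE_PERMISSIONS.get(role, {}).get("request_workflow", set())
        return "*" in allowed or workflow in allowed

    def is_admin(role):
        return ROLE_PERMISSIONS.get(role, {}).get("admin_access", False)

    print(can_request("qa_manager", "provision_sfa_env"))  # True: self-service allowed
    print(is_admin("qa_manager"))                          # False: no admin privileges granted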
As IT Ops personnel continually assess areas of inefficiency, they can refine the workflows and roll out improvements with little to no impact on end users. Such efforts, while changing the job description of what IT Ops does on a day-to-day basis, help deliver a better end-user experience and enable companies to leapfrog from the IT of yesteryear to the “industrialization of IT” that IBM’s Telford refers to.