Saturday, May 24, 2014

Intel's Four OpenStack Pillars For Private Clouds

Unless you have been living under a datacenter technology-free rock, you know that this week is the OpenStack Summit 2014 in Atlanta, Georgia. OpenStack has fast become the open standard for what I consider a "cloud operating system." Every one of the infrastructure giants, Dell, HP, and IBM, has its own strategy for OpenStack, primarily focused on the enterprise's private cloud. What you may not be aware of is that Intel has made enormous investments in and contributions to the OpenStack framework. While this column offers a somewhat detailed view of what Intel is up to, you can find a deeper dive in our free white paper, entitled Intel SDI and OpenStack Tackle Private Clouds.
While much of the industry dialogue has centered on public clouds like Amazon Web Services, Google Compute Engine, and Microsoft Azure, enterprises can't just flick a switch and start deploying to a public cloud. There are major hurdles to overcome: applications may need to be rewritten because the platform isn't supported, data security and data-residency rules must be satisfied, virtualization schemes may be incompatible, and integration with existing systems adds complexity. Most enterprises are looking at a hybrid approach, starting by building their own OpenStack-based private cloud that bursts to an OpenStack-based public cloud. This appears to be the middle ground enterprises are landing on. Intel is making very major investments in OpenStack for private clouds, too. Why Intel? Intel has approximately 95% share in servers, around 50% in storage, and 10% in networking. By investing to get ahead of the development curve, Intel is hoping not only to grow the size of the datacenter pie (TAM expansion), but also to maintain its dominance in servers while increasing share in storage and networking. Intel knows that massive gains and risks happen in industry transitions, and OpenStack and cloud-based computing are key enablers of this transition.
So what is Intel up to and what are their OpenStack focal areas for the private cloud? This may seem a bit techy, but please have patience and read on.
"As a Service" capabilities for decoupled application development
Application development environments need to be simpler and enable more reuse of code, data, and business logic. Application developers are quickly discovering the benefits of decoupled, modular logic using programming languages like Erlang, Go, Python, and Ruby. This new development approach allows developers to focus on the logic needed to create value and to use CI/CD (continuous integration / continuous delivery) principles to integrate more common, off-the-shelf capabilities. Amazon has done an excellent job of providing a catalog of services to support a decoupled application development methodology that helps developers quickly get their applications to market. Intel is working with the OpenStack community to improve the robustness of application services being developed in projects like Marconi (messaging), Trove (database), Ironic (bare metal provisioning), and Sahara (data analytics).
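To make the decoupling point concrete, here is a minimal sketch of a producer component handing work to an OpenStack Marconi queue over its v1 REST API instead of calling a downstream service directly. The endpoint URL, queue name, token, and payload are illustrative assumptions, not part of any particular deployment.

```python
# Hypothetical illustration: a decoupled producer posts a job onto a
# Marconi (OpenStack messaging) queue; a separate consumer picks it up.
import json
import uuid
import requests

MARCONI_URL = "http://marconi.example.com:8888"   # assumed endpoint
QUEUE = "thumbnail-jobs"                          # assumed queue name
HEADERS = {
    "Client-ID": str(uuid.uuid4()),   # Marconi v1 requires a per-client UUID
    "X-Auth-Token": "TOKEN",          # Keystone token placeholder
    "Content-Type": "application/json",
}

# Ensure the queue exists (v1 uses an explicit PUT), then post a message.
requests.put("{}/v1/queues/{}".format(MARCONI_URL, QUEUE), headers=HEADERS)

payload = [{"ttl": 300, "body": {"image_id": "abc123", "size": "128x128"}}]
resp = requests.post(
    "{}/v1/queues/{}/messages".format(MARCONI_URL, QUEUE),
    headers=HEADERS,
    data=json.dumps(payload),
)
resp.raise_for_status()
```

Because the consumer only ever sees the queue, it can be scaled, replaced, or redeployed on its own CI/CD cadence without touching this producer.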
Smarter processor, memory, network, and storage scheduling
You have to be an expert today to determine the right amounts of processor, memory, storage and networking for your cloud. To achieve widespread adoption of the cloud, the OpenStack community must work toward better enumeration of resources (compute, storage, and networking) to help with smarter allocation and appropriate levels of provisioning. Today's provisioning templates (e.g., AWS CloudFormation, OpenStack Heat) require the user to specify manually what types of storage, how much memory, and how many vCPUs are needed when deploying a new virtual machine. Most system administrators do not have intimate knowledge of the optimal resources required for each specific application, so this approach may result in over-provisioning of resources, which adds unnecessary cost, or under-provisioning, which may limit application performance. Current scheduling methods also require deep knowledge of the underlying infrastructure and available hardware pools. Managing these resources is a labor-intensive, arduous task for administrators in dynamic and heterogeneous datacenter environments.
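As an illustration of how manual this is today, the fragment below uses python-novaclient (Icehouse-era calls) to boot an instance: the operator has to pick the flavor and image by hand, effectively guessing the application's vCPU and memory needs. Credentials, names, and the chosen flavor are placeholders.

```python
# A minimal sketch of today's "you must know the numbers" provisioning
# with python-novaclient; everything here is hand-specified by the admin.
from novaclient import client

nova = client.Client("2", "admin", "PASSWORD", "demo",
                     "http://keystone.example.com:5000/v2.0")

# The operator decides up front how much CPU/RAM the app needs and picks a
# flavor to match -- there is no "describe the workload" option.
flavor = nova.flavors.find(name="m1.medium")     # e.g. 2 vCPU / 4 GB, chosen by hand
image = nova.images.find(name="ubuntu-12.04")    # guessed base image
server = nova.servers.create(name="billing-app-01", image=image, flavor=flavor)
```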
The vision for Intel SDI is to allow a user to define a set of characteristics that an application requires (key value store, persistent storage, etc.). The infrastructure responds by assigning an optimal set of available resources and dynamically adjusting allocations over time to meet scale and Service Level Objectives. This offers an improvement in user experience for application developers, who will no longer have to write scripts that "hunt and peck" in search of the infrastructure profiles that run their applications optimally. Also, in the case where the desired resource is not available, the orchestration layer will understand acceptable substitutes (for example, if a physical GPU is requested, can a virtual GPU running on a CPU be substituted?). Platform as a Service (PaaS) projects like Cloud Foundry and OpenShift are providing some capabilities in this area, but the available tools are still relatively challenging to use and not widespread.
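By contrast, the SDI vision is intent-based. The snippet below is a purely hypothetical sketch of what such a request could look like; none of these keys correspond to a real OpenStack API today.

```python
# Purely hypothetical: an intent-based application request under the SDI
# vision. The scheduler, not the user, would translate this into concrete
# flavors, hosts, and storage backends, and keep adjusting over time.
app_requirements = {
    "workload": "key-value store",
    "persistence": True,               # durable storage, not ephemeral disk
    "latency_slo_ms": 5,               # 99th-percentile read latency target
    "throughput_ops_per_sec": 50000,
    "availability": "zone-redundant",
}
```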
The OpenStack community is aligned around several projects to lay the foundation for more intelligent scheduling of resources.
  • OpenStack Heat Project / TOSCA. Existing templates that specify how to create the appropriate VM resources for an application are rudimentary and require significant knowledge about the underlying hardware resources. The community is focused on strengthening these template capabilities and incorporating a more automated configuration element.
  • Enhanced Platform Awareness within OpenStack Icehouse. Agents on a Nova compute node can now provide lspci and CPUID enumeration into the Nova scheduler database for better insight into the entire system. This enumeration allows for easier creation of instance flavors and the ability to make calls for specific capabilities (a rough example of consuming such a capability follows after this list).
  • Graffiti. Intel is working jointly with HP and the community on this project. Graffiti will take enhanced platform awareness to the next level with cross-service "metadata tagging" and search aggregation for cloud resources.
  • Specialized acceleration. Intel is working with the open source community on acceleration techniques to improve performance and enhance predictability. Intel's AES-NI instructions can be used to provide higher and more predictable performance for data encryption. DPDK vSwitch, developed by Intel and Wind River, is an Open vSwitch enhancement designed to boost packet switching throughput significantly.
  • Swift / Cinder. Future efforts via these projects will address improved intelligent scheduling in object and block storage.
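As a concrete example of the Enhanced Platform Awareness bullet above, the sketch below tags a Nova flavor with extra_specs so the scheduler only lands instances on hosts exposing a particular PCI device alias. The "QuickAssist" alias name and the credentials are assumptions; the alias itself would be defined by the operator in nova.conf.

```python
# Sketch: create a flavor whose extra_specs request a passthrough device,
# letting the scheduler's PCI/capability filters do the host matching.
from novaclient import client

nova = client.Client("2", "admin", "PASSWORD", "demo",
                     "http://keystone.example.com:5000/v2.0")

flavor = nova.flavors.create(name="m1.crypto", ram=4096, vcpus=2, disk=40)
# Ask for one device matching the (hypothetical) "QuickAssist" PCI alias.
flavor.set_keys({"pci_passthrough:alias": "QuickAssist:1"})
```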
Enhanced private cloud telemetry
So once you have the right processor, memory, storage and networking for your app, nothing changes, right? Not so; what's required is a pulse on your cloud's heartbeat so resources can shift as needed. Although intelligent scheduling is effective for the initial provisioning of instances, it is important to ensure ongoing resource optimization as workloads and environments change. Enhanced telemetry can identify how variables change over time and drive decisions on appropriate allocations. Using policy-based workload placement (beyond just feature-level details), the orchestration layer makes real-time decisions about where a workload should be placed, or moved to if its resources become overcrowded. As a step on the path toward enhanced telemetry, Intel is working with the OpenStack Ceilometer project to enable access to - and interpretation of - the telemetry data exposed by the platform silicon.
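Here is a minimal sketch of what consuming that telemetry can look like, assuming a reachable Ceilometer v2 endpoint, a valid token, and an instance UUID (all placeholders): it pulls per-period CPU utilization statistics for one instance so an orchestration layer (or an administrator) can see when a VM is running hot.

```python
# Sketch: query Ceilometer's v2 REST API for cpu_util statistics over
# 10-minute windows for a single instance.
import requests

CEILOMETER_URL = "http://ceilometer.example.com:8777"   # assumed endpoint
HEADERS = {"X-Auth-Token": "TOKEN"}                     # Keystone token placeholder

params = {
    "period": 600,                 # aggregate into 10-minute buckets
    "q.field": "resource_id",
    "q.op": "eq",
    "q.value": "INSTANCE_UUID",    # placeholder instance ID
}
resp = requests.get(
    CEILOMETER_URL + "/v2/meters/cpu_util/statistics",
    headers=HEADERS,
    params=params,
)
resp.raise_for_status()
for window in resp.json():
    print(window["period_start"], window["avg"])
```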
OpenStack maturity via improved availability
Your private cloud is only as good as its availability. Think of the chaos that ensues when Gmail or AWS goes down, and now think of your enterprise's private cloud. Intel's Software Defined Availability (SDA) separates decisions about how to configure datacenter infrastructure for availability from both the application architecture and the underlying hardware and software infrastructure. The only substantive requirement to implement SDA is software-configurable lower layers of infrastructure. SDA depends on software-defined layers that are implemented using standard or easily accessible application programming interfaces (APIs). OpenStack's standard APIs to the Nova compute engine, the Swift storage system, and others make this framework an ideal fit for an SDA implementation.
In its current state, OpenStack still has a number of shortcomings in the area of availability that may deter some enterprise customers from moving to the private cloud. For example, OpenStack currently has some limitations in VM live migration - a capability commonly used in enterprise environments to move VM instances without interruption when maintenance on underlying hardware is required. To improve availability for OpenStack clouds, Intel and other leading infrastructure providers are applying their enterprise expertise and learnings from direct interaction with hyperscale and cloud service datacenters.
  • Infrastructure Services: It is critical to have high availability at the infrastructure services layer to meet the SLAs enterprise customers expect. Infrastructure providers are working with projects like the Pacemaker cluster stack, HAProxy for TCP/HTTP load balancing, Galera for MySQL, and Keystone for authentication to help make these services more enterprise-ready.
  • Application Services: Decoupled applications are built to be node fault tolerant. However, many traditional enterprise applications are not, and they require certain levels of availability at the VM layer. For example, if a compute node dies, enterprise users require the ability to spin up that instance somewhere else in a seamless way with only a short service-level interruption (a rough sketch of this recovery flow follows below). Improvements to VM image management in Glance, block storage high availability in Cinder, and networking high availability in Neutron will help provide the availability capabilities needed for enterprise customers to consider moving to private cloud.
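To ground the two availability scenarios above, here is a rough python-novaclient sketch of both recovery paths: live migration for planned host maintenance, and evacuate once a compute node has already died. Host names, the server UUID, and credentials are placeholders, and a real deployment would wrap these calls in orchestration and health checks.

```python
# Sketch of the two recovery paths discussed above (Icehouse-era calls).
from novaclient import client

nova = client.Client("2", "admin", "PASSWORD", "demo",
                     "http://keystone.example.com:5000/v2.0")

server = nova.servers.get("SERVER_UUID")   # placeholder instance ID

# Planned maintenance: move the running VM off the host without downtime
# (the live-migration capability noted above as still maturing in OpenStack).
nova.servers.live_migrate(server, host="compute-02",
                          block_migration=False, disk_over_commit=False)

# Unplanned failure: the source host is down, so rebuild the instance
# elsewhere from shared storage instead.
# nova.servers.evacuate(server, host="compute-03", on_shared_storage=True)
```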
OpenStack: one of Intel's biggest cloud bets
As you can see, Intel is serious about OpenStack for private clouds as well as public clouds. If Intel can successfully make its platforms look better from an OpenStack point of view, that puts a lot of pressure on IBM's OpenPOWER (tied to IBM POWER8 and Power Systems), ARM, and Oracle's SPARC systems. Hardware is one thing, but software is just as important, and Intel knows this.
You can find Patrick Moorhead, President & Principal Analyst of Moor Insights & Strategy on the web, Twitter, LinkedIn and Google+.
This column contains significant contributions from Paul Teich, Moor Insights & Strategy CTO.
Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides research, analysis, advising, and/or consulting to many high-tech companies in the mobile ecosystem, including Intel, cited in this article. No employees at the firm hold any equity positions with any companies cited in this column.
