If “cloud” means internet-based computing, in which large groups of remote servers are networked to allow the centralisation of data storage and online access to computer services or resources, then the software defined data centre (SDDC) is the method through which cloud services are delivered most efficiently.
The cloud is founded on the concepts of converged infrastructure and shared services. Cloud resources are dynamically reallocated on demand. Cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance.
“Moving to the cloud” refers to moving from the traditional CAPEX (capital expenditure) model, in which dedicated hardware is procured and depreciated over a period of time, to the OPEX (operating expenditure) model, in which cloud infrastructure is used and paid for as it is consumed.
The availability of high-capacity networks, low-cost computers and storage devices, together with the widespread adoption of hardware virtualisation, service-oriented architecture, and autonomic and utility computing, has led to the growth of cloud computing.
Physical servers were under-utilised and often ran only a single workload. Virtualisation allows numerous virtual machines to be hosted on one physical server, with a consequent reduction in costs. Virtualisation is essentially the masking of server resources, including the number and identity of individual physical servers, processors and operating systems, from the server users. Software allows one physical server to be divided into multiple isolated virtual environments.
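As a small illustration of that division and masking, the sketch below (assuming the libvirt Python bindings and a local QEMU/KVM hypervisor, which may differ from any given environment) lists the virtual machines sharing one physical server and the vCPU and memory slice each has been given.

```python
# Illustrative only: enumerate the virtual machines sharing one physical host
# via the libvirt Python bindings (assumes a local QEMU/KVM hypervisor).
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the hypervisor
try:
    for dom in conn.listAllDomains():        # every VM defined on this host
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MiB RAM")
finally:
    conn.close()
```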
SDDC extends virtualisation concepts to all of the data centre’s resources through process automation and the pooling of resources on demand, as a service. Infrastructure is virtualised and delivered as a service, ensuring that applications and services meet their capacity, availability and response-time requirements.
Operator-facing APIs (application programming interfaces) allow the automation of tasks that were previously manual. Infrastructure configuration is managed by software, allowing legacy applications to be supported continuously while new cloud services are adopted.
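As a hedged example of such an operator-facing API, the sketch below uses the AWS boto3 SDK to provision a server with a single call rather than a manual build; the region, AMI ID and instance type are placeholders, not recommendations.

```python
# Illustrative only: provisioning compute through an operator-facing API.
# Assumes the boto3 SDK and valid AWS credentials; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
response = ec2.run_instances(
    ImageId="ami-00000000",          # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "provisioned-by", "Value": "automation"}],
    }],
)
print("launched", response["Instances"][0]["InstanceId"])
```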
Put simply, SDDC means the ability to aggregate and pool infrastructure resources through automated tools, so that storage, security, networking and compute can be allocated very quickly and returned to the pool once the work is complete (a conceptual sketch of this allocate-and-release cycle follows the list below).
- Network: resources are combined by splitting the available bandwidth into channels independent from one another and each assigned to a particular server or device in real time. This enables network hardware consolidation, bandwidth resource pooling, and redundancy
- Storage: physical storage is pooled from multiple devices but appears as a single storage device and is managed centrally
- Compute: server resources are masked from server users while resource sharing and utilisation increase, providing resource automation, high availability and mobility
- Security: protection is based on logical policies that are not tied to any server or specialised security device. Adaptive, virtualised security is achieved by abstracting and pooling security resources across boundaries, independent of where the protected asset might be currently residing and making no assumptions that the asset will remain in that location
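The sketch below is purely conceptual: a hypothetical pool client (the SDDCPool class and its methods are invented for illustration, not a real product API) that allocates compute, storage, network and a security policy on demand and returns them to the shared pool as soon as the workload finishes.

```python
# Conceptual sketch only: the allocate-and-release cycle behind resource pooling.
# SDDCPool and its methods are hypothetical, not a real product API.
from contextlib import contextmanager

class SDDCPool:
    def allocate(self, kind, **spec):
        print(f"allocating {kind}: {spec}")
        return {"kind": kind, **spec}

    def release(self, resource):
        print(f"returning {resource['kind']} to the pool")

@contextmanager
def workload(pool):
    resources = [
        pool.allocate("compute", vcpus=4, ram_gb=16),
        pool.allocate("storage", size_gb=500, tier="ssd"),
        pool.allocate("network", bandwidth_mbps=1000),
        pool.allocate("security", policy="web-tier-only"),
    ]
    try:
        yield resources
    finally:
        for r in resources:                  # everything goes straight back
            pool.release(r)

with workload(SDDCPool()) as res:
    print("running application on", len(res), "pooled resources")
```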
Marc Andreessen, the co-founder of Netscape, wrote in the Wall Street Journal: “six decades into the computer revolution, four decades since the invention of the microprocessor and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale”. Thus, when looking at the future of data centres powering the cloud, you have to conclude that software is taking over.
A true SDDC is hardware-agnostic in terms of workload management. Each piece can be managed independently of the others, enabling administrators to deploy what’s needed to the right place, on the fly.
Benefits:
- Agile: data centre resources can be reconfigured in minutes
- Fast application provisioning
- Flexible: run new and existing applications across multiple platforms and clouds
- Simple: no need for specialised hardware or languages
- Improved efficiency: virtualisation extended throughout the data centre
- Improved application availability through software-based control
- Security through policy-based governance
- Reduced energy usage, as hardware is used more efficiently
- Hardware independence
- Scalability with built-in redundancy
- Smoother end-user experiences
- No need to rip and replace existing infrastructure
- Ongoing improvements delivered through software updates
Data centres of the future are expected to be software defined, where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will be able to provision terabytes of storage. A web application developer will be able to set up complex load-balancing rules without ever logging into a router. IT organisations will start automating the most mundane of tasks.
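As a hedged illustration of that API-first world, the sketch below creates a terabyte-scale block volume and a path-based load-balancing rule entirely through the AWS boto3 SDK; the region and ARNs are placeholders for whatever real resources exist.

```python
# Illustrative only: storage and load balancing provisioned via APIs,
# with no disk array or router login. Assumes boto3; ARNs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
volume = ec2.create_volume(AvailabilityZone="eu-west-1a", Size=1024, VolumeType="gp3")
print("provisioned 1 TiB volume", volume["VolumeId"])

elbv2 = boto3.client("elbv2", region_name="eu-west-1")
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",       # placeholder listener
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:..."}],  # placeholder
)
```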
Web-scale IT
Infrastructure is becoming increasingly diverse, with commodity hardware, open-source software and home-grown provisioning and management tools making it difficult to manage at scale. Many steps are still performed manually and are inefficient and error-prone.
“Large cloud service providers such as Amazon, Google, Facebook, etc., are reinventing the way in which IT services can be delivered,” said Cameron Haight, Research Vice President at research firm Gartner. “Their capabilities go beyond scale in terms of sheer size to also include scale as it pertains to speed and agility. If enterprises want to keep pace, then they need to emulate the architectures, processes and practices of these exemplary cloud providers.”
The term “web-scale” was introduced by Gartner to describe the new processes, architectures and policies introduced by the internet giants to achieve agility and scalability. Gartner named web-scale IT as one of its top ten strategic technology trends for 2014 and 2015, and predicts that 50% of enterprises will have adopted web-scale IT as an architectural approach by 2017.
Web-scale describes the tendency of modern architectures to grow at far greater-than-linear rates. Web-scale systems are able to handle rapid growth efficiently without bottlenecks that require re-architecting at critical moments. The technology allows companies to scale to massive compute environments.
Getting web-scale IT right means moving to the next level of infrastructure automation, the level that understands the requirements of applications and responds to those requirements in real time – a software defined environment.
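One way to picture that real-time response is a reconciliation loop. The sketch below is conceptual: get_p95_latency_ms and set_replica_count are hypothetical stand-ins for whatever monitoring and orchestration APIs are actually in use, and the thresholds are arbitrary.

```python
# Conceptual sketch: a control loop that reads an application-level metric and
# adjusts capacity. get_p95_latency_ms and set_replica_count are hypothetical.
TARGET_LATENCY_MS = 200
MIN_REPLICAS, MAX_REPLICAS = 2, 50

def reconcile(replicas, get_p95_latency_ms, set_replica_count):
    latency = get_p95_latency_ms()
    if latency > TARGET_LATENCY_MS and replicas < MAX_REPLICAS:
        replicas += 1                        # scale out while the app is too slow
    elif latency < TARGET_LATENCY_MS / 2 and replicas > MIN_REPLICAS:
        replicas -= 1                        # scale back in when there is headroom
    set_replica_count(replicas)
    return replicas
```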
Web-scale IT is more than just a buzzword; it is the way data centres and software architectures are designed to incorporate multi-dimensional concerns such as scalability, consistency, fault tolerance and versioning.
Benefits
- Speed
- Agility
- No specialist machines dedicated to a single task
- Provision of elastic services
- Highly scalable distributed systems
- Expandable, continuing to function normally as one unit rather than relying on multiple deployments of functional units that are not scalable
- Provide programmatic interfaces to allow complete control and automation
- Everything is controlled by software running on standard x86 hardware
- No single point of failure
- No bottlenecks
- Meet service level agreements (SLAs)
- Expect and tolerate failures while upholding promised performance
- Web-oriented architectures allow developers to build very flexible and resilient systems that recover from failure more quickly (a minimal retry sketch follows this list)
- Tolerance of failures is key to a stable, scalable distributed system and the ability to function in the presence of failures is crucial for availability
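As one small, hedged example of the failure tolerance described above, the sketch below retries a remote call with exponential backoff and jitter so that a transient fault does not cascade into an outage; the fetch callable stands in for any real request.

```python
# Minimal retry-with-backoff sketch; fetch is any callable that may fail
# transiently (e.g. a network request). Purely illustrative.
import random
import time

def call_with_retries(fetch, attempts=5, base_delay=0.2):
    for attempt in range(attempts):
        try:
            return fetch()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise                        # give up after the final attempt
            # exponential backoff plus jitter avoids synchronised retry storms
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```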