A Journey from the Computer to the Software-Defined Data Center

The modern SDDC took shape gradually over time. The first mechanical computers, invented in the early 19th century, bore little resemblance to the computers of today in either appearance or performance. First-generation electronic computers, built from vacuum tubes, occupied entire rooms, were very expensive, and demanded a great deal of power, space and maintenance. Businesses would rent time on a mainframe to run specific functions.

In the 1980s, microcomputers were introduced, shrinking the footprint that a large machine room had required. IBM popularised them at an affordable price under the name PC (personal computer). Over time, information technology operations grew faster, larger and more complex to manage and control. Servers evolved from microcomputers to address the needs of growing businesses, and computer rooms turned into server rooms (now popularly known as datacenters). To accommodate the increasing number of servers, businesses began investing in ever larger datacenters.

In the course of time, IT professionals could not escape the fact that server hardware was not being used to its full potential, because each server typically ran just one application. Virtualization was introduced to solve this hardware-utilization problem.

IBM had already offered virtualization on its mainframes, but it was neither flexible nor economical and had limitations that could not meet next-generation needs. x86 virtualization was invented to address this challenge and improve hardware utilization. It introduced hypervisors that took complete control of CPUs, memory, disks, file storage and networking; these resources were then offered to users and customers in the form of agile, scalable, consolidated virtual machines.
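To make the idea concrete, here is a minimal sketch, assuming a Linux host running the open-source KVM/QEMU hypervisor with the libvirt Python bindings installed (an illustrative choice, not any specific vendor's stack). It simply asks the hypervisor for the physical resources it controls and the virtual machines carved out of them:

    import libvirt  # open-source virtualization API; pip install libvirt-python

    # Connect read-only to the local QEMU/KVM hypervisor (assumed to be present).
    conn = libvirt.openReadOnly("qemu:///system")
    if conn is None:
        raise RuntimeError("could not connect to the hypervisor")

    # The hypervisor owns the physical resources of the host...
    model, mem_mb, cpus, mhz, *_ = conn.getInfo()
    print(f"Host: {cpus} x {model} CPUs @ {mhz} MHz, {mem_mb} MB RAM")

    # ...and carves them into virtual machines that it fully controls.
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        print(f"VM {dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MB RAM")

    conn.close()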

x86 virtualization brought greater control over growing IT needs and injected agility into datacenters. With virtualization, customers were able to reduce their investment in compute hardware as well as the space, power and cooling costs of the datacenter.

The IT industry addressed the problem of growing hardware needs through software-defined compute; the focus then shifted to siloed storage and the IT control around it. Solutions were built around storage consolidation, for which the SAN and NAS storage models were introduced. Technology has since moved on, and we now have unified storage models.

The three-tier architecture (compute, network and storage tiers) presents many challenges in terms of manageability, visibility and IT control. It is built from multiple hardware products, multiple software vendors and multiple owners. When a technical issue arises, no single vendor owns it; the customer has to raise tickets with several vendors to get it resolved. This wastes a great deal of the IT team's time and effort on troubleshooting.

To address these challenges, converged and hyperconverged infrastructure solutions were introduced to the market. Converged infrastructure has no small deployment option and is expensive, so only enterprise customers could afford it. Hyperconverged infrastructure (HCI) solved this problem and made the model accessible even to SMB customers.

Hyperconverged infrastructure was initially offered as an appliance. As market dynamics changed, IT professionals realised that HCI is fundamentally software-defined compute and storage: it can be deployed not only as an appliance but also on top of existing x86 hardware. HCI eliminates the need for expensive SAN hardware, simplifies IT operations and improves manageability, letting IT teams focus on innovation in their core work instead of troubleshooting.

Let us address a few of the most common questions about HCI:

Does HCI alone make a software-defined datacenter (SDDC)?

No. We are still missing one of the core elements of the datacenter: the network. We have to add software-defined networking as well to build a complete SDDC. When software-defined compute, storage and networking are combined with a strong monitoring and orchestration layer, they form a complete private cloud, the SDDC.

Why build an SDDC instead of a traditional private cloud?

Have you ever seen Google, Amazon, Netflix, Facebook and the like go down?

No, right? That is because they are built on web-scale IT architecture that is defined by software and can run on any commodity hardware. EnCloudEn offers the same experience to its customers, and is the one and only Indian company to provide such an experience at a global scale to many of our customers.

Web-scale IT is a methodology for designing, deploying and managing infrastructure at any scale. It can be packaged in a number of ways to suit diverse requirements and can grow to any size of business or enterprise. It is not a single technology implementation, but rather a set of capabilities of the overall IT system.

EnCloudEn considers the following properties to define web-scale infrastructure:

  • Hyper-converged Infrastructure on x86 servers – integrated compute, storage and networking.
  • Distributed everything – cluster-wide data and services.
  • Self-healing system – fault isolation with distributed recovery.
  • API-driven automation and rich live analytics (a brief sketch follows this list).
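
To illustrate the "API-driven automation" property above, here is a minimal sketch of provisioning a virtual machine declaratively over a REST API. The endpoint, token and request fields are hypothetical placeholders chosen for illustration, not EnCloudEn's actual API; real HCI platforms expose similar but product-specific interfaces.

    import requests  # common HTTP client library: pip install requests

    # Hypothetical HCI management endpoint and token; placeholders only.
    API = "https://hci-manager.example.local/api/v1"
    HEADERS = {"Authorization": "Bearer REPLACE_ME"}

    # Declarative VM spec: the platform decides which node runs the VM, where
    # the storage replicas are placed in the cluster and how the network is wired.
    vm_spec = {
        "name": "web-01",
        "vcpus": 4,
        "memory_mb": 8192,
        "disks": [{"size_gb": 100, "replication_factor": 2}],
        "networks": ["vlan-120"],
    }

    resp = requests.post(f"{API}/vms", json=vm_spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print("Provisioned:", resp.json())

Because provisioning is just an API call, it can be scripted, version-controlled and repeated across the cluster, which is what makes automation at web scale practical.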

In the journey of computing, we have reached a great milestone, and we remain curious and driven to keep inventing and innovating. That is what motivates us at EnCloudEn to bring the best products to market at an affordable price and to provide innovative solutions for our customers' IT infrastructure needs.

-Blog written by Siva Prasad Kotipalli