
In the beginning, there were mainframes: big, expensive computers that could only do one thing at a time. The personal computing revolution gave us each a mainframe on our desktop. The development of servers and networks allowed these desktop computers to offload some of their work, and then virtualisation allowed all these bits of hardware to work much more efficiently together. The way had been cleared for the cloud as we know it today.

Cloud allows for the efficient use of distributed hardware and, crucially, does so on-demand. If you need an hour of Google server time, you pay for that hour. Cloud is convenient, doesn’t require you to own physical hardware, and allows for flexibility and agility. 

Hyperscale cloud — Amazon Web Services (AWS) calls it elastic compute — is the next step in cloud computing. Hyperscale allows resources (memory, networking, storage and compute) to be efficiently and rapidly added or removed from a pool that an application can draw from. In this context, scale is effectively limitless. Applications reach their functional limits long before they run out of resources.

For applications with the right adaptations, hyperscale cloud is a great environment to inhabit. They can grow or shrink instantly, on-demand. They can augment their capacity in real-time, self-diagnose and self-heal. They effectively have access to any resources that they require, as and when they require them.

Consider a large online retailer. Demand for its services ebbs and flows, rising at month-end and over the festive season. If its retail platform is designed and built for hyperscale cloud, its capacity ebbs and flows alongside that demand, so the platform matches its resources to its requirements and achieves efficiencies that were previously impossible.
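To make the retailer scenario concrete, here is a minimal sketch of the kind of target-tracking scaling rule a hyperscale platform applies automatically: capacity grows as month-end traffic builds and shrinks again once it passes. The figures, thresholds and demand curve below are invented for illustration, and in practice a managed service such as AWS Auto Scaling performs this calculation for you.

```python
# Illustrative sketch only: all numbers and the demand curve are invented.
# Real hyperscale platforms offer this as a managed, automatic service.
import math

TARGET_UTILISATION = 0.60        # aim to keep each instance about 60% busy
CAPACITY_PER_INSTANCE = 1_000    # requests per minute one instance can serve
MIN_INSTANCES, MAX_INSTANCES = 2, 40

def desired_instances(requests_per_minute: int) -> int:
    """Return the fleet size that keeps average utilisation near the target."""
    needed = requests_per_minute / (CAPACITY_PER_INSTANCE * TARGET_UTILISATION)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, math.ceil(needed)))

# Hypothetical month-end traffic spike for the retailer described above.
for load in (1_200, 1_500, 4_800, 9_600, 14_000, 7_200, 1_800):
    print(f"{load:>6} requests/min -> {desired_instances(load)} instances")
```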

But hyperscale is not for everyone. It can be a complex, demanding place if you’re not prepared. Traditional virtual or hosted cloud platforms are relatively simple and easily managed. Hyperscale, because it is new and powerful, is an incredibly fertile environment, with thousands of new services launched each year. Staying abreast of these ever-changing capabilities requires dedicated technical functions, and resourcing the necessary skillsets can be a challenge.

In addition, some legacy applications are still at home on the ground, and simply don’t work well when they’re decoupled from physical infrastructure. There are options to bring them up to speed — the so-called six Rs (retire, retain, replatform, rehost, repurchase and refactor) — but this conversation must take place in the context of your organisation’s objectives.

Hyperscale is necessary to run Google’s search platform. It is not necessarily the right choice for a small company looking for the best way to grow its business. Different applications require different environments to thrive. And hybrid approaches, which take the best of what each environment has to offer, are increasingly the models clients adopt.

Maintaining a competitive edge depends more and more on the ability to pull business functions together in an interconnected ecosystem. The good news is that managing these hybrid environments is getting easier thanks to the proliferation of new tools. Many AppDev houses are building orchestration engines, and management and reporting tools, so they can give customers visibility and control over their applications, no matter the environment or combination of environments in which they operate.

The upshot is that organisations looking to transform digitally are spoilt for choice. Regardless of which cloud environment you have, iOCO can ensure that accessing, managing, and provisioning services is simple. That means businesses have the luxury of making the conversation about organisational strategy, rather than about an IT problem that needs solving.

For more information visit the iOCO website.

This article was paid for by iOCO.
