BUSINESS

Public Cloud: Why The First Cloud Business Case Is Always Wrong

The good thing about the cloud: you pay for what you use. The bad thing about the cloud: you pay for what you use. What sounds like a contradiction can have fatal effects on the costs of a cloud migration. If you are evaluating the public cloud or already using it, the following approach can help you reduce costs.

According to Bitkom’s “Cloud Monitor 2020”, a third of all companies in Germany are already using the public cloud, and another third are planning to do so. Yet even though there have already been many cloud migration projects, I have never heard of a case where the assumptions from the original business case turned out to be correct. The reports from customers and partners cover the whole range from “surprisingly more expensive” to “surprisingly significantly cheaper”.

Public Cloud: A Question Of Profitability

Questioning the profitability of a cloud migration is nevertheless not optional. You have to take two decisive factors into account: First, you can assume that coming in below the initially estimated costs will be accepted, but operation in the cloud must not become more expensive later than originally assumed. Second, it is essential to note that the lowest cost of ownership for an application cannot be determined until the system has already been migrated to the cloud.

Be Careful When Comparing

The first point hardly needs discussion; a worst-case analysis is sufficient for most cases. The second point is the interesting one. The most common method for determining operating costs in the cloud almost always leads to incorrect results: the so-called “like for like” comparison of the existing environment with a future cloud environment. In this comparison, servers, CPU, memory and storage are quantified, and the results are entered into the public price calculators of the cloud providers. The result is almost always the same: the cloud is more expensive than the current environment.

According to the Cloud Monitor, 91 per cent of all companies surveyed have found that their costs have fallen or have at least remained the same. Market studies by analysts see the potential for savings somewhere between 56 per cent (IDC study on AWS) and – due to exceptionally high savings in Windows systems – 70 per cent (Forrester study on Microsoft Azure). So why does your own “like for like” comparison deviate from this?

The problem: When a system was configured in the past, the manufacturer’s recommendations were followed. Better a bit more CPU power and a little more RAM. This is hardly noticeable in your own virtualized environment. In the cloud, however, you pay for everything you provision, even if you don’t need it. A virtual machine with 16 processors running at half utilization costs twice as much as a machine with eight fully utilized processors, even though the application’s actual power requirements are identical.
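The oversizing effect can be made concrete with a little arithmetic. This is a minimal sketch; the flat per-vCPU price is a hypothetical illustration value, not a real provider rate:

```python
# Hypothetical flat price per vCPU-hour (illustration only, not a real rate).
HOURLY_PRICE_PER_VCPU = 0.05

def monthly_vm_cost(vcpus: int, hours: int = 730) -> float:
    """Cost of a VM billed by provisioned vCPUs, regardless of utilization."""
    return vcpus * HOURLY_PRICE_PER_VCPU * hours

# 16 vCPUs at ~50% utilization do the same work as 8 fully utilized vCPUs,
# but the provider bills the provisioned size, not the work done.
oversized = monthly_vm_cost(16)
rightsized = monthly_vm_cost(8)

print(f"Oversized:  {oversized:.2f}")   # twice the cost
print(f"Rightsized: {rightsized:.2f}")  # same application performance
```

In an on-premises virtualized environment the hypervisor can reclaim the idle headroom, so the oversizing is invisible; in the cloud it shows up directly on the bill.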

Cost Reduction In The Cloud

But shouldn’t the massive economies of scale of the hyperscalers lead to significantly lower prices? Absolutely. However, you don’t just buy a virtual machine; you buy a service to operate a VM, including all service and support costs. In addition, the hyperscalers have to keep a lot of additional capacity available for the short-term needs of their customers, which is also included in the mixed calculation.

This is precisely where the secret of cost reduction in the cloud lies: you should have as few unused resources (so-called waste) in your cloud usage as possible. In addition, customers who help the hyperscalers plan their resource usage can benefit financially. If you also know how to use the differences and the competition between the hyperscalers, you will always be able to undercut your original business case.

Don’t Neglect “Waste Management”

The first point, preventing “waste” through so-called “waste management”, is achieved by continuously checking the resources used. The CPUs and configurations of the hyperscalers are often more powerful than the previous hardware. This means that, depending on the application, the same performance can be obtained from less virtual hardware. This affects the cost of the virtual machine (VM) and of the licenses used, which are often also billed per virtual core.
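A continuous waste-management check can start very simply: flag every VM whose observed peak utilization suggests a smaller size would suffice. The VM names, sizes and the 40 per cent threshold below are illustrative assumptions, not recommendations:

```python
# Sketch of a simple "waste management" check over a VM inventory.
# Names, sizes and the 40% threshold are hypothetical illustration values.

def rightsizing_candidates(vms, cpu_threshold=0.4):
    """Return VMs whose peak CPU stayed below the threshold - candidates
    for a smaller size, which also reduces per-core license costs."""
    return [vm["name"] for vm in vms if vm["peak_cpu"] < cpu_threshold]

inventory = [
    {"name": "erp-app-01", "vcpus": 16, "peak_cpu": 0.35},
    {"name": "db-prod-01", "vcpus": 8,  "peak_cpu": 0.85},
    {"name": "web-02",     "vcpus": 4,  "peak_cpu": 0.20},
]

print(rightsizing_candidates(inventory))  # ['erp-app-01', 'web-02']
```

In practice the utilization figures would come from the provider’s monitoring service, and the check would run as a recurring process rather than a one-off script.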

It must also be checked whether applications with a low base load but high load peaks can be operated on cheaper burstable machines. The use of so-called spot instances – remaining capacity that is sold on a marketplace principle – can also significantly reduce costs for certain types of applications. The best way to save in the public cloud is, of course, to switch off workloads. If this is not possible due to 24×7 requirements, you can announce permanent use to the hyperscaler via so-called “reservations” and will be rewarded for the better planning with up to an 80 per cent discount on the VM.
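The pricing levers above can be compared with a rough model. The hourly rate and the reservation discount are assumptions for illustration only; the “up to 80 per cent” figure depends on term, provider and VM family:

```python
# Rough comparison of the pricing levers: on-demand 24x7, a reservation,
# and simply switching the workload off outside office hours.
# The hourly rate and discount are hypothetical illustration values.

ON_DEMAND_RATE = 0.50   # assumed price per hour for the VM
HOURS_PER_MONTH = 730

def monthly_cost(hours_running=HOURS_PER_MONTH, discount=0.0):
    """Monthly cost for the given running hours and pricing discount."""
    return hours_running * ON_DEMAND_RATE * (1 - discount)

always_on   = monthly_cost()                       # 24x7 on-demand
reserved    = monthly_cost(discount=0.80)          # reservation, best case
office_only = monthly_cost(hours_running=12 * 22)  # off nights and weekends

print(f"On-demand 24x7: {always_on:.2f}")
print(f"Reserved:       {reserved:.2f}")
print(f"Off-hours off:  {office_only:.2f}")
```

The model makes the trade-off visible: a reservation only pays off for genuinely permanent workloads, while anything that can tolerate downtime is often cheapest when it simply isn’t running.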

When comparing the hyperscalers, there can be clear differences: VM is not the same as VM. For example, with some providers local storage is included, while with others it must be paid for additionally. The SLAs of the VMs can also differ and, depending on the provider, a second VM must be operated and paid for to guarantee minimum availability.

Calculating The Public Cloud Can Quickly Become Complex

Last but not least, there are differences in the licenses for operating systems and databases: although significant savings on license costs can be made with all hyperscalers, the differences between them are substantial. With some providers, for example, vendor-specific licenses can be used simultaneously in your data centre and in the cloud at no additional cost.

Calculating a correct cloud business case can quickly become complex. However, the various hyperscalers offer so-called cloud adoption frameworks that bring together experience from countless customer projects and make the aspects of people, technology and processes comprehensively tangible in an iterative process model. Anyone who approaches the cloud business case just as iteratively and establishes it as an operational process will be among those who can leverage the savings potential of the cloud.

It has to be recognized that the discipline of analysing the economic efficiency of cloud usage is still new for many companies. In the meantime, however, an active community and a de facto standard for this topic have been established under the term “FinOps” (Financials + Operations, modelled on DevOps). The experiences and the book of the FinOps Foundation are well worth reading and are recommended as an additional source of information on this topic.
