
The problem with public cloud spend

If, like most, you feel you have no control over your cloud usage, with bills rising continuously and no clear idea why, read on.

Having spent the best part of the last 15 years working in cloud consulting, I like to think I know a little about public cloud and its challenges.

One thing that comes up time and time again in conversations with those we have helped move to the cloud is a lack of control over usage, generally in response to continuously rising bills with no clear indication why. The real risk is that some of the major promises of moving to the cloud (lower overall cost, pay for what you use, scale on demand) prove elusive, and your migration programme leaves you in the embarrassing position of spending more than you ever did on-prem.

If cloud costs remain unchecked, they raise eyebrows and kill the cost-saving rationale for moving to the cloud. So much so, in fact, that a fledgling but growing trend has arisen, 'cloud exit', started by the SaaS business 37signals, makers of Basecamp. This is the process of moving back out of the cloud into your own data centres. In a direct rebuttal to the public cloud companies' claims, 37signals say this move will save them $7 million over five years!

Whilst I am not keen to drink the Kool-Aid and join the cloud exit party just yet, it should be something the public cloud companies keep half an eye on. For all the marketing brilliance that has created the groundswell behind AWS, Microsoft Azure and GCP, it remains true that there are plenty of use cases that do not see the benefits described. As such, helping companies control cost should be a major focus, however counterintuitive that may sound when the excess consumption is making them fat profits.

The question, then, is why these cloud transformation programmes are not reaping the benefits promised to boardrooms across the globe.

I have a couple of observations that may differ from the normal assessment of the problem. I believe the majority of the issue boils down to three key areas:


Skeuomorphism

A skeuomorph, in digital design terms, is something that apes its real-world predecessor in a digital format (think of a calendar icon, or an analogue-style watch face on a digital watch). The reason for doing this is to give users instant familiarity. And so it sometimes goes with cloud technology architectures.

Whilst there are plenty of examples of businesses refactoring their applications to make them 'cloud native', using true cloud concepts such as functions (think AWS Lambda), or going half-native with containerisation, there are plenty more that are, understandably, not ready to do this.

In addition, there is a group of very large IT businesses that have failed to recognise the difference between (and the skills required for) designing and managing physical data centres and architecting and managing cloud environments. Unfortunately, these happen to be the go-to suppliers for the majority of medium and large enterprises across the planet.

As such, what you often end up with is a skeuomorph of your data centre in the cloud: huge amounts of data storage, a whole bunch of virtual (or even dedicated physical) machines running applications and databases, and the occasional external physical firewall or switch at the edge, just to add complexity.

How does this impact cost? Most of it is set up and left on, end of. Sure, you might reserve instances or use savings plans, but this is often a one-time gesture, and the cost of your constantly growing environment spirals out of control over time.
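To make the "one-time gesture" point concrete, here is a small illustrative calculation (the instance count, growth rate and hourly rate are all assumptions for the sake of the example, not real pricing): a reservation bought once covers a shrinking share of a fleet that keeps growing, so the on-demand portion of the bill climbs anyway.

```python
# Hypothetical illustration: a one-off reservation erodes as the fleet grows.
# All numbers (fleet size, 5% monthly growth, $0.10/hour) are assumptions.

def on_demand_cost(fleet_size, reserved, hourly_rate=0.10, hours=730):
    """Monthly on-demand spend for the instances not covered by reservations."""
    uncovered = max(fleet_size - reserved, 0)
    return uncovered * hourly_rate * hours

# Reserve 100 instances on day one, then let the fleet grow 5% a month.
reserved = 100
fleet = 100
for month in range(1, 13):
    fleet = round(fleet * 1.05)

print(f"After a year the fleet is {fleet} instances; "
      f"${on_demand_cost(fleet, reserved):,.0f}/month is back at on-demand rates.")
```

The point is not the exact figures but the shape: without a recurring review, coverage decays silently while the bill compounds.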

IT Departments

Related to the above, we have to look at IT departments and IT departmental thinking as a whole. There are several issues I see here.

Firstly, through no fault of their own, the people in these teams are often drawn from the same talent pool as the large IT players, so you tend to get similar non-cloud-native thinking and architectural design from them.

Secondly, the habitual use of a small set of suppliers, and a heavy focus on who can provide the lowest possible % margin on 'resell' of product, causes these teams to contract their cloud spend the way they always have: finding the supplier that will give them the best deal or pass back the most margin from the public cloud players. In turn, the supplier is incentivised to provide the minimum viable service, which leads to little or no focus on cost control or governance; the ideal client is a set-and-forget billing arrangement that keeps growing month on month.

Lastly, and maybe most importantly, procuring and managing your entire IT estate through IT is generally a pretty bad idea. I realise that having different teams procure it across the business is not a perfect-world scenario either; however, the issue is one of understanding and visibility (otherwise called governance).

The normal thinking is that a) it is technical and b) it requires oversight, so therefore IT should own it. However, what that does is allow teams across the business to piggyback on the new arrangements, and development teams (sitting in other departments, in digital, or even at third parties) to build out whatever they wish, leaving IT with no idea what is needed and what is not.

Which brings me to my last observation.


Developers

Forgive the generalisation, but in my experience developers like to build the absolute coolest thing they can, are not generally responsible for cost management, and are always busy.

This results in over-provisioned resources across environments, multiple non-production environments, and quite often the odd replica database or huge backup left running. Clean-ups are rare, and the IT department has no idea whether those resources are redundant or serving a production system.
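The triage that never happens can be sketched in a few lines. This is vendor-neutral, illustrative logic only (the resource names, tags and thresholds are invented for the example): flag resources that look redundant because they are non-production, or untagged, and have not been touched recently.

```python
# A minimal sketch of idle-resource triage. Not any vendor's API;
# names, tags and the 30-day threshold are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Resource:
    name: str
    env: str             # "prod", "staging", "dev", or "" when untagged
    last_used: datetime  # last activity seen in metrics or logs

def cleanup_candidates(resources, now, idle_days=30):
    """Return resources that are non-production and idle for idle_days or more."""
    cutoff = now - timedelta(days=idle_days)
    return [r for r in resources if r.env != "prod" and r.last_used < cutoff]

now = datetime(2024, 6, 1)
estate = [
    Resource("orders-db", "prod", now),                           # keep: production
    Resource("orders-db-replica", "", now - timedelta(days=90)),  # flag: untagged, idle
    Resource("load-test-env", "dev", now - timedelta(days=45)),   # flag: dev, idle
    Resource("staging-api", "staging", now - timedelta(days=2)),  # keep: recently used
]
flagged = cleanup_candidates(estate, now=now)
```

Even this crude rule surfaces the forgotten replica and the abandoned load-test environment while leaving production and anything recently used alone; the hard part in practice is getting the tagging and usage data in the first place.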

What is the answer?

I am not in the cloud-exit camp just yet; I still believe cloud is the place to be for the vast majority of use cases, and that benefits can be derived even when running a legacy architecture. Whilst cost is the consideration we are looking at here, there are other benefits besides.

The current approach from the FinOps providers and consultancies is to push decisions and budgets into the development teams. In my experience this is not so easy in practice, and often leads to yet more centralisation of control, not less. They also tend to focus on maximising the use of reservations and savings plans as the key cost-saving lever. Whilst effective to a degree, this does not address all of the challenges outlined here.

We created our product 'Turn It Off' to limit the CO2e output of businesses using cloud computing; however, we quickly found that the other superpower of the technology is that it can save businesses from large unwanted bills by addressing two of the three challenges described above in a different way, one which centres cloud spend around individual applications and resources.

Whilst it does not affect architectural decisions, it can give developers greater control of costs by letting them identify which resources are essential production and which are non-production, then doing the hard work for them: turning resources on and off as required using schedules, metrics and even usage. If the use of Turn It Off is mandated as part of your organisational stack, you get built-in cost savings AND a reduced carbon footprint at the same time, helping you stay in the cloud and get a little closer to the nirvana we were all sold.
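The schedule-driven half of that idea is simple to sketch. To be clear, this is not the Turn It Off product's API, just illustrative logic under an assumed working-hours schedule: given a weekly window, decide whether a non-production resource should currently be running.

```python
# Illustrative schedule logic only; the 08:00-19:00 weekday window is an
# assumption, and this is not any real product's API.

from datetime import datetime

# Hypothetical schedule: dev resources run 08:00-19:00, Monday to Friday.
SCHEDULE = {"days": {0, 1, 2, 3, 4}, "start_hour": 8, "end_hour": 19}

def should_be_on(when: datetime, schedule=SCHEDULE) -> bool:
    """True if the schedule says the resource should be running at `when`."""
    return (when.weekday() in schedule["days"]
            and schedule["start_hour"] <= when.hour < schedule["end_hour"])
```

On this schedule a dev box runs 55 of the week's 168 hours, roughly a two-thirds cut in running time (and the associated spend and emissions) before anyone has made an architectural decision at all.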

You can learn more about Turn It Off and our story here.