What Is Your Technology Really Costing You? Why Even Your CFO May Not Know


Your CFO likely doesn’t know how much your technology is costing your company. Even when the annual costs of hardware, software, cloud storage, technology staff, and a host of other factors are accounted for, it’s been my experience that one big contributor to cost goes unrecognized: the Total Cost of Applications, which is made up of the Total Cost of Code and the Total Cost of Data.

Let’s unpack this sentence one piece at a time, because this is important information to know whether you’re a CTO, CIO, or even a DBA, and yes, especially if you’re a CFO.

What is the Total Cost of Applications and why does it matter?

Adding a new server, buying a new business application, or migrating to the cloud rarely, if ever, costs the stated price. Every time you implement a change to the technology stack there are additional costs in time, labor, resources, licensing, and more. Compounding these unforeseen costs is a profound lack of understanding of whether existing system resources can even handle the changes. Company growth adds to the bill as well: with today’s pricing models, more data and more transactions cost more. What’s more, if the whole ecosystem is built on an unhealthy and inefficient foundation to begin with, over time the results could be disastrous, both functionally and financially.

Under-resourcing for capacity can be financially catastrophic for businesses that can’t accommodate a surge in traffic to their websites, apps, or ecommerce platforms. But the tendency has been to oversize capacity, just in case. This added bloat has gotten so out of hand that no one is paying attention to the infrastructure, licensing, and operational costs that go along with this “safety cushion.” Yet if we could measure the health and capacity of applications and servers in near real time, we could create efficiencies while still processing the necessary business transactions.
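To show what that measurement could look like in practice, here is a minimal sketch, in Python, of the right-sizing idea: compare observed utilization against provisioned capacity and flag the headroom that is pure safety cushion. The server size, utilization samples, and buffer are hypothetical placeholders, not figures from any real environment.

```python
# Minimal sketch of the right-sizing idea above: compare near-real-time
# utilization samples to provisioned capacity and flag the headroom that
# is pure "safety cushion." All numbers are hypothetical placeholders.

provisioned_vcpus = 32
peak_utilization_samples = [0.22, 0.31, 0.28, 0.45, 0.38]  # fraction of capacity at 5-minute peaks

observed_peak = max(peak_utilization_samples)
needed_vcpus = provisioned_vcpus * observed_peak * 1.25  # keep a modest 25% buffer

print(f"Observed peak: {observed_peak:.0%} of provisioned capacity")
print(f"Workload could likely run on ~{needed_vcpus:.0f} vCPUs instead of {provisioned_vcpus}")
```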

There needs to be more attention paid to the Total Cost of Applications, meaning the Total Cost of Code and the Total Cost of Data, not just today but over three, five, and twenty years. Why?

Inefficiencies exist in every system, period. Billions of dollars are wasted every year due to a lack of attention, awareness, or expertise in identifying waste within technology platforms and applications. The issue is not one of simply cutting costs, but rather of proactively taking costs out of your applications and systems in the first place, and then changing the culture and expectations around applications before code is promoted to production. I’ve seen this play out countless times with my clients at Fortified. Unfortunately, by the time they come to us, the costs they’ve run up have been astronomical. Luckily, in every case, we have been able to help them.

Why doesn’t anyone know the total cost of their applications?

Technology leaders today don’t fully comprehend the Total Cost of Ownership of all the different technology platforms they’re migrating to or purchasing, especially when they move to a utility, or pay-for-what-you-use, model, which is how compute and software are billed in the cloud. Conventional wisdom dictates that “you have to be in the cloud,” but then there is sticker shock once you get there.

Additionally, and don’t take this the wrong way, but developers have no clue when it comes to money, CPUs, or other infrastructure resources. They’re too busy with the hundreds of features they have to develop and get out to production within a given month. Like a hamster on a wheel that doesn’t realize it isn’t going anywhere, they’re not thinking about how unscalable the current model is, or that they will never get a handle on their workload in a way that lets them become more efficient and save costs. It’s not their job to think about efficiency.

At the same time, infrastructure teams do not know how to code, so there is a gap between developers and infrastructure teams when it comes to truly understanding technology needs, and that gap contributes to rising costs.

Enter FinOps, a combination of Finance and DevOps, to save the day. FinOps is an evolving discipline and practice that calls for better collaboration between engineering, finance, technology, and the business when it comes to technology spending. Its raison d’être is to bring more transparency to the true costs of technology so that we can finally begin to rein them in. While a giant step forward, FinOps remains mostly a nice theory that has yet to be fully embraced or articulated. It does not consider all the different cost drivers, such as bad code or tech debt, and the current tools and services in the market do not fully leverage the operational data that would allow FinOps teams to identify the root causes of increased costs. Instead, they stop at the resource level (servers and devices) and do not go deeper into the code level.

So FinOps as a solution is not scalable unless we start to better instrument the work we do before we put things into production, in order to understand what the actual cost will be. Hence we need the Total Cost of Code, which is one of the key building blocks. If we understand the cost of each piece of code, as well as the cost of data, we can add those costs up and predict the Total Cost of Applications going forward.
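To make that roll-up concrete, here is a minimal sketch, in Python, of how per-statement code costs and data costs might be added up into a multi-year Total Cost of Applications projection. The statement names, cost rates, and growth assumption are hypothetical placeholders, not real pricing from any platform.

```python
# Hypothetical roll-up of Total Cost of Code + Total Cost of Data into a
# Total Cost of Applications projection. All numbers are illustrative
# placeholders, not real pricing.

from dataclasses import dataclass


@dataclass
class Statement:
    name: str
    executions_per_month: int
    cost_per_execution: float  # e.g., metered compute cost in dollars


def total_cost_of_applications(statements, data_cost_per_month,
                               years=5, annual_growth=0.15):
    """Project the combined cost of code and data over `years`,
    assuming workload and data volume grow at `annual_growth` per year."""
    total = 0.0
    for year in range(years):
        growth = (1 + annual_growth) ** year
        code_cost = sum(s.executions_per_month * s.cost_per_execution
                        for s in statements) * 12 * growth
        data_cost = data_cost_per_month * 12 * growth
        total += code_cost + data_cost
    return total


if __name__ == "__main__":
    workload = [
        Statement("order_lookup", 2_000_000, 0.00002),
        Statement("inventory_sync", 500_000, 0.00015),
    ]
    print(f"5-year Total Cost of Applications: "
          f"${total_cost_of_applications(workload, data_cost_per_month=1_200):,.0f}")
```

The point of the sketch is the shape of the model, not the numbers: once each statement and each data store carries a cost, projecting the application’s total cost before it reaches production becomes simple arithmetic.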

How can we better understand what our technology is really costing us?

Everyone in the technology industry is trained to look back on what they’ve created and how it’s working, even how it can be improved. But no one is looking forward beyond the next shiny new object in IT. We need to look forward when it comes to costs. Let’s take a stand to consider the future charges of what we are building, especially the ones that are not so apparent at the outset, such as pay-as-you-go cloud costs. My mission is to see a day when every piece of code that’s developed carries a price tag for how much it’s going to cost before it’s even put into production.

Technology budgets are reactive because everything is changing so quickly. As more of the workload shifts to the cloud, traditional budgets are becoming meaningless, nice artifacts in the rearview mirror of an enterprise that is driving forward. By instilling this mindset now, it’s conceivable that there will be a day when we can budget for technology with a greater level of accuracy before we put anything into production, across all our different services.

Optimizing code and processes frees up resources, not just today but over the long term, as code does not run for just one day. In Fortified’s experience, virtually any system has the potential to rationalize capacity by at least 30 to 40 percent, and code optimization may provide another 40 to 80 percent in cost savings. This means the same workloads can run on smaller machines, reducing cloud and licensing costs. The value of these savings is not simply short-term; it accrues over the years the application is in use. And it’s not just a matter of the bottom line; consider what could be done with this freed-up capital to further business KPIs. While a must-know for CFOs, this should be top of mind for anyone who works in technology as well.

Think about the top processes in your mission-critical applications and ask yourself, “Can the top five statements in this application be 20% more efficient, and if so, how much money would we save over the next five years if we optimized them today?”
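As a hypothetical back-of-the-envelope answer, assuming the top five statements account for a placeholder $10,000 per month of metered compute:

```python
# Hypothetical back-of-the-envelope answer to the question above.
# The monthly spend figure is a placeholder, not a real benchmark.

top_five_monthly_spend = 10_000  # metered compute attributed to the top 5 statements ($/month)
efficiency_gain = 0.20           # the 20% improvement in the question
years = 5

annual_savings = top_five_monthly_spend * efficiency_gain * 12
print(f"Yearly savings: ${annual_savings:,.0f}")
print(f"5-year savings: ${annual_savings * years:,.0f}")  # ignores workload growth
```

Under those assumptions, a 20% improvement to just five statements is worth $24,000 a year, or $120,000 over five years, before accounting for any growth in the workload.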