Last week, I took a look at the impact that time can have on the assessment of a project’s return on investment (ROI). It’s something to keep in mind both when you’re building your own business case and, perhaps more importantly, when you’re reading discussions of ROI in the market. The same caution extends to data sources. Even if you’re comfortable that the costs and benefits have been profiled over the right time span, you may still be led astray if you’re unclear on where the underlying data came from and/or how it was calculated. Let’s take a look at some examples.
How is the data stored?
It’s no secret in the financial operations arena that we’re still far short of 100% automation. One of the benefits of systems-based processes is that we (usually) have some degree of access to data to help us figure out how we’re performing. Some applications display metrics like current invoice queues, number of exceptions, total processed, etc. right in a dashboard. For others, we might need some tech help to gather that information by querying the system logs for different transactions. Even in that more tech-intensive environment, we could get a decent idea of our throughput, productivity, and error levels. But we’re not all using these systems. Even for those that are, the level of detail captured may not be as precise as we’d want in an ideal world.
And how about for those of us who are still processing things like outgoing or incoming invoices and payments manually? We might have an idea of the number of total transactions over a particular span of time, since they’d need to be recorded somewhere in our accounting system. For measurement’s sake, though, knowing that we processed 250 incoming payments over the course of a week doesn’t provide much insight into the level of work required to handle them — and, thus, not much of a view into the labor costs incurred. So if someone comes to us to ask how much it costs us to manage a certain type of transaction, we might honestly respond that we don’t know. Or we might guess, which is a fairly common approach. But is that bad? It can be, as I’ll discuss next.
How are the metrics calculated?
When we start talking about calculating metrics, we begin running into some problems. First, let’s look at this from an individual company’s perspective. As above, they have access to some amount of information based on their systems and record-keeping practices. Say that they process incoming invoices within Accounts Payable, and that’s the first time any information about them enters the system. They’ve been received and routed by the mailroom–maybe even prepped for scanning–but they don’t show up in an operational system until AP sees them. For this company, any measurement of invoice processing time only has one possible starting point: when the invoice is received by AP. In a company that does centralized processing and document scanning within the mailroom, invoices may have an electronic record on the date of actual receipt. When that company calculates elapsed processing time, it has a choice of starting points: when the invoice was received initially, or when it made its way to AP.[1] Because these two firms have access to different information, they have the potential to calculate their metrics differently. And that assumes the data was accurate in the first place.
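To make the divergence concrete, here’s a minimal sketch with entirely made-up dates. The field names and timestamps are illustrative, not drawn from any real system; the point is only that two firms applying the same label, “processing time,” to different starting points get different numbers for the same invoice.

```python
from datetime import date

# Hypothetical timestamps for one invoice (illustrative values only).
invoice = {
    "received_mailroom": date(2024, 3, 1),   # scanned at central receipt
    "arrived_ap": date(2024, 3, 4),          # routed to Accounts Payable
    "approved": date(2024, 3, 11),
}

# Firm A can only start the clock at AP arrival;
# Firm B can start it at actual receipt in the mailroom.
days_firm_a = (invoice["approved"] - invoice["arrived_ap"]).days
days_firm_b = (invoice["approved"] - invoice["received_mailroom"]).days

print(days_firm_a)  # 7
print(days_firm_b)  # 10
```

Same invoice, same “metric,” yet the two definitions disagree by several days — and a benchmark that pools both firms’ numbers never sees the difference.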
So we can see that even when information is available, that doesn’t guarantee that my numbers will be determined the same way as yours. But if we both report our own numbers as part of a benchmarking project, that nuance may be lost. That problem is compounded by something mentioned a bit earlier: guessing. If I don’t have the data to actually calculate my metrics, I may look for something in the ballpark. If I process 30 invoices a month, does it take one full day per invoice? Probably not. The more likely explanation is that my job entails a lot more than processing invoices, and I’ve found other ways to keep myself occupied. In high-volume operations where one or more full-time employees (FTEs) focus entirely on invoice processing, that figure could be much more accurate. In between those two scenarios is a very large gray area. If one company processes 30 invoices per month and another processes 3,000, the second isn’t necessarily 100 times faster on a per-invoice basis. The pitfall here is that if you model time savings based on these faulty assumptions, you may end up thinking you’ll save labor costs without eliminating the actual tasks that take up your staff’s time.
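The guessing pitfall is easy to see in arithmetic. The sketch below uses an assumed fully loaded monthly labor cost (the $5,000 figure and the 10% time allocation are placeholders I’ve invented, not benchmarks): allocating a whole FTE to a low-volume process inflates the unit cost by an order of magnitude.

```python
# Assumed fully loaded monthly cost of one FTE (illustrative figure only).
MONTHLY_LABOR_COST = 5000.0

def naive_cost_per_invoice(invoices_per_month, fte_fraction=1.0):
    """Allocate a share of one FTE's labor cost across monthly invoice volume."""
    return MONTHLY_LABOR_COST * fte_fraction / invoices_per_month

# Treating a 30-invoice-a-month processor as a full-time invoice FTE:
print(round(naive_cost_per_invoice(30), 2))        # 166.67 per invoice
# If invoice work actually fills ~10% of that person's time:
print(round(naive_cost_per_invoice(30, 0.10), 2))  # 16.67 per invoice
# A dedicated FTE handling 3,000 per month:
print(round(naive_cost_per_invoice(3000), 2))      # 1.67 per invoice
```

An ROI model built on the $166.67 figure promises savings that will never show up in payroll, because the underlying tasks that fill the rest of that person’s day don’t go away.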
How are the questions asked?
This is a fun one, and it applies to survey-based metrics, whether gathered on paper, by phone, or online. It’s also where these issues come together. Some surveys will ask how much it costs to process an invoice or a sales order, and give you an open text box to reply. They might provide a bit of guidance on the context, such as whether to include just invoice review and approval or to add payment as well. Some surveys will present a drop-down menu with cost ranges. Perhaps you know your exact figure and choose the right category — or maybe you see the choices as a reasonable spectrum of performance and choose higher or lower based on your gut feel. The good thing about these types of approaches is that, while imperfect, they focus on a granular level of information. And while they may not hit the number right on the nose, by aggregating responses from tens or hundreds of companies, they can smooth out a bit of the variance. In a perfect world, we’d have access to everyone’s performance and employment data to know exactly how long each task takes and what that means in hard-dollar costs based on the salary or hourly rate of the person doing the work. But that’s not very realistic, outside of closed peer groups with NDAs and the like.
That said, not all data gathering efforts look to uncover the low-level, granular data to understand individual processes or process components. And that’s where the trouble lies. If I know that companies who do X (where X could be a following a certain process flow or using a certain type of software) report that it costs $1 to process a document, and those not doing X report that it costs $4, I have some good information to work with. If I am not yet doing it, and I’ve calculated that my costs are right around the $4 range, then it would be plausible for me to think that by investing in X, I could save $3 per document. There are a lot of assumptions underneath to test, but it’s a good start.
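The back-of-the-envelope math for that scenario looks like this. The annual volume and the investment figure are assumptions I’ve added for illustration; the $4 and $1 unit costs come from the hypothetical benchmark above.

```python
# Benchmark unit costs from the hypothetical survey above.
cost_without_x = 4.00   # reported cost per document, companies not doing X
cost_with_x = 1.00      # reported cost per document, companies doing X

# Assumed figures for my own operation (illustrative only).
annual_volume = 12000        # documents per year
upfront_investment = 20000.0 # assumed cost of adopting X

gross_savings = (cost_without_x - cost_with_x) * annual_volume
first_year_net = gross_savings - upfront_investment

print(gross_savings)   # 36000.0
print(first_year_net)  # 16000.0
```

Every line of that model is testable against ground-level data — volume, unit cost, investment — which is exactly what makes it a good start. The trouble comes when the $4 and $1 inputs were themselves guesses.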
That example would be based on data gathered by asking two questions: (1) are you currently doing X, and (2) what is your current cost to process a document? Combine the results of those two, and you’re good. Now imagine the question asked a different way. What if it was only asked of companies doing X, and was phrased as “What would you say your ROI was for X?” That’s not quite as good. We don’t have the underlying information to dig into. We don’t know the source of the ROI — did they reduce labor costs, eliminate transaction fees, replace a more costly solution? Now think back to the example of ranges used in a drop-down menu. “Would you say that you saved 10 to 30 percent? 31 to 50 percent? Does 51 to 70 percent sound right?” When we don’t dig down into the source data, and when we’re not building from the ground up to calculate savings, we can go astray quickly.
So what’s the point?
Good question. Here’s my point: the level of detail you demand from sources claiming to represent the ROI of certain investment decisions should be pretty close to what your management team (or board) would demand of you to justify an investment. While that’s not always possible, you should read the documents you find in the market carefully, asking yourself (and the author) about the items discussed in this blog:
- Where did the data come from? Is this system-reported information or best guesses?
- How were the calculations carried out? Did everyone define the metric the same way? Do those definitions accurately model reality, or do they inaccurately include/exclude elements that skew the results?
- Did the way the questions were asked affect the reliability of the ROI claim? Is the ROI case built from piecing together individual elements to see how they all work together, or did it start and end at a high level?
Aside from catching a few headlines in the short term, I don’t think anyone really benefits from misleading or poorly constructed ROI cases. If you base a business decision on information that turns out to be inaccurate, neither you nor your company will be happy. If you sell a solution or service based on that kind of ROI case, your customer relationships will suffer, and the short-term benefit of the initial sale may be tempered when it comes time for renewals. Especially as we move away from large capital investments and toward software-as-a-service approaches to financial operations problems, customer satisfaction and the paramount importance of customer lifetime value should counsel against the proliferation of unrealistic ROI cases. Or at least we can hope.
Thanks for tuning in,
[1] Yes, technically they have other options. They could go by the date printed on the invoice, but that can potentially be gamed by the sender. They could go by the date printed on the envelope if digital postage is used, or by the post office cancellation stamp. I wouldn’t rule those out entirely, but I can’t say that I’ve heard of them as popular or desirable additions to the process.