What could be worse than not having an analytics capability? Having an analytics capability, but not being sure what to do with it! This means that you have invested in developing analytics to access your business systems, but now are unsure about:
- Which analytics do I run?
- How often should I run them?
- What do the results mean?
- How do I verify the results?
- How do I deal with false positives?
- What do I do with the exceptions?
- How do I find the time to run the analytics?
These are but a few of the conundrums that you will be facing after developing a series of analytics. Perhaps not what you imagined when you started to develop analytics. You might have thought that building the analytics would solve all your problems – not create more.
As is usually the case, the answer to the above questions is "It depends". When, why, and how often you run the analytics will be closely linked to why you built them in the first place. What was your intent when you developed the analytics? If you did not have clear objectives for the analytics before, you will have to develop them now. Without knowing what you want to accomplish through analytics, how will you know whether they are accomplishing what you set out to do?
I have developed hundreds of analytics across numerous business processes (payroll, accounts payable, accounts receivable, p-cards, fixed assets, etc.). In many cases, the analytics were highly successful – at least initially. In other cases, they failed to get out of the starting gate.
Before building the analytics, you should have a clear idea of whether the tests are examining risks, controls, compliance, efficiency, or leading and lagging trends. Are the analytics tied to a single business process (e.g. accounts payable), or do they cut across business functions (e.g. cyber security risk)? Are you assessing mitigating controls to measure risk, or examining a business process for inefficiencies? Not only will the tests vary, but so will the frequency with which they are run.
Analytics that assess a business process's efficiency may be run only once or twice: first to identify the inefficiencies, and later to see whether the changes have resulted in a more efficient process. Tests of critical controls may be run monthly or even more frequently. Compliance may be a quarterly assessment, and risk analytics may be run continuously to identify emerging and changing risk levels.
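The frequencies above can be captured in a simple schedule lookup. This is a minimal sketch only; the category names and frequency labels are illustrative, not a standard, and any real scheduler would be driven by your own objectives.

```python
# Illustrative mapping of analytic category to run frequency, following the
# guidance above. Names and values are assumptions for the sketch.
ANALYTIC_SCHEDULE = {
    "process_efficiency": "once_then_follow_up",  # baseline test, then re-test after changes
    "critical_controls": "monthly",               # or even more frequently
    "compliance": "quarterly",
    "risk_indicators": "continuous",              # track emerging and changing risk levels
}

def run_frequency(analytic_type: str) -> str:
    """Look up how often an analytic of the given type should be run.

    Unknown types fall back to a reminder that the objective must be
    defined before a frequency can be chosen.
    """
    return ANALYTIC_SCHEDULE.get(analytic_type, "define_objective_first")
```

The fallback value reflects the point made earlier: without a clear objective for an analytic, there is no sensible answer to "how often".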
The results can indicate a control failure, changing levels of risk, failure to comply, or backlogs and blockages in a business process. They may require immediate action, further assessment, or tracking of the measures over time to determine the trend line. Acting on the results may mean more than fixing the issue going forward; it may also require corrective action to address historical errors. Identifying a control failure (e.g. duplicate invoices) should trigger an improvement to the underlying controls, but may also involve action to recover the duplicate payments. Uncovering a control weakness that represents an opportunity for fraud may only require modifications to existing controls. And changes in risk levels may demand a different mitigation strategy or, if the risk level is still acceptable, simply continued monitoring of the risk.
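The duplicate-invoice test mentioned above can be sketched in a few lines. This is a minimal illustration, assuming invoices arrive as records with vendor, invoice number, and amount fields (the field names are assumptions); production tests would typically also fuzzy-match near-duplicates, e.g. transposed digits or slightly different amounts.

```python
from collections import defaultdict

def find_duplicate_invoices(invoices):
    """Group invoices on (vendor, invoice number, amount) and return
    only the groups that occur more than once, i.e. the exceptions
    that need follow-up and possible recovery of duplicate payments."""
    groups = defaultdict(list)
    for inv in invoices:
        key = (inv["vendor"], inv["invoice_no"], inv["amount"])
        groups[key].append(inv)
    return {key: hits for key, hits in groups.items() if len(hits) > 1}

# Hypothetical sample data for illustration.
invoices = [
    {"vendor": "Acme", "invoice_no": "1001", "amount": 500.00},
    {"vendor": "Acme", "invoice_no": "1001", "amount": 500.00},  # potential duplicate payment
    {"vendor": "Beta", "invoice_no": "2001", "amount": 250.00},
]
duplicates = find_duplicate_invoices(invoices)
```

Each returned group is a candidate exception, not a confirmed error: legitimate recurring payments can match on all three fields, which is exactly where the false-positive handling raised at the start of this article comes in.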
Now that you have addressed the why, the when, the how often, and what to do with the results, the last question may be: "How can I maintain and run the analytics on an ongoing basis?" Typically, analytics is an additional task placed on finance or internal audit. While it can and does produce valuable results, it can detract from the primary focus of these areas. As a result, the utility of the analytics decreases over time: new analytics are not developed, and existing analytics fail to keep pace with changing business processes and risk and control environments.
Many organizations are outsourcing the development, maintenance and running of analytics. Data analysis experts can bring cross-industry experience, expertise in the analysis and interpretation of results, and the skills to develop, maintain and enhance the analytics.