I was fortunate in my early career to have a forward-thinking manager. Back in 1990, he allowed me to explore the notion of data analytics to support audit. He also took the view that the analytics developed for an audit (e.g., AP analytics) should be handed over to management for their own use (e.g., the AP manager performing continuous monitoring). So, I began my analytics career with the notion that analytics can and should support audit, and that management could use these same analytics to perform continuous monitoring. I should note that my manager also believed that the analytics could be re-run after the audit to assess the impact of management’s actions to address the issues identified by the audit, and that the same analytics could inform the risk-based audit planning process (should we go back and do an audit again?). Unfortunately, these notions have been challenged throughout my career, and I do not believe many audit organizations have employed analytics, at least not to their full potential. Additionally, the transfer of the analytics to management as a continuous monitoring system has been difficult.
Recently, I developed about one hundred analytics to examine the AP process for risk, efficiency, errors, and anomalies. The stated intent of developing these analytics was that “audit was performing analytics to support management’s continuous monitoring, including risk assessment, fraud prevention, and compliance.” When management was presented with the results, they did not like the use of “potential” errors or fraud risks in the management letter and challenged everything. For example:
- Management did not need to recover monies for all identified duplicates (Note: some duplicates had been recovered after the analytics were run, and some of the results were false positives).
- The anomalies had valid explanations.
- The identified separation of duties (SOD) examples had not resulted in any actual unauthorized activity.
They seemed to miss the objective entirely: anomalies were identified because they were anomalies and should be looked at; SOD issues were identified because they showed that individuals had the capability to perform unauthorized or fraudulent actions; and duplicates were happening (even if subsequent recovery had taken place), and we had identified the control weaknesses that were allowing them to occur. As a result, management did not accept that they could use the analytics to identify and assess risk, measure efficiency, and test controls.
I could not understand their position and could only think of three reasons for their response. One, management was used to auditors delivering a ‘gotcha’ report and did not recognize (or believe) that this was an attempt by audit to support and improve management’s risk identification and assessment and review of controls. Two, they did not trust or believe the results of the analytics. Three, they were not interested in performing continuous monitoring.
So, here are my lessons learned for any audit organization seeking to develop analytics to support management’s continuous monitoring:
- At the outset of the initiative, ensure that operational management has a clear understanding of the objective of the initiative and how the analytics will be used and reported. If possible, include someone from the functional area on the team to get their perspective throughout the process.
- Validate the results and explain how the results support continuous monitoring (i.e., an anomaly is not necessarily ‘wrong,’ but it is an anomaly. Unusual invoice amounts for a vendor could be an incorrect invoice amount, an invoice assigned to the wrong vendor, a sign of contracting irregularities, or just an unusual amount for that vendor).
- Early in the project, ensure that you are addressing items that are of concern to management (see #1 above).
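To make the duplicate and anomaly tests above concrete, here is a minimal sketch of two such AP analytics. The invoice records, field layout, and thresholds are all hypothetical (the article does not describe the actual tests); the unusual-amount check uses a median/MAD (modified z-score) comparison per vendor, which is one common way to flag amounts that are unusual *for that vendor*, not a claim about how the author's analytics worked.

```python
from collections import defaultdict
from statistics import median

# Hypothetical invoice records: (vendor, invoice_number, amount)
invoices = [
    ("Acme", "INV-001", 500.0),
    ("Acme", "INV-002", 510.0),
    ("Acme", "INV-003", 495.0),
    ("Acme", "INV-004", 5000.0),   # unusually large for this vendor
    ("Acme", "INV-001", 500.0),    # same vendor, number, and amount: potential duplicate
    ("Bolt", "INV-100", 75.0),
]

def flag_duplicates(records):
    """Flag invoices sharing the same vendor, invoice number, and amount.

    A hit is a *potential* duplicate payment: some will be false positives
    (e.g., legitimate resubmissions), so each one needs follow-up review.
    """
    seen, dupes = set(), []
    for vendor, inv_no, amount in records:
        key = (vendor, inv_no, amount)
        if key in seen:
            dupes.append(key)
        seen.add(key)
    return dupes

def flag_unusual_amounts(records, threshold=3.5):
    """Flag amounts far from each vendor's typical invoice amount.

    Uses the modified z-score (median and median absolute deviation),
    which is robust to the very outliers we are trying to find.
    """
    by_vendor = defaultdict(list)
    for vendor, inv_no, amount in records:
        by_vendor[vendor].append((inv_no, amount))
    flags = []
    for vendor, items in by_vendor.items():
        amounts = [a for _, a in items]
        if len(amounts) < 3:
            continue  # not enough history to judge what is "usual"
        med = median(amounts)
        mad = median([abs(a - med) for a in amounts])
        if mad == 0:
            continue  # all amounts identical; nothing stands out
        for inv_no, amount in items:
            if 0.6745 * abs(amount - med) / mad > threshold:
                flags.append((vendor, inv_no, amount))
    return flags

print("Potential duplicates:", flag_duplicates(invoices))
print("Unusual amounts:", flag_unusual_amounts(invoices))
```

Note that the output is deliberately framed as *potential* issues to be investigated, which is exactly the framing that needs to be agreed with management up front.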
Addressing possible management challenges may still not achieve the desired result, but it will increase the chances of success.
Dave Coderre, CAATS
This article has 2 Comments
Well put. It’s important to use a shared language. “Anomaly” is very serious when it’s, say, referring to launching a rocket powered by thousands of gallons of explosive fuel.
Whatever we name them: anomalies, abnormal items, statistical variations, errors, flagged transactions, hits, or potential risks, it’s essential to keep quantifying and qualifying the impact and suggested actions.
It can be exhausting to keep communicating over and over again, but when the message gets through, the impact makes the business a better place.
Exhausting and frustrating, but necessary. I once was told that the sender of a message is responsible for how the receiver understands and interprets the message. At the time I thought this was insane. Now I know that as a sender I need to put effort into making sure that the message is properly understood by the receiver.