Year 28 – 2015 – Fraud Risk Management Guidance

COSO had released an update to its Internal Control – Integrated Framework which included Principle #8 (“The organization considers the potential for fraud in assessing risks to the achievement of objectives.”) related to fraud risk.  David Cotton (Cotton and Company LLP) put together a team of experts to develop guidance on how the audit profession and management could address the requirements of Principle #8, and I was fortunate enough to be invited to be part of the team.  In particular, I was co-chair, along with Vincent Walden (EY), of the sub-group on data analytics, which was responsible for developing guidance on the use of analytics to assess the risk of fraud and to prevent and detect fraud.  It was an interesting and informative task that gave me the opportunity to work with many talented people.  The final guidance, “Fraud Risk Management Guide”, was published by COSO in 2016.

The executive summary can be viewed at http://www.coso.org/documents/COSO-Fraud-Risk-Management-Guide-Executive-Summary.pdf

The following represents some of my thoughts on the area and served as input to the final guidance document.

Fraud Guidance – Data Analytics input

Data analysis is a powerful tool for assessing fraud risk and for fraud prevention and detection.  But according to EY’s 2014 Global Fraud Survey, 42% of companies with revenues from $100M to $1B are working with data sets under 10K records, and 71% of companies with more than $1B in sales are working with data sets of 1M records or fewer.  These companies may be missing important fraud prevention and detection opportunities by not mining larger data sets to more robustly monitor business activities.

Data analysis addresses all aspects of the fraud triangle:

  • Deter fraud – if people know you are looking, they are less likely to commit fraud
  • Prevent fraud – verify that the key controls are in place and working properly
  • Detect instances of fraud earlier – could catch the first transaction (the ACFE’s 2014 report noted a 50% reduction in duration and a 60% reduction in losses when proactive data analytics were used)
  • Focus the investigation – you know where to look and what to look at
  • Determine losses – reactively, and proactively: identify all similar transactions – perhaps at other locations (e.g. payroll fraud)
  • Support the prosecution of people committing fraud – identify the evidence, fully cost the fraud, tell the story

The use of analytics supplements the identification and assessment of fraud risk; allows for the monitoring and assessment of controls in the areas of highest fraud risk; and supports the detection and investigation of possible fraud.

Fraud Risk Assessment

The ACFE’s 2016 Report to the Nations stated that proactive fraud analytics can reduce both the duration of and the loss due to fraud by more than 50%.  In areas of highest fraud risk, analytics can be used to search for control weaknesses and anomalies that could be indicators of fraud.  Statement on Auditing Standards (SAS) No. 99 defines various risk factors for assessing the risk of fraudulent financial reporting and other fraudulent acts.  It also encourages you to devise appropriate data analysis strategies for each risk factor.

For example, if you are in a competitive industry, rapidly changing technology can lead to inventory becoming obsolete.  This creates a risk that the inventory may not be appropriately revalued, which would lead to an overstatement on the financial report.  The data analysis to identify and assess this risk factor could include checking the date and results of the last inventory valuation and assessing inventory turnover figures.  If your company has attractive, easily transportable items in inventory, then you are at risk of theft.  Analytical tests could include verifying the effectiveness of the inventory controls by looking at trends in reorder quantity versus use in production or sales, and identifying write-offs and the use of management overrides to adjust inventory levels.
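The two analytical tests just described can be sketched against a toy inventory extract; the field names, dates and thresholds below are illustrative assumptions, not part of any standard:

```python
# Sketch of two inventory risk tests: (1) how stale is the last
# valuation, and (2) is the stock slow-moving (low turnover)?
# All field names, dates and thresholds are invented for illustration.
from datetime import date

inventory = [
    {"item": "X1", "last_valuation": date(2013, 2, 1), "annual_usage": 40, "on_hand": 200},
    {"item": "X2", "last_valuation": date(2014, 11, 15), "annual_usage": 500, "on_hand": 100},
]

as_of = date(2015, 1, 1)
flagged = []
for rec in inventory:
    stale_days = (as_of - rec["last_valuation"]).days   # time since last valuation
    turnover = rec["annual_usage"] / rec["on_hand"]     # turns per year
    if stale_days > 365 or turnover < 1.0:              # stale valuation or slow-moving stock
        flagged.append(rec["item"])

print(flagged)  # ['X1']
```

Items flagged this way are not frauds – they are candidates for a closer look at valuation and write-off activity.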

Fraud Monitoring

In areas of highest fraud risk you should develop a fraud monitoring plan.  The monitoring plan identifies the Why, What, Where and What’s Next of the analysis that will be performed.  For example, if there was a fraud risk that attractive items in inventory could be declared not repairable, written off as scrap and taken home by an employee, we would expect a separation of duties such that the same person would not be able both to declare an item not repairable and to write off the item.  The data analysis would identify all employees who declared items as not repairable and all those who wrote items off.  We would not expect to find the same person on both lists – if we did, we would follow up to see if their actions were applied to the same item.
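A minimal sketch of that separation-of-duties test; the employee and item identifiers and the action codes are invented:

```python
# Hypothetical separation-of-duties check: flag anyone who both
# declared an item "not repairable" and wrote the same item off.
# employee_id, item_id and action values are illustrative.
declarations = [
    {"employee_id": "E01", "item_id": "A100", "action": "not_repairable"},
    {"employee_id": "E02", "item_id": "A200", "action": "not_repairable"},
    {"employee_id": "E01", "item_id": "A100", "action": "write_off"},
    {"employee_id": "E03", "item_id": "A200", "action": "write_off"},
]

declared = {(r["employee_id"], r["item_id"]) for r in declarations
            if r["action"] == "not_repairable"}
wrote_off = {(r["employee_id"], r["item_id"]) for r in declarations
             if r["action"] == "write_off"}

# Same person, same item on both lists -> follow up.
violations = sorted(declared & wrote_off)
print(violations)  # [('E01', 'A100')]
```

A hit here only shows that the control could be bypassed; the follow-up is to pull the supporting documents for those items.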

Fraud Investigation

When fraud is suspected you need to enhance the fraud monitoring plan and develop a more detailed fraud investigation plan.  The following elements should be documented:

  • Define the objectives of the investigation. Detail why you are performing the analysis.
  • Define the indicators of fraud. Describe what the symptoms of fraud would look like in the data.
  • Identify the required data sources. Working with IT and the business process owner, determine the appropriate source of the required data.
  • Obtain and safeguard the required data. Determine which fields are required – single year or several; one business unit or more – the best methods for obtaining the data; file formats; transfer mechanisms; and how you will safeguard the data.
  • Test the integrity and completeness of the data. Determine the extent to which you can rely on the data and how you will assess its integrity and completeness.
  • Analysis techniques. Describe the tests to be performed, the expected results and the follow-up analyses.

In cases of suspected fraud, the auditor must verify to source documents or compare with other sources.  When performing the analysis, it is important to drill down into the data – challenging the assumptions and the results.
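The integrity and completeness step might look like the following sketch – record counts and control totals compared against figures supplied by the system owner, plus a duplicate-key check; all names and values are invented:

```python
# Illustrative integrity/completeness checks on a received data file:
# compare the record count and control total against figures supplied
# by the system owner, and look for duplicate keys. Values are made up.
records = [
    {"invoice": "1001", "amount": 250.00},
    {"invoice": "1002", "amount": 100.00},
    {"invoice": "1002", "amount": 100.00},  # duplicate key
]
expected_count, expected_total = 3, 450.00  # from the system owner

assert len(records) == expected_count, "record count mismatch"
assert abs(sum(r["amount"] for r in records) - expected_total) < 0.01, "control total mismatch"

seen, duplicates = set(), []
for r in records:
    if r["invoice"] in seen:
        duplicates.append(r["invoice"])
    seen.add(r["invoice"])
print(duplicates)  # ['1002']
```

Note that the file can reconcile perfectly to the control totals and still contain anomalies – here the duplicate invoice – which is why both kinds of check belong in the plan.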

In addition to providing input to each of the chapters – from risk assessment to investigation – Vince and I provided a series of analytical tools and techniques that were presented in an index and are available online.

Year 24 – 2011 – Fraud Detection – part 1

By 2011, I was becoming more and more involved in data analysis to detect fraud.  I had been doing this for years but had never really thought about the approaches I was taking to assess fraud risk and determine the analytics to perform.  The following is the result of my deliberations (which continue to this day).

Fraud Detection

The unrelenting advancement of technology is affecting virtually every aspect of our lives.  And as technology becomes more pervasive, so do schemes to commit fraud.  Fraudsters are taking advantage of users’ inexperience with newer technology and weaknesses in the controls to perpetrate these schemes.  This is proving to be a challenge for evaluators, auditors and investigators in their efforts to identify and detect fraud.  However, technology is also a tool that can help prevent and detect fraud.  Data analysis techniques can search for the symptoms of fraud that are buried in the millions of transactions flowing through the business process.

Whether you are investigating to see if a fraud occurred or following up on an allegation of fraud, a good first step is to understand the ‘why’ of fraud.  The “Fraud Triangle”, created by famed criminologist Donald Cressey, outlines three basic things that must be present in order for fraud to occur: opportunity, pressure or motivation, and rationalization.

Opportunity.  An opportunity is likely to occur when there are weaknesses in the internal control framework or when a person abuses a position of trust.  For example:

  • organizational expediency – e.g. it was a high-profile rush project and we had to cut corners;
  • downsizing means that separation of duties no longer exists;
  • business re-engineering removed checks and balances in the control framework.

Pressure.  The pressures are usually financial in nature, but this is not always true.  For example, unrealistic corporate targets can encourage a salesperson or production manager to commit fraud.  The desire for revenge – to get back at the organization for some perceived wrong – or poor self-esteem – the need to be seen as the top salesman at any cost – are also examples of non-financial pressures that can lead to fraud.  In addition, living a lavish lifestyle, a drug addiction, and many other factors can influence someone to commit fraud.

Rationalization.  In the criminal’s mind rationalization usually includes the belief that the activity is not criminal.  They often feel that everyone else is doing it; or that no one will get hurt; or it’s just a temporary loan, I’ll pay it back, and so on.

Interviews with persons who committed fraud have shown that most people do not originally set out to commit fraud.  Often they simply took advantage of an opportunity; many times the first fraudulent act was an accident – perhaps they mistakenly processed the same invoice twice.  But when they realized that it wasn’t noticed, the fraudulent acts became deliberate and more frequent.

Interestingly, studies have shown that the removal of the pressure is not sufficient to stop an ongoing fraud.  Also, the first act of fraud requires more rationalization than the second act, and so on.  As it becomes easier to justify, the acts occur more frequently and the amounts increase in value.  This means that, left alone, fraud will continue and the losses will increase.

While I have been unable to find conclusive evidence to support the 10-80-10 rule, it is well known in the ACFE world.  Basically, it states that 10% of people would never commit fraud; 80% might; and 10% are actively searching for opportunities to commit fraud.  I think as auditors and fraud investigators we must be concerned not only with the 10% who are actively attempting to commit fraud, but also with the 80% who might.  By ensuring that the fraud triangle is not adversely affecting these people we can prevent fraud and save people’s careers and lives.

Pressure – audit can examine corporate performance targets and inform management of times when targets are likely to contribute to cutting corners, bypassing controls and possibly committing fraud.

Rationalization – an audit of the corporate values and ethics program can help to make sure that the tone-at-the-top is aligned to organizational goals and objectives.

Opportunity – by performing fraud risk assessments and addressing control weaknesses in the areas most prone to fraud, audit can protect the 80% from making a mistake.

Next week I will describe two approaches that can assist you in determining where you have fraud risks and the data you require to perform analytics to determine if fraud is happening.

Year 19 – 2006 – Health Claims

Note: I hope this is like the ACL forum, where there are more people reading than posting questions/answers.  While I am enjoying my trip down memory lane – it is a lot of work, and it would be a shame if I was the only one reading the posts.  My aim was to encourage discussion and sharing – this is not happening, which lessens the value of the blog.  So post a comment, describe your experience, etc.

My early introduction to audit included the concept that audit was an early warning for management (this was before “independent assurance”).  It had the notion of identifying things that were going wrong and making useful recommendations (this was also before the idea of “risk”).  However, my belief was always that audit was there to help; and that the help could and should be offered to all levels of management.  Luckily, I did not see these as incompatible ideals; and to a certain extent, neither did my managers.

I remember often having discussions over who was audit’s “client”.  We reported to the Board – and they were the main recipients of our reports.  So they were a client.  Senior management also received the reports and responded to the recommendations – so they were a client.  But local management was the group being assessed and had to implement the recommendations – so this made them a client.  The issue was, the three groups had very different motivations and needs.  A high-level report was of little value to the local manager, who needed to fully understand the “cause” associated with the finding in order to adequately address the issue; whereas senior management and the Board were more concerned with the impact.  Hence the ongoing debate of “who is our client”.

For a number of years, we actually produced three levels of reports: the local manager’s detailed report with criteria, condition, cause, impact and recommendations; the management report, which focused on the “what does it all mean” (impact and recommendations); and the Board report, which presented an overall assessment.  In the end we were spending as much time writing the report(s) as performing the actual audit.

Your thoughts/experience: who is your client, and how do you address the needs of your audience?

Auditors are often asked to examine fairly sensitive areas.  This can also mean that you have access to personal information.  Depending on your definition, this could be executive compensation, but in this case (for me) it was health claims.


Year 18 – 2005 – Quantitative Indicators of Risk – part 2

This is Part 2 of an article on developing quantitative indicators of risk to support the annual risk-based audit planning process.

Part 1 presented the concept that risk (probability and impact) can be measured quantitatively by looking at complexity and change (which increase the probability) and materiality or volume (which increases the impact).  It also encouraged you to look at more than financial risk.  Part 2 presents examples of indicators of risk and an approach that you can use to develop your own quantitative indicators.

The following are examples of data-driven risk indicators for various risk categories:

  • Financial – an entity that has multiple responsibility centers, a large degree of discretionary spending, and a high number of journal entries and suspense account transactions has a higher level of financial risk than one that has a single responsibility center and primarily non-discretionary spending (e.g. regular salary).
  • Operational – a production plant that has multiple production lines that produce both standard and customized products, requiring changes in the product line, has a higher operational risk than one with a single production line producing a standard product.
  • Legal – an entity that is highly regulated, subject to national and international regulations, and facing a higher level of ongoing litigation has a higher legal risk than one that is not regulated.
  • Technological – an entity dependent on rapidly changing technology has a higher technological risk than one that has a stable technological environment.
  • Environmental – an entity that is highly regulated in an area subject to changing environmental regulations, has a lower level of organizational maturity and staff experience, and faces high costs of non-compliance has a higher environmental risk than one that is not regulated or has minimal non-compliance costs.
  • HR – an entity that spans multiple locations and has full-time, part-time and casual employees – many with very little experience – has a higher level of HR risk than one that operates from a single location and only has full-time employees with many years of experience.

The data-driven indicators are relative – comparing the risk level of an audit entity to other entities (e.g. one activity or region to another).  The result is a data-driven relative risk ranking of each entity on each risk indicator and risk category.   The overall risk for each entity/activity can be assessed by combining the rating for all risk categories.  Thus, audit can identify entities with the highest financial or operational, etc. risk and the entities with the highest overall risk; or assess the effectiveness of risk mitigation efforts on corporate risks.

Data-driven indicators make the risk identification and assessment process easier to update, more responsive to changing levels of risk; and they support an analysis of the source of the risk.  Transactional quantitative indicators of risk can be viewed at any level or slice of the organization.  Auditors can drill down into a corporate risk or risk category to assess and compare every region, plant, division, project, etc.  The risk categories can also determine, for example, what is causing a higher level of legal or strategic risk.  In addition, during the development of the annual risk-based plan or the corporate risk profile, the analysis supports the conduct of more productive interviews with management.  It provides insights that allow auditors to ask questions that focus on the areas of highest risk to the specific audit entity (e.g. “Why do you have twice the number of journal entries and reversals as other financial managers?” or “What are your plans to address both the high existing HR vacancy rate and the large number of employees who are eligible for retirement within two years?”).   This can direct management’s attention to risks that might not have been known previously – making the risk discussion more valuable to both parties.

During the planning phase of an audit, drilling down into the data-driven indicators can focus the audit on specific risk issues (e.g. operational inefficiencies, emerging regulatory changes) or identify best practices.  For example, it is easy to examine the risk indicators for an audit entity to determine the factors causing, say, HR risk to be high.  This can help shape the audit scope and objectives, making the audit more effective and efficient.

Data-driven risk indicators can also be used on an ongoing basis to assess the risk associated with specific corporate initiatives (e.g. a proposed merger or acquisition) on all categories of risk not just financial.   For example, a quick assessment of the HR risk factors could identify emerging HR issues (high turnover and eligibility for retirement rates) in a company where a merger is being proposed.  Data-driven risk indicators can also highlight financial risks related to the proposed merger company’s current financial management control framework including highlighting a different financial management framework which may negatively impact the merger.  In addition, a potential merger’s risk indicators can be compared to previous mergers (successful and unsuccessful) to determine the relative risk and areas of highest concern.  This would better inform management decisions and risk management activities.

To support the risk-based plan, the identification of potential data-driven risk indicators should be considered for each corporate risk and for all risk categories.  Auditors should work with the Chief Risk Officer and subject matter experts to examine the risks; identify drivers that affect the risk; and develop data-driven indicators for each risk driver.   Table 1 is illustrative of the process to identify data-driven risk indicators for HR.  The same process can be used for each risk category (finance, legal and regulatory, etc.).  The first step is to define the sub-categories of risk (e.g. recruitment); then the associated risk drivers (e.g. lack of resources); and finally the data-driven risk indicator (e.g. increasing number of vacant positions).

Table 1 – Development of HR Risk Category Indicators

Risk: Recruiting – failure to attract people with the right competencies.
  • Risk drivers: lack of resources; lack of skilled employees
  • Data-driven risk indicators: vacancies; acting appointments

Risk: Resource Allocation – failure to allocate resources in an effective manner to support the achievement of goals and objectives.
  • Risk drivers: inappropriate resources for tasks
  • Data-driven risk indicators: employee type (full-time, part-time, seasonal, contractor, etc.); employee classification; employee status; unions

Risk: Retention – failure to retain people with the right competencies and match them to the right jobs.
  • Risk drivers: demographics; low experience levels; high turnover
  • Data-driven risk indicators: years of pensionable service; average age; average years in position

Risk: Work Environment – failure to treat people with value and respect.
  • Risk drivers: unhappy workforce; high sick leave
  • Data-driven risk indicators: average sick leave/vacations; percentage departures

Once identified, the data-driven risk indicators should be categorized as indicators of volume, variability/change or complexity.  The same process would be performed on the other risk categories.

Since each risk category (finance, HR, legal, etc.) will have several risk indicators related to each of volume, variability/change and complexity, determining the overall risk for each audit entity will be difficult to do manually.  For example, you could have 7-8 risk categories (finance, HR, operations, legal, technological, etc.); with 5-10 risk indicators for each of volume, variability/change and complexity; and 20-50 audit entities for the annual risk-based audit plan totalling 700 – 4,000 risk measures.  However, the details allow you to look at risk from an overall, a risk category or even a risk factor perspective.  For example, you could easily determine that Entity A has the highest overall risk score, which is due to high risk scores in Finance, Operations and HR.  The HR risk is being driven by high variability (employee turnover and percentage eligible for retirement) and the finance risk is due to the complexity of the financial framework.  This will inform the planning phase of the audit of Entity A.  A similar analysis can determine which audit entities are having the largest impact on corporate risks.

While the details provide information to support the planning and conduct of an audit, the risk-based plan needs a higher level view of risk.  The solution is to develop a single composite data-driven risk score for each entity which includes all risk categories.  This is a multi-step process, the first of which is to develop a single risk factor score for each of volume, variability/change and complexity for each risk category; second, consolidate the risk factor scores into a single risk category score for each risk category (finance, HR, operations, etc.); and third, consolidate the risk category scores into an overall risk rating for each entity.
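The three-step roll-up can be illustrated with a toy example; the entity names, risk categories and raw indicator scores (assumed already normalized to a 0–1 scale) are invented:

```python
# Illustrative roll-up of data-driven risk indicators into a single
# entity score, following the three steps described above.
# Entity names, categories and raw indicator values are made up.
raw = {
    "Entity A": {"finance": {"volume": [0.9, 0.7], "change": [0.8], "complexity": [0.9]},
                 "hr":      {"volume": [0.4],      "change": [0.9, 0.8], "complexity": [0.3]}},
    "Entity B": {"finance": {"volume": [0.2, 0.3], "change": [0.4], "complexity": [0.5]},
                 "hr":      {"volume": [0.6],      "change": [0.2, 0.3], "complexity": [0.4]}},
}

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Step 1: one score per risk factor (volume, change, complexity);
# Step 2: one score per risk category; Step 3: one overall entity score.
overall = {}
for entity, categories in raw.items():
    category_scores = {cat: mean(mean(v) for v in factors.values())
                       for cat, factors in categories.items()}
    overall[entity] = mean(category_scores.values())

ranked = sorted(overall, key=overall.get, reverse=True)
print(ranked)  # ['Entity A', 'Entity B']
```

A real implementation would weight the factors and categories rather than averaging them equally; equal weights are just the simplest starting point.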

The data-driven risk ratings can be used to rank entities based on their overall risk.  In addition, qualitative and auditor-judgment factors can now be included to arrive at a final risk rating.  The final results can be sorted by risk ranking and audits assigned based on the availability of resources.

The identification and assessment of data-driven key risk indicators can be accomplished easily and with minimal investment.  A data-focused approach will allow internal audit to identify issues, target risks and allocate resources more effectively.  It will support professional auditor judgment and make the annual risk-based audit plan more defensible, easier to update, and backed by quantitative and qualitative factors.  The data-driven risk indicators are useful during the interview process, aid the planning phase of individual audits, and can be used to keep the annual risk-based audit plan current.  The risk indicators can also be used to update corporate risk profiles, and to assess the effectiveness of risk mitigation strategies and the risk associated with new strategic initiatives – providing valuable advice to senior management on all categories of risk.  Audit functions that leverage a quantitative, data-driven approach to identifying and assessing risk are more relevant to the business and can provide more efficient and improved risk coverage to senior management and the Board.

Examples of HR data-driven risk indicators

Volume/Size

  • Number of employees
  • Total dollars of payroll

Variability/Change

  • Average age
  • Average age of senior managers
  • Average years of pensionable service
  • % of employees who can retire in less than 2 years
  • Experience – years in dept / position / classification
  • % full-time employees
  • % positions affected by org change in last year
  • % employees in acting assignments
  • % new hires (within last year)
  • Total leave taken
  • Average sick leave taken
  • Average vacation leave taken
  • Average unpaid leave taken

Complexity

  • # types of employee
  • # classifications of employee
  • # geographic locations
  • # unions
  • % employees with non-standard hours

Other

  • % by gender (M/F)
  • % by first official language (Eng/Fr/Sp/etc.)

Examples of financial data-driven risk indicators

Volume

  • Total expenses
  • Total revenue
  • Total assets

Variability/Change

  • Percentage of discretionary spending
  • Percentage of expenditures in Period 12, 13+
  • Total and number of JVs
  • Total and number of suspense account transactions
  • Total and number of reversal documents
  • Total and number of losses
  • Percentage of A/P transactions paid late (> 30 days)
  • Percentage of A/R transactions more than 30 days overdue

Complexity

  • Number of cost centres
  • Number of general ledger accounts
  • Number of foreign currencies
  • Number of document types
  • Use of internal orders
  • Use of purchase orders
  • Use of fund reservations
  • Use of materiel and asset numbers
  • Use of real estate blocks
  • Use of work breakdown structure
  • Number of employees
  • Number of P-Cards

ACL commands: TOTAL, STATISTICS, CLASSIFY, EXPRESSIONS and RELATE.  While the process used scripts to perform all the analysis, the commands were basic – such as running TOTAL on Age to get Total_Age and then calculating the average age (Total_Age / Number_Emps).
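The same totalling-and-averaging, grouped by audit entity, is a few lines in a modern scripting language; the entity names and ages here are invented:

```python
# Computing an entity-level risk measure like the ACL script described:
# total the ages per audit entity, count the employees, then divide.
# Entity names and ages are invented for illustration.
employees = [
    {"entity": "Plant A", "age": 34},
    {"entity": "Plant A", "age": 58},
    {"entity": "Plant B", "age": 45},
]

totals = {}
for e in employees:
    t = totals.setdefault(e["entity"], {"age": 0, "count": 0})
    t["age"] += e["age"]
    t["count"] += 1

avg_age = {entity: t["age"] / t["count"] for entity, t in totals.items()}
print(avg_age)  # {'Plant A': 46.0, 'Plant B': 45.0}
```

Each of the other indicators (turnover, vacancies, leave taken, etc.) reduces to the same pattern: group by entity, aggregate, compare.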

Lessons learned: the analysis was extremely useful – particularly when discussing risks with managers.  We had the risk measures for each audit entity and could ask pointed questions of managers of projects or activities, or ask senior managers about emerging areas of risk based on a comparison with previous years’ data.

“Build it and they will come” is sometimes true, but I found that I often had to educate the auditors on how to review the results and drill down into the details to better understand the source of the risk.  To me it seemed obvious – because I view a business process or activity from the data perspective – but this was not the case for all the auditors.  They would have a financial, HR or environmental lens and couldn’t see how the data helped.  Fortunately, with assistance, some were able to understand what the data was telling them about the entity/activity/process.

Using data-driven indicators of risk we were able to update the RBAP on a quarterly basis in hours.  This allowed us to ensure that we were dealing with the highest areas of risk and to identify emerging areas of risk early.

Year 18 – 2005 – Quantitative Indicators of Risk – part 1

This was my first attempt at identifying risk to support the development of the annual risk-based audit plan (RBAP).  I have been involved in the development of the RBAP – even responsible for it – over the years and always felt that it was more professional opinion than anything else.  Some people built a spreadsheet with weighting factors of 1-5 and fooled themselves into believing that there was a logic and quantitative underpinning to the RBAP, but in the end the auditors were providing the weighted scores based on professional opinion.

My approach was to use data analytics to support the qualitative aspects of the plan (auditor judgement, interviews with managers, previous audit results, etc.).  This was for two reasons: first, quantitative indicators are easier to update; and second, they provide assurance that we were also considering emerging risks.

Below is part 1 of an article I submitted to the IIA magazine.  It was not published because they did not consider it to be “relevant to internal auditors” (????????), despite the fact that the IIA standards call for a continuous risk assessment.  I think that the reviewers didn’t understand the ease and utility of developing the data driven risk indicators.  I hope you find the article useful.

Developing data-driven indicators of risk to support the ongoing assessment of risk – Internal auditors face a daunting task of identifying and assessing risk.  The results of this activity are critical, as they serve to ensure that scarce audit resources are being expended on activities that best address the risks identified by senior management.  The initial assessment of risk typically includes reviews of the corporate risk profile, business plans, financial statements and previous audit reports, and interviews with senior managers with questions such as “What keeps you awake at night?”.  The process can take weeks, even months, to complete.  Contrast this with IIA Standard 2010, which states that the chief audit executive must review and adjust the plan as necessary in response to changes in risk, operations, programs, systems and controls, and you can see where audit has a problem.


Year 12 – 1999 – Part 1 – Data analytics to assess risk

Wow – I never realized how much work this would be.  I mean, I am only posting once a week – but it still takes a lot of time.  I’m not getting many comments, but I hope people are enjoying and learning from the posts.  I had hoped more people would share their experiences so we could learn from each other.

I was now interested in expanding my use of data analytics beyond the testing of controls.  There were numerous times when I had identified control weaknesses that were fraud risks, and a number of times where we actually found a fraud occurring.  This led me to the development of my third book, “Fraud Detection: Using Data Analysis Techniques to Detect Fraud”, in 1999.  The text included theory and numerous case studies which illustrated how ACL could be used to identify symptoms of fraud in the data.  Examples such as running STATISTICS on Receipt_Qty to find a receiving clerk fraud were included.
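The kind of test the book describes can be sketched as follows – simple statistics on receipt quantities surfacing anomalies such as negative receipts; the data and field names are invented for illustration:

```python
# Simple statistics on receipt quantities (the min/max/mean that
# ACL's STATISTICS command reports) can surface fraud symptoms such
# as negative receipts. Clerk IDs and quantities are invented.
receipts = [
    {"clerk": "C1", "qty": 100},
    {"clerk": "C1", "qty": 120},
    {"clerk": "C2", "qty": -40},   # negative receipt: a classic symptom
    {"clerk": "C2", "qty": 110},
]

qtys = [r["qty"] for r in receipts]
stats = {"min": min(qtys), "max": max(qtys), "mean": sum(qtys) / len(qtys)}
suspect = [r for r in receipts if r["qty"] < 0]
print(stats["min"], [r["clerk"] for r in suspect])
```

The statistics themselves only point to the anomaly; the investigation then looks at who entered the negative receipts and why.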

Once again, ACL agreed to publish the text and it received favourable reviews from both the audit and investigative communities.  It is still in print and people tell me that it has helped them with their fraud analytics.  One expert from E&Y told me that he uses it with clients to talk about fraud risks, and they usually go from “no fraud here” to “we really need to set up a proper fraud risk assessment and monitoring program”.

As I mentioned previously, our company had just implemented several ERP systems.  In particular, we were using SAP for our financial system.  About two years earlier I had performed a test of the A/P process and had found a number of issues.  Management’s initial concerns centered on possible duplicate payments, and on paying invoices early without taking the discount or paying them late and incurring late penalty charges.  Keep in mind that interest rates in the late 1990s were much higher than today – I can’t remember for sure, but probably closer to 10%.  Also, I could have posted this in 1996 and 1997, but the lessons learned applied to 1998, so I am posting it now.


Year 10 – 1997 – The importance of data

Even now, I firmly believe that the potential for a Y2K disaster was real.  The only reason its effects were minimized was the hundreds of thousands of hours spent checking and rechecking programming code to address the “00” year problem before it occurred.

For those of you too young to remember, prior to the year 2000, many databases and applications used only two digits for the year, so “10” meant “1910”.  This was originally done because of the high cost of storing data: storage space was expensive, and read/write operations slowed down processing speeds.  As a result, dates were often stored with only a two-digit year (e.g. 032155, or 08055 in DDDYY format). Why store “1955” when “55” was sufficient, saved two bytes of space, and reduced the read/write time?  However, with the coming of 2000, the extra two digits would be important.  A year stored as “01” could be “1901” or “2001”.   While this was critical particularly in the financial world, where interest and other calculations require date fields, financial transactions were not the only concern.  Many programmers, myself included, had learned to build error traps and exit routines that used code such as If Date = “00” then exit.  Many of these programs were still in existence, and the year would soon be “00”.  This could cause critical programs to exit or execute error routines.  Concerns ranged from VCRs not working to planes dropping out of the sky and nuclear plants exploding.

Continue reading Year 10 – 1997 – The importance of data

Year 9 – 1996 – Promoting CAATTs

I had been writing articles for the Internal Auditor (IIA) and other audit-related magazines for several years, but I wanted to do more to educate and encourage auditors in the use of analytics.  One day I realized that if I assembled all of my previously published IIA articles, I had about 50% of the content necessary for a book on analytics.  So I started developing an outline and writing more content.  It took about six months to combine what I had and write the other 50%, and the result was “CAATTs and Other BEASTs for Auditors”.   The book was published by Global Audit Publications, the publishing arm of ACL Services.  It was their first publication – other than software manuals.  Now I was a published author.

CAATTs and Other BEASTs explained how various types of software – from word processing to data analysis – could be used to support the planning, conduct and reporting phases of the audit.  It was well received by auditors who were looking for guidance in the use of analytics; and I was encouraged to write more articles and even another book (but not right away).  Even though it had a limited audience, the final sales total, after several years, was over 5,000 copies.

The next audit I supported was an environmental audit of hazardous materials.  The objective was twofold: to ensure that hazardous materials were properly stored, and that they were disposed of in accordance with environmental laws and regulations.  At the beginning of the planning phase, I asked the auditors where the onsite audits would be conducted.  They told me that they were going to three large sites (one on the east coast, one in the central region and one on the west coast) and three smaller depots close to the large warehouses.  They explained that this would ensure all regions were covered and that both small and large sites were audited.  Sounded good, but based on my analysis, one of the large sites and two of the smaller ones did not have any hazardous materials.  That wouldn’t make for a very good audit.
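The underlying technique is simple: before picking sites, query the inventory data and rank locations by what is actually on hand.  A minimal sketch with invented site names and quantities:

```python
# Hypothetical site inventory: site name -> units of hazardous material on hand
hazmat_on_hand = {
    "East Warehouse": 1200,
    "Central Warehouse": 0,   # no hazardous materials: a wasted site visit
    "West Warehouse": 950,
    "Depot A": 300,
    "Depot B": 0,
    "Depot C": 0,
}

# Rank candidate sites by quantity so visits go where the risk actually is
candidates = sorted(
    (site for site, qty in hazmat_on_hand.items() if qty > 0),
    key=lambda s: hazmat_on_hand[s],
    reverse=True,
)
print(candidates)
```

A few minutes of analysis up front can redirect weeks of onsite effort.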

Continue reading Year 9 – 1996 – Promoting CAATTs

Year 8 – 1995 – HR analysis

Our analytics team was running on all cylinders and achieving significant results.  This was not just my opinion: we received an ISACA Award of Excellence at the Info Tech Audit ’95 conference for leadership and contribution to the IT audit community.  Amazingly, it came with a $1,000 cash award.  The team (3 people) went out for a celebratory dinner and donated the remaining funds to a local charity.

By now we had a steady stream of auditors seeking data extractions from 30+ information systems.  We had standard monthly extracts in place for the major systems (8-10) that we accessed on a regular basis, and we were able to handle one-offs fairly well.  We still heard the usual arguments from IT (you do not have the authority to access the data, it contains personal info, you don’t have the security clearance, etc.) when we sought access to a new system, but we were getting better at countering those arguments with solid facts and obtaining the necessary access.  The more difficult issue was changes to existing applications.  We were not informed when things like record layouts, file names, and transaction types changed.  This meant we constantly had to verify the integrity of the standard extracts and scripts we had developed.
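One simple defence against silent layout changes is to check each extract’s header against the layout the scripts expect before anything runs.  A sketch of that check, with hypothetical field names:

```python
# Expected layout for a standard monthly extract (hypothetical field names)
EXPECTED_FIELDS = ["emp_id", "pay_rate", "cost_centre", "trans_type"]

def missing_fields(header_row):
    """Return the expected fields absent from this month's extract,
    so a silent record-layout change is caught before scripts run."""
    return [f for f in EXPECTED_FIELDS if f not in header_row]

# Suppose IT renamed 'pay_rate' without telling audit
this_month = ["emp_id", "hourly_rate", "cost_centre", "trans_type"]
missing = missing_fields(this_month)
print(missing)  # ['pay_rate']
```

Record counts and control totals reconciled against the source system round out the same integrity check.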

To date we had used the personnel data to verify pay rates as part of a payroll audit; to determine personnel costs for a cost recovery audit; and for a number of other audits that required HR information.

The first HR audit to use data analysis was an audit of an employee reduction program.  The company was downsizing and eligible employees were being offered a buyout package.  The package was made available to full time employees and the buyout was based on years of service (including casual or part time employment) and current salary rate.  The initial audit objective was to determine if the buyouts were for the correct amount.
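Testing whether buyouts were for the correct amount is a recomputation test: independently calculate each entitlement from the source data and compare it to what was paid.  A minimal sketch, assuming a hypothetical formula of two weeks of salary per year of service (the actual program terms are not stated here):

```python
# Hypothetical buyout formula: two weeks of salary per year of service
WEEKS_PER_YEAR_OF_SERVICE = 2

def expected_buyout(years_of_service, weekly_salary):
    return round(years_of_service * WEEKS_PER_YEAR_OF_SERVICE * weekly_salary, 2)

# Recompute each buyout and flag differences from what was actually paid
employees = [
    {"id": "E1", "years": 10, "weekly": 1000.0, "paid": 20000.0},
    {"id": "E2", "years": 12, "weekly": 1100.0, "paid": 29000.0},  # overpaid
]
exceptions = [
    e["id"] for e in employees
    if expected_buyout(e["years"], e["weekly"]) != e["paid"]
]
print(exceptions)  # ['E2']
```

Because every employee can be recomputed, the test covers 100% of the population rather than a sample.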

Continue reading Year 8 – 1995 – HR analysis

Year 7 – 1994 – Transfer of Audit Analysis to Mgt

Having been a member of the IIA since 1990, I always looked forward to the Internal Auditor magazine.  However, it rarely included articles on computer-assisted audit tools and techniques (CAATTs).  I wrote the first of several articles on data analytics, “Computer-Assisted Audit Tools and Techniques: The Power of CAATT is turning up the ‘can-do’ potential of some audit shops”.  It was published in February 1993 and, to my knowledge, was the first time the acronym “CAATTs” had been used anywhere.  Previously there was only one “T”, as in “CAATs”, but it was never clear to me whether the “T” stood for tools or techniques – having two “Ts” solved that issue.  I also proposed the establishment of a regular column called “Computers and Auditing”, and James Kaplan and I were the first co-editors.  The column, replacing “PC Exchange”, debuted in February 1994 with my first column, “Auditmation”, which described four different audits where analysis had been used.  James and I were co-editors for several years, and I wrote many columns on data analytics and audit.  These became the basis for a book I wrote several years later.

At my regular job, I worked on a repair and overhaul audit.  Our company had contracted out the maintenance of specialized equipment to several vendors.  As per the contract terms, the vendors were required to maintain an inventory of critical parts for which we paid the storage costs and the purchase price when used.  The vendors were also loaned specialized test equipment to be used for repairs and testing of our equipment.  The audit objectives included the verification that the vendors were complying with the terms and conditions of the contracts.

Continue reading Year 7 – 1994 – Transfer of Audit Analysis to Mgt