Statistics

Sometimes even the simplest analytics are extremely useful.  They often tell you what is happening in the detailed transactions without you having to make assumptions about the data.  STATISTICS is one of those commands: with a couple of clicks of the mouse (or the single command sketched after the list below) you can get:

  • Min and Max
  • Highest and Lowest ‘X’ values
  • Number of Positive, Negative and Zero valued transactions
  • Average
  • Median, Mode, Q25 and Q75
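
As a rough illustration, all of the measures above come from a single command.  This is only a minimal sketch – Invoices, Amount and Invoice_Date are placeholder names, NUMBER sets how many highest/lowest values are reported, and MODMEDQ requests the median, mode and quartiles:

OPEN Invoices
COMMENT Invoices, Amount and Invoice_Date are hypothetical names – substitute your own table and fields
STATISTICS ON Amount Invoice_Date NUMBER 5 MODMEDQ TO SCREEN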

But like any analysis, you need to know why you are running the analysis and you need to be able to interpret and understand the results.

Why run STATISTICS?  I run Statistics to get an overview of my data.  It helps me to understand the nature of the transactions, to verify my scope, to highlight anomalies, and much, much more.  Here are some examples:

  • Count
    • I use this to verify the control totals. In one case I found that my total for a critical field was out by a significant amount.  It turned out that the numeric fields had three implied decimal places and not two as I had been told.  Had I not verified my control total before doing additional analysis, I would have wasted a lot of time.
  • Min/Max values:
    • All transactions should be in the previous fiscal year. The Min date should be on or after the start of the fiscal year and the Max date should be on or before the end of the fiscal year.  I found billing transactions that were in the wrong fiscal year.
    • All credit card (p-card) transactions should be less than $500.00. Not only did I find amounts greater than $500, but I also found numerous credits being processed – which led to a fraud discovery.
  • Highest/Lowest values:
    • The Maximum value was significantly higher than the second highest value. It ended up being an error where the invoice number was entered into the amount field – making the amount much larger than it should have been.
  • Number of Positives/Negatives and Zero values
    • Found hundreds of thousands of transactions with zero values, which led to a training issue. A/P clerks could use bulk entry forms for simple invoices.  The form automatically generated a blank invoice entry form with 100 lines.  If the clerk did not type "Cntl –" before saving the entered invoice details, the blank (unused) lines remained and were processed as invoices with zero value.  This was causing the system to run slowly and generating exception reports.
    • Found several hundred negative "quantity received" amounts. The receipting clerk at a warehouse was entering negative receipts after entering the correct receipt amount.  The invoice was paid based on the correct receipt amount, but the negative receipt entry (without an invoice number) reduced the quantity in inventory and allowed her to steal the items.
  • Mode value
    • This identified a fraud where an employee was entering meal charges when not on travel status. Amounts under $10.00 did not require approval.  The employee entered hundreds of meal claims at $9.99, which were picked up by the mode option.
    • The most often occurring date was just before bonuses were determined. Salespersons were entering a large number of fictitious sales on the day before performance bonuses were calculated.  The sales were reversed at later dates.
    • The most frequently occurring amount for contracts was just under the "must go competitive" value. Managers were establishing contracts at $49,900 – which was below the $50,000 limit for sole-sourced contracts.

In general, STATISTICS lets me see what the data looks like from a high-level perspective.  However, it can also quickly identify anomalies (e.g. negative values when all transactions should be positive; values outside the expected range; unusual maximum values, etc.).  This helps with determining risk, sampling strategies, identification of control weaknesses, materiality amounts, etc.  When combined with STRATIFY, I can get an overview of the ranges (or strata) of the data, which provides even more useful information.

CLASSIFY with Statistics Option

I have been using ACL software for more than 25 years – back before the Windows version even existed.  It wasn't the first audit software that I had used, and I was pretty sure it wouldn't be the last audit software I would use.  Previously I had used AutoAudit, Audit Analyzer, Decision Analyzer, IDEA, ReportWriter, Easytrieve and various spreadsheet packages (Multiplan, Visicalc, and Lotus 1-2-3).  I told myself that I would give ACL a try until it couldn't do what I needed and I found another package that could.  As I mentioned, that was more than 25 years ago – and I am still putting ACL through its paces and looking at other software.

Sticking with a particular software product for this length of time has its advantages.  You build up a certain level of expertise (mainly by trying, failing and trying again).  I must have about 10,000 scripts to do various analyses that I have performed in the past.  However, it also has a distinct disadvantage: you have to keep up with the new functionality being released.  This has become more of an issue of late, as major releases seem to be occurring every six months.  I still have not spent the time and effort to understand REGEX() and the other REG-type functions.  The new interfaces and ODBC connectors, Python and R, and even EXECUTE (a return of the RUN functionality) are challenging me to keep abreast of the additional power and flexibility of the AN product.

In addition, if you throw in AX and GRC – with working papers, exception reporting and monitoring, storyboarding, etc. – I am hopelessly behind in my use of ACL.  Fortunately, I have retired – well at least semi-retired – and can rest on my laurels.  I only take jobs that require knowledge of AN and in areas like REGEX() I can post to the Peer Community and get instantaneous help from around the world.

In 2001, I published a book called the "Fraud Toolkit" which I updated in 2009 under the title "Fraud Analysis Techniques Using ACL".  The books included more than 50 scripts to search for fraud and a self-study course on scripting with data and exercises.  At the time, certain functionalities did not exist within the ACL command set, so I built scripts to perform the desired analysis.  My scripts included: Crosstab, Benford Analysis, Frequently Occurring Numeric Values, and Min/Max Ratio analysis.  Since then ACL has added these functionalities: Crosstab – added 2010; Benford analysis – added 2010; Frequency of numeric values (Classify on a number) – added 2015; and Min/Max ratio – added 2015.  (Note: some of the dates may be incorrect – it is hard to remember the exact versions in which ACL commands were added, but it was years after the toolkit was developed.)  Many of you are aware of the Crosstab and Benford commands in ACL, but you may not have recognized the addition of the Min/Max ratio.

The Min/Max ratio was accomplished by a script that determined the minimum, maximum and average value for any unique value of a character field.  For example, the minimum, maximum and average value of credit card transactions for each credit card number.  The script was not particularly complicated, but it did involve the use of GROUP-END logic.  Recently (2015) ACL added the ability to find the Minimum, Maximum and average values when you perform Classify, Summarize and Stratify.

This is the functionality I would like to discuss, with examples, today.  More than 15 years ago I thought it was a useful analysis and devoted hours to developing my Min/Max script (I also had a Max/Max2 script).  Now it is available to all users but, like me, you might not be aware of the functionality of all the enhancements.

In my first example, I will look at invoices paid to vendors.  In my case, the file I was working with was fairly small – less than 18,000 invoices paid to 6,000 different vendors.  As in the past, I performed various analyses such as STATISTICS to find the min, max and average invoice amount; STRATIFY to look at the ranges of invoice amounts; SUMMARIZE to identify the vendors with the largest number and total dollars of invoices; and of course DUPLICATES.  All of these commands have provided enormous value and helped me to understand our business and the A/P process.  For example, STRATIFY showed me that 63% of the invoices were for amounts less than $500.00 and totaled less than 0.5% of the total amount.

Stratify on Amount

This caused me to question the efficiency of our p-card program for small dollar purchases (e.g. under $500).  It also showed me that a small number (59) of invoices were for large amounts (>$1M) and accounted for 83.47% of the total dollars – guess which invoices we were going to review?
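
For reference, the stratification above can be produced with something along these lines – a minimal sketch in which the table name, field name and interval settings are placeholders to be adjusted to your own data:

OPEN Invoices
COMMENT band the invoice amounts and subtotal them; adjust MINIMUM, MAXIMUM and INTERVALS to suit the data
STRATIFY ON Amount SUBTOTAL Amount MINIMUM 0 MAXIMUM 2000000 INTERVALS 10 TO SCREEN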

But the addition of the Statistics option to the CLASSIFY command provided insight into the 6,000 vendors.  This was something that was not available before, unless you had my Min/Max ratio analysis script.

Classify on Vendor with Statistics Option

There were 13 vendors where the total number of invoices was greater than 10 and the amount of the maximum invoice was more than 75% of the total amount of invoices.  In other words, one invoice was significantly larger than the other invoices for the same vendor.
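
A sketch of how that vendor-level view can be produced is shown below.  The table, field and output names are placeholders, and the output field names used in the filter are assumptions – verify them against the column names in your own CLASSIFY results:

OPEN Invoices
COMMENT classify on vendor, subtotal the invoice amount and request the min/max/average statistics per vendor
CLASSIFY ON Vendor_No SUBTOTAL Amount STATISTICS TO T_Vendor_Stats
OPEN T_Vendor_Stats
COMMENT vendors with more than 10 invoices where the largest invoice is more than 75% of the vendor total (hypothetical output field names)
SET FILTER TO COUNT > 10 AND Max_Amount > 0.75 * Amount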

Looking at the second vendor (#241380), the largest invoice amount is $76,550.00; the average amount is $4,759.63; and the maximum invoice is 89% of the total for that vendor and more than 16 times the average amount.  Upon review we determined that the actual invoice amount was $765.50 and the amount had been entered incorrectly.  The same was true for vendor #301549, with a maximum amount of $16,975.00 (78% of the total) which should have been $1,697.50.

Vendor #333270 was a different problem – the $94,333.09 was actually owed to vendor #333207.  When this vendor complained that they had not been paid, we paid them with a late payment interest penalty.  The system did not recognize it as a duplicate because the vendor numbers were different.

The three vendor results below represent another type of problem.  Either we paid the wrong vendor or paid the wrong amount – but the vendor returned the check.

 

In each of the above cases, you can see that the absolute value of the minimum amount equals the maximum amount, e.g. Vendor #276053 had a minimum of ($103,000.00) and a maximum of $103,000.00.  The vendor was not owed $103,000.00 and returned the check.

I hope that you can see the value of the Min/Max functionality.  But just to make sure, let me give you some additional examples of where I have used it:

  • Payroll – Classify on Job Category Subtotal Amount – identified people in job categories that were being paid incorrect (inflated) amounts (payroll fraud)
  • Medical Claims – Classify on Procedure Code Subtotal Amount – identified doctors who were charging 2-3 times the normal amount for medical procedures (medical claim fraud)
  • P-Card – Classify on Card Number Subtotal Amount – identified cardholders with a single large transaction (cards had been compromised)
  • P-Card – Classify on cardholder position Subtotal Amount – identified unusual transactions for admin clerks (a clerk was using the p-card for personal purchases)
  • Payroll Overtime – Classify on Job Category Subtotal Amount – identified categories where employees were not entitled to, but were receiving, overtime, and individuals who were making false overtime claims
  • Contracting – Classify on Contract Type Subtotal Amount – identified sole-source contracts that should have been competitive
  • Contracting – Classify on Supplier Subtotal Amount – identified cases where certain suppliers received large contracts at year-end – bypassing procurement controls
  • Journal Vouchers – Classify on Period Subtotal JV Amount – identified abnormal JV’s just prior to production of quarterly financial statements.
  • Vacation – Classify on Employee Type Subtotal Vacation Days – identified an employee who coded 40 hours of vacation as 5 (i.e. days) and the system only subtracted 5 hours from his vacation balance. Also identified people who were claiming large amounts of vacation.  System controls were not working effectively and employees could take more vacation days than they had in their balance (i.e. ended with a negative balance).

So, even though I have not kept up with all the enhancements offered by the more recent versions of ACL, I encourage you to do so; and I hope you can see the benefit of the Statistics option on Classify and/or Summarize.  I also hope that you will post examples of where you have used this command parameter.

Extract Records Matching Criteria in another File

A common problem that presents itself when doing analysis is the need to extract detail records that match a list of values (criteria) in another file. This can range from simple to more complicated, primarily depending on the nature of the match.  The following describes three approaches, based on the objective of the analysis.

Simple question:

A simple question of this type is where we have a file with detailed transactions that we want to compare to a file that has a list of values (e.g. Payroll).  If we have a file of all pay transactions (details) and a list of employees (Employee file) who should be paid (criteria) and we want to know if every pay transaction has a matching employee record, then you can do a JOIN or RELATE with the detailed payroll transactions as your Primary (or Parent) file.  ACL allows you to do a many-to-one JOIN or RELATE.  This will give you all of the detailed payroll transactions where there is a matching employee (Matched Primary); or you can get all of the payroll transactions where there is no matching employee (Unmatched Primary).  Finally, you can also obtain both matched and unmatched (All Primary) – which is what the RELATE will give you as well.

If your results contain both matched and unmatched pay records, simply apply a filter to determine whether a given record in your results is matched or unmatched.

First, to identify the matched records: this is the case where the employee number from the payroll details (EmpNo) has a match with an employee number (EmpNum) from the employee file.

  • In a JOIN, EmpNo is the same as EmpNum – filter: EmpNo = EmpNum
  • In a RELATE it would be similar, but since the Parent and Child files are physically separate, we need to include the table name in the filter for child fields – filter: EmpNo = Employee.EmpNum

Next, to identify the unmatched records: this is the case where the employee number from the payroll details (EmpNo) does not have a match with an employee number (EmpNum) from the employee file.

  • In a JOIN, EmpNo does not have a match with an EmpNum – filter: EmpNo <> EmpNum
  • In a RELATE it would be similar, but again we need to include the table name in the filter for child fields – filter: EmpNo <> Employee.EmpNum
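
Here is a minimal sketch of the JOIN described above, using the same key field names; the table and output names are placeholders, and PRESORT/SECSORT simply sort the two files on their keys first:

OPEN Pay_Trans
OPEN Employee SECONDARY
COMMENT many-to-one join that keeps all primary (pay) records, matched or not
JOIN PKEY EmpNo FIELDS ALL SKEY EmpNum WITH EmpNum PRIMARY PRESORT SECSORT TO Pay_With_Emp OPEN
COMMENT matched pay records; use EmpNo <> EmpNum instead to see the unmatched ones
SET FILTER TO EmpNo = EmpNum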

More Complicated Question:

The previous example is a many-to-one match.  What if the Secondary or Child file has duplicate records (records with the same key value)?  For example, suppose the Employee file has not only the employee number of everyone who should be paid, but also the rates of pay for different periods of time (From-To dates).  So an employee could have multiple records if their rate of pay changed.

The employee file might look like this:
Empnum        Pay rate           From Date          To Date
12345             1,200.00              Jan 1, 2016           Jul  31, 2016
12345             1,225.00              Aug 1, 2016          Dec 31, 2016
12345             1,250.00              Jan 1, 2017           Dec 31, 2017

The pay file might look like this:
   EmpNo                 Pay Amount               Pay Date
12345                    $1,200.00                      Jul 15, 2016
12345                    $1,225.00                      Aug 15, 2016
12345                    $1,225.00                      Sep 15, 2016
12345                    $1,250.00                      Oct 15, 2016
12345                    $1,250.00                      Nov 15, 2016
12345                    $1,250.00                      Dec 15, 2016

The objective might be to find not only people who were paid and should not have been (as in the previous example), but also people who were not paid the correct amount.  This means we have to match the pay details not only with the correct employee record (EmpNo = EmpNum) but also the pay period with the correct From-To dates.  That means that we have a many-to-many match.  We cannot use RELATE since it can only perform a many-to-one match.  JOIN, however, does have a many-to-many option.

So we would perform a JOIN where the pay transaction detail file is our Primary file and the Employee file (with the pay periods and rates) is our Secondary file.  A possible issue with a many-to-many JOIN is that the resulting file can be quite large.  For example, if I had 10,000 employees each with 26 pay records (260,000 detailed records), and an employee file where each employee had at least 10 sets of pay rates (10,000 * 10 = 100,000 employee records), and I performed a many-to-many JOIN, the resulting file would have 2,600,000 records (260,000 pay records * 10 pay rate records per employee).

You have two options: perform the JOIN and then filter the results to find pay records where the Pay Date falls within the From-To Dates; or apply the filter to the JOIN command.  If the resulting file is not going to be overly large, I suggest the first option: perform the JOIN to a temporary file; apply the filter; and EXTRACT to your final results file.  However, you can also apply the filter directly to the JOIN command.  In either case, the filter will isolate the detailed pay transactions for each employee that has a matching record in the Employee file AND the Pay Date falls within the From-To Dates.

Temporary file: Filter IF BETWEEN(Pay_Date, From_Date, To_Date); and then EXTRACT to the final results.

Filter applied to JOIN command: JOIN MANY PKey EmpNo SKey EmpNum IF BETWEEN(Pay_Date, Employee.From_Date, Employee.To_Date) – note: you must include the table name for the secondary fields.

With either approach, the results will contain pay records where the matching pay rate is included.  You can now filter Pay_Amount <> Pay_Rate to identify people who were paid an incorrect amount based on the pay day and pay rate for that period.
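
Putting the pieces together, the second option might look like the sketch below.  The table, field and output names follow the example above and are otherwise placeholders:

OPEN Pay_Trans
OPEN Employee SECONDARY
COMMENT many-to-many join that keeps only the pairs where the pay date falls within the rate period
JOIN MANY PKEY EmpNo FIELDS ALL SKEY EmpNum WITH Pay_Rate From_Date To_Date PRESORT SECSORT IF BETWEEN(Pay_Date, Employee.From_Date, Employee.To_Date) TO Pay_With_Rates OPEN
COMMENT pay records where the amount paid does not agree with the rate for that period
SET FILTER TO Pay_Amount <> Pay_Rate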

Even More Complicated Question:

In essence, the "more complicated question" was still fairly simple because we were performing the JOIN on a field that equalled the value in the Secondary file (i.e. where the key fields were equal: EmpNo = EmpNum).  It becomes "even more complicated" when the two files do not have key fields that match.  For example, I want to identify all detailed transactions where the description field contains one or more of a list of keywords in my exceptions file (e.g. all P-Card transactions where the merchant name contains a keyword such as bar, hotel, spa, casino, etc.).  I can't perform a JOIN where Merchant Name equals the keyword value because they are not identical ("Dave's Bar and Funspot" won't match with the keyword "Bar").

Instead I need to EXTRACT the records where the merchant name contains a keyword.  To do this manually, I would:

  1. Open the Keyword file
  2. Find the first keyword value (“Bar”)
  3. Open the P-Card transaction file
  4. Apply the filter FIND(“Bar”, Merchant_Name)
  5. EXTRACT all the records that meet the filter to a Results file
  6. Open the keyword file
  7. Get the next keyword value (“Hotel”)
  8. Open the P-Card transaction file
  9. Apply the filter FIND(“Hotel”, Merchant_Name)
  10. EXTRACT all the records that meet the filter to a Results file
  11. Continue until all keyword values have been selected and the filter and extract performed

Simple if you have a short list of keywords, but what if you have 1,000 keywords?  Why not have ACL do the keyword value selection, apply the filter and perform the extract – over and over again until all keywords have been applied?

The most common approach is to use a main script that calls a subscript which applies a filter and performs an extract of the records that meet the filter.  So if I have four keywords (bar, hotel, spa, and casino), the main script will call the subscript four times.  The first time, the subscript applies the filter FIND("Bar", Merchant_Name) and EXTRACTs all the records that meet the filter to a Results file.  The next time it is called, the subscript gets the next keyword value ("Hotel"), applies the filter FIND("Hotel", Merchant_Name) and EXTRACTs all the records that meet the filter to the Results file.  It continues in this way until all the keyword values have been selected and the EXTRACTs performed.

Question #1: How many times does it call the subscript? Answer: as many times as there are keyword values – which can be determined by performing a COUNT on the keyword file.

Question #2: How does it get the 2nd, 3rd, and 4th keyword? Answer: by using a counter (v_Rec_ctr) and locating the record that equals the counter (LOCATE RECORD v_Rec_ctr).  The counter is incremented each time the subscript is run (v_Rec_ctr = v_Rec_ctr + 1).  Assuming the P-Card file is called PCard_Trans and the Keyword file is called Key_Words, which contains a field called Keyword_Value (i.e. has "Bar", "Hotel", etc.), the scripts look like this:

COMMENT ***** Main script Extract_PCard_Recs1 *****

COMMENT **** close all tables and delete previous results *************
CLOSE PRIMARY SECONDARY
DELETE PCard_Results.FIL OK
DELETE FORMAT PCard_Results OK

OPEN Key_Words

COMMENT count the number of records in the Key_Words file (i.e. the number of keyword values)
COUNT
COMMENT the number of times to run the subscript
v_Num_Keywords = COUNT1
COMMENT this tells me which record in the Key_Words file to read
v_Rec_ctr = 1
COMMENT initialize the variable that stores the keyword value from the Key_Words file – make sure there are enough blanks to hold the longest keyword value
v_keyword = BLANKS(30)

COMMENT execute the subscript until all keywords have been read
DO SCRIPT Extract_PCard_Recs2 WHILE v_Rec_ctr <= v_Num_Keywords

COMMENT open the final results file after the subscript has finished
OPEN PCard_Results

COMMENT ****** Subscript Extract_PCard_Recs2 ********

OPEN Key_Words

COMMENT position ACL at the next Key_Words record (record 1 the first time it is run)
LOCATE RECORD v_Rec_ctr

COMMENT set v_keyword to the value in Keyword_Value (e.g. v_keyword = "Bar")
v_keyword = Keyword_Value

OPEN PCard_Trans

COMMENT apply the keyword filter (e.g. FIND("Bar", Merchant_Name))
SET FILTER TO FIND(ALLTRIM("%v_keyword%"), Merchant_Name)

COMMENT extract the records meeting the filter, plus a field containing the keyword value that caused the record to be extracted
EXTRACT FIELDS ALL v_keyword AS "Keyword" TO PCard_Results APPEND

COMMENT increment the record counter to get the next record in the Key_Words file
v_Rec_ctr = v_Rec_ctr + 1

Another option would be to combine your keyword values into a single string (variable) and use this variable against your transaction file.  However, it depends on the length of each keyword value – will all the keyword values fit into 10,000 characters when you include a ";" separator?

SET SAFETY OFF
COMMENT build one string containing all of the keyword values, separated by ";"
OPEN Key_Words
COUNT
v_Max_Ctr = COUNT1
v_Keyword_list = BLANKS(10000)
v_found = F
v_ctr = 1

GROUP
v_Keyword_list = ALLTRIM(v_Keyword_list) + ALLTRIM(Keyword_Value) + ";"
END

OPEN PCard_Trans

COMMENT for each transaction, loop through the keyword values until a match is found or the list is exhausted
GROUP
LOOP WHILE v_ctr <= v_Max_Ctr AND v_found = F
v_found = T IF FIND(SPLIT(v_Keyword_list, ";", v_ctr), Merchant_Name)
EXTRACT FIELDS ALL SPLIT(v_Keyword_list, ";", v_ctr) AS "Match_Code" IF v_found = T TO PCard_Results
v_ctr = v_ctr + 1
END
COMMENT reset the counter and the found flag for the next transaction record
v_ctr = 1
v_found = F
END

OPEN PCard_Results

SET SAFETY ON

**************

Other common examples of this type of analysis are:

  • Amount charged compared to master file with products and different prices at different times
  • Amount paid compared to Contract file – with products and different prices

Basically, anything where you want to compare detailed transactions with a historical record that includes changes (e.g. different prices for specific time periods).

There are other ways to accomplish the analyses discussed above, but in every case it must start with a good understanding of what you want to accomplish.  Such as:

  • Do I want to see all the records or just those that don’t match?
  • Do I have duplicates in the Secondary file or not?

I hope this has helped you to understand three ways to extract records meeting criteria that are contained in a second file.

 

The Data Analytics Conundrum

Study after study has shown that data analytics is more effective and efficient than traditional audit techniques at detecting risk and identifying control weaknesses, non-compliance, and inefficient business processes.  Chief Audit Executives (CAEs) have repeatedly stated that data analysis expertise is a much-needed skill in internal audit, and IIA surveys of software over the past 10-15 years have rated data extraction, data analysis and analytical software as critical tools for effective audit organizations.  Why, then, do more than half of internal audit organizations still rate their analytic capability as poor or needing improvement?

I have been an internal auditor for 30 years and have been a user and advocate of analytics for 28 of those.  (It was during the first two years of auditing that I realized I could do a better job by analyzing data.)  I have been asked hundreds of times, "How can we develop and maintain an analytics capability?"  Too often CAEs give up without even trying ("we are doing good audits now, why change things?") or make only a feeble attempt at it ("let's get a programmer right out of college and have them develop analytics for us").

The reality is change is difficult.  As auditors we are constantly making recommendations to help others improve, change, do more, etc. but we ourselves stick with traditional auditing tools and techniques.  Perhaps what is needed is a taste of our own medicine.

In order to successfully implement analytics and integrate data analysis in the audit process you must have a formal development and implementation plan.  The plan must address the need for sufficient people (appropriate level and number), technology (primarily software) and processes (you need to change the way you currently perform audits).  It must also have a project manager who will be held accountable for delivery on the plan, clear objectives, milestones, and a reporting requirement – to the CAE and/or the audit committee (but only if they fully support the adoption of analytics).

Analytics also requires an understanding of the business processes and the data supporting them, and a solid grasp of internal auditing processes and requirements (e.g. application of the IIA standards).  None of these will be provided by junior-level audit or programming resources.  Rarely will all these skills exist within one individual, and they might not already exist in your audit organization.  Rather than being an impediment, this should be seen as an opportunity: an opportunity to obtain the right resources and task them with a clear objective.  If you are lucky and have the appropriate type of resources in your organization – this is ideal.  Existing resources should already know the business processes and have the internal audit skills, and perhaps some analytical capabilities.  However, they will need to be supported by training and software, and given sufficient time to develop the skills and implement the functionality.  Most importantly, they will need to be dedicated to analytics.  Otherwise you end up pulling valuable resources away from other priorities and tasking them with something in addition to what they are already doing, or settling for a subset of the required skills.  In either case, it is a recipe for failure.

A statement I hear often is, "We are a small audit organization, and we can't afford to dedicate a person to analytics".  It is usually used as a rationale for not using data analytics.  My response is something along the lines of "Because you are small, does that mean you can afford to be less efficient and effective?"  The reality is, unless you are using analytics, you are not addressing risk, testing controls, examining compliance and improving business operations to the extent that you could be.  If you are going to decide not to use data analytics, at least make it an informed decision.  Examine the costs and benefits and then decide.  Don't simply look at your existing resources, which are most likely being used to the maximum, and decide that you can't take on anything else.  It is not a question of doing more with the same resources.  Ask yourself if there are things that you don't need to be doing or if there are better ways to do what you need to do.  Also look at what you are not doing and determine the value added if you could do those things.  Then decide if you can afford not to be using data analytics.

I also get asked about audit software: "Which package should I use?"  This is something that should be decided based on your requirements and your short- and long-term plans for analytics.  I encourage you to fully utilize existing capabilities such as standard reports, and you can definitely start with Excel, but don't be limited by what you have – think about what you need.  Find out what other audit organizations are using.  For more than a decade, the IIA magazine, Internal Auditor, conducted a survey of software usage and the results were printed in the August issue.  The results have consistently shown that ACL is the most-used audit software for data extraction, data analysis, and fraud prevention and detection in the world, and has been for almost 15 years.  It is the software I use, so I may be biased, but just because you are biased doesn't mean you are wrong.

In conclusion, you should be using data analysis.  You will need to plan and manage your adoption of analytics.  It will take time, resources, and technology.  It has to be integrated in the audit process (planning, conduct and reporting) and developed with an understanding of the business processes and the underlying data.  It is easy to do wrong, but worth doing right.

Why did I title this "The Data Analytics Conundrum"?  Because I don't understand why we are still talking about the "Why, How, and What" of data analytics instead of simply getting on with the job.  Stop asking questions about analytics – get off the fence and actively pursue it.  The successful implementation of analytics will add significant value to the internal audit function and to your ability to support the goals and objectives of senior management.

Year 31 – Incomplete Property Tax Data

Even though I am retired, I still get to do interesting analysis – perhaps even more so because I get to pick and choose what I want to do.  In this case, I was asked to assist with an audit being performed by a municipal audit function.  A large city had an internal audit group that was interested in auditing revenue – primarily from property taxes.  One of the key objectives was to determine the completeness of the property tax data (i.e. were all properties included in the property tax database?).

I suggested examining other external data sources, including water meter, building permit, and hydro data.  The city was not responsible for issuing and collecting hydro bills, so only the first two sources were available – but even that was a good start.

Using the water meter data we were able to identify all addresses with water meters and calculate the water usage.  We had to be sure to use the correct information because the data also had a billing address, which could be different from the address where the meter was located.  The data also had the meter install date and the first and last dates the meter was read.

The property tax data also had a billing address and the physical (building location) address.  But before we could compare the property tax data with the water meter data, we had to normalize the address fields.  Even the address entries within each system were inconsistent, with variations such as "St." or "St" or "Street", and "Unit 12 123 Main St" or "Unit #12 123 Main St" or "12 – 123 Main St" or "123 Main St Unit 12" or "123 Main St Unit #12".
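
Most of the normalization was done with computed fields built from ACL's string functions.  A simplified sketch is below – the table and field names are placeholders, and a real cleanup needs far more substitution rules than the three shown here:

OPEN Property_Tax
COMMENT hypothetical cleanup field: upper-case, trim and standardize a few common address variations
DEFINE FIELD Clean_Address COMPUTED REPLACE(REPLACE(REPLACE(UPPER(ALLTRIM(Address)), " STREET", " ST"), " ST.", " ST"), "UNIT #", "UNIT ")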

In the end we were able to match the water meter data with the property tax data to identify properties with buildings that were not paying property tax.  While only a couple of hundred properties were identified, this still represented a total of over $1M in missing revenue.

The next source of data used to test the completeness of the property tax information was the building permits.  Building permits were issued for new buildings and for major renovations of existing buildings.  The data included the address, the permit issue date, the inspection date, and the status (e.g. completed).  Again, the only common field was the address, so we had to clean the data before doing the comparison.

Using the permit and property tax data we were able to identify new and existing buildings that were not paying property tax.

The next risk that we decided to address, after looking at the data, was the timeliness of the property tax data.  We had noticed significant delays between the issuance of the permits and the inspection – sometimes several years.  There were also long periods between the water meter install and the date of the first reading.  We also noticed inspection and water meter reading dates that were years into the future.

The auditors determined that some of the dates were important and had a bearing on the amount of property tax paid in the year.  For example, if a building was not completed until July 1st, then property tax was only 50% of the yearly amount.  The start date for the tax calculation was the date of the final inspection.  Delays in the inspection meant less property tax, and dates in the future meant no property tax was collected.

Again, there were only a few hundred issues with dates, but the results included one case of fraud and hundreds of thousands in lost revenue.

In the end the audit identified issues around the completeness of the property tax data, resulting in lost revenue, and made recommendations that would integrate the separate data sources.  We also identified data entry control weaknesses with respect to key dates and made recommendations that included range checks and aging reports.

ACL Commands: FILTER, EXPRESSIONS, DUPLICATES, AGE, STATISTICS, INDEX and RELATE

Lessons-Learned: even though the audit had a very specific objective – assess the controls ensuring the completeness of the property tax data – there was a multi-phased approach.  During the initial planning phase, the idea of testing completeness by comparing property tax data to other data sources led to a data analysis approach that had not initially been considered.  Despite challenges finding fields to use for the Relate command, perseverance and the use of scripts from ACL ScriptHub, such as "Address Clean", made the task easier.

Another important lesson was the examination of the water meter and building permit data prior to performing the Relate in ACL.  It is important to assess the integrity of, and to understand, the data before performing the Relate.  Had we simply performed a Relate using the address field and not examined the data first, we would not have identified the issues with respect to the date fields.

When using any source data, even if only to test the completeness of another system, you should analyze the source data.  High-level summaries such as Classify, and simple Aging and Statistics on date fields, can be very informative and may identify additional risks to be addressed during the audit.  Not only did we identify control weaknesses around key date fields – and a fraud involving deliberate future-dating of inspection data in exchange for kickbacks – but the high-level summaries identified duplicate records which had to be addressed before we could perform a proper Relate.

Simply having access to data provides an opportunity to assess risks and controls in the system.  My consulting contract was over, so I am not sure what happened after I left, but I imagine that the auditors will be examining the building permit and water meter processes in future audits.  Based on our limited review of the data, there are control weaknesses and fraud risks in these business processes that should be examined.

Note: In thinking back on the analysis performed, I wish I had performed an additional analysis examining unit and apartment numbers for addresses with multiple units.  Units or apartments that are not in the property tax data (e.g. Apt 101, 102, 103, 104, 105, 201, 202, 203, 205 – possibly missing Apt 204; or Unit 10, 12, 18 – possibly missing 14 and 16) could be instances of incomplete data.  I do realize that this would likely involve more coding and manual follow-up to eliminate the false positives, but it could be worthwhile.

Adding Value to Compliance Audits – Part 2

The following post is Part 2 of "Adding Value to Compliance Audits".

Given a good understanding of the current level and sources of risk, the next step is to look at the requirement for, and the adequacy and effectiveness of, the controls to mitigate the risk.  This requires an understanding of the cause and source of the risk and the operation of the control.  Is the control still required?  Does the current control address the root cause?  Are there better ways to mitigate the risk?  By answering these questions the audit may identify unnecessary controls, ineffective controls, or better controls to address the current risk.  All of this may reduce the cost of compliance while improving risk mitigation.  Recommendations such as automating a control can save time and effort and be seen as a real value-add.

The next step would be to verify that the control activities are being performed (i.e. compliance).  However, you are not done yet.  If you find non-compliance it is still not sufficient to recommend “Do A”.    Audit recommendations should address the root cause.  Identifying a lack of compliance is not the same as determining why management is not complying (i.e. determining the cause).  Was management aware of the requirement? Are they capable of complying? Are there compensating controls that have been implemented?

To determine the cause of non-compliance, asking "why" (usually several times) is often sufficient.  You should also determine the impact of non-compliance.  Then, instead of "Do A", audit can provide a rationale and make a recommendation that actually assists management in complying.

The next step is to ensure that you are doing the audit right – this means maximizing your use of all the resources available to you, including analytics.  Data analytics can be defined as the application of analysis techniques to understand business processes, to identify and assess risks, to test controls, to assess efficiency and effectiveness, and to prevent, detect and investigate fraud.  Data analytics techniques, ranging from simple categorization and stratification to sophisticated predictive and prescriptive models, can assist organizations in focusing their risk responses on the areas in which there is higher risk – including compliance risk.

Existing levels of risk can be assessed and trends identified to determine if the risk is increasing or decreasing.  For example, environmental compliance could examine spills (number and quantity), cleanup costs, and lawsuits (quantity and value); production compliance could examine material, personnel, maintenance and operational costs.  By examining measures over several months or years, a trend line can be produced to assess the effectiveness of mitigation efforts and identify emerging risks.

Rather than relying solely on substantive tests, the effectiveness of controls can also be tested with analytics.  In addition, you can look at trends that will have positive or negative effects on compliance.  For example, environmental compliance can examine the control over the purchasing of hazardous materials – ensuring that the purchase quantities match requirements – thereby avoiding environmental compliance issues around disposal.   Compliance with hiring practices could review staffing methods, staffing rates (by gender, by race, etc) to ensure proper procedures are being followed and address employment equity requirements before they become non-compliance issues.

Sometimes compliance with a poor control can increase risk and dysfunctional behaviour, and cultural issues can make enterprise-wide compliance difficult for global companies and increase risk.  Doing the right compliance audit – not simply asking "did we do A?" – and doing it efficiently and effectively can result in significant value to the organization and remove the "got ya" stigma of compliance audits.  However, it requires auditors to re-examine the compliance-related risks and controls and to use analytics.

Richard Chambers, President of the IIA, identified his top ten imperatives of the decade, which highlight the challenges that auditors must face to provide value to management.  These include enhancing proficiency with data mining and analytics; providing assurance on risk management effectiveness; and enhancing and leveraging a continuous focus on risk.  These challenges can be applied to all types of audits, from compliance to operational.  He encouraged auditors to look at business objectives and the risks to the achievement of those objectives, and to design audits that provide assurance over the governance, risk management and control/compliance frameworks put in place by management.  A compliance audit should not be any different: it should identify and assess risk, and examine the effectiveness and efficiency of the controls to mitigate the risk.  By doing so, it will add value to the company as well as provide assurance to senior management.

Accounts Payable Example

In an Accounts Payable audit there was a requirement to formally authorize invoices for payment by stamping and signing the original invoices.  The stamp and approval verified that goods/services had been received in accordance with the contract and that the invoice could be paid.  Falsifying this approval had serious legal repercussions – including up to 5 years imprisonment.

The audit covered numerous accounts payable offices spread across the globe.  As a part of the audit we verified that invoices had been properly approved i.e. stamped and signed by the authorized approval authority.  At several locations we noted that the invoices were not being properly authorized (stamped and signed).  But the reasons for non-compliance differed.  In one small office (AP1) they were unaware of the requirement.  We identified an underlying problem with corporate communication of financial regulations, including a lack of translated procedures.  In another office (AP2), they had been told by the legal department that the stamp that was being used did not contain the appropriate wording and they should immediately stop using the stamp and obtain the official corporate stamp with the correct wording.  The local A/P manager had been trying for months to obtain an official corporate stamp – he even showed us numerous emails – to no avail.  At another location (AP3) they had converted to electronic invoices and authorization – so they were no longer stamping and signing invoices.

A compliance audit that did not ask "why" might easily have issued the simple recommendation "stamp and sign all invoices" – adding zero value to the A/P process.  An audit focused on adding value would have produced very different recommendations.

Starting with the risk: The control was put in place to ensure that we were not paying for goods/services we did not receive; and that goods/services were of the quality, quantity, and price agreed to in the contract.  Given the nature of decentralized contracting, the risk still existed and a control was required.

The second step would have been to determine if the control was effective and efficient.  At AP1, the control was not working because of a problem in the corporate communication area – we had acquired a new overseas operation and regulations had not been translated.  This required a different recommendation; one that would address the root cause – corporate communication – and did not penalize the local A/P manager.

At AP2 non-compliance was attributed to a breakdown between the legal and finance departments.  Legal was reviewing all official stamps and finance was responsible for updating, revising and supplying them.  Unfortunately, the two departments were not coordinating their work and finance was unaware of the problem with the invoice authorization stamp.  This recommendation addressed the communication between departments.

At AP3, the physical stamping and signature of the invoice had been replaced by an automated approval.  Recommending compliance with the current regulation would be ludicrous.  However, the automated controls needed improvement to verify the authority of the person providing the electronic approval.  As a result, a recommendation was made to address the weakness in the automated control.

The result of the compliance audit improved the corporate communication processes, interdepartmental activities, and IT controls.  The recommendations were seen as having value – much more than “Stamp and sign all invoices” would have received.

In addition, the audit of the efficiency and effectiveness of the A/P process can benefit from the use of analytics.  The controls over approval can easily be tested by matching the electronic approvals with a database of approvers.  Examining actions by user can identify instances where separation of duties was not achieved.  Totals by payment terms or payment method can quickly highlight inefficient practices or even fraud.  The resulting recommendations can improve compliance and reduce business risks while adding value.
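
A hedged sketch of the kind of tests described above – every table and field name here is invented for illustration:

OPEN Invoice_Approvals
OPEN Approvers SECONDARY
COMMENT electronic approvals with no matching record in the table of authorized approvers
JOIN PKEY Approver_ID FIELDS ALL SKEY Approver_ID UNMATCHED PRESORT SECSORT TO T_Invalid_Approvals OPEN

OPEN Payments
COMMENT counts and totals by payment method to highlight inefficient or unusual practices
CLASSIFY ON Payment_Method SUBTOTAL Amount TO SCREEN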

Missing items

I have been away on vacation and now I have exams to mark and Christmas preparations to finish.  So, I must confess that these examples are fillers as I have been too busy to write much else these days.  However, I do still feel that they have value.

Sometimes fraud is detected through the identification of missing items or transactions; in other cases unexpected transactions are found, highlighting the fraud.  In some fraud investigations, identifying what is not there can be as important as finding out what is there.  Missing accounts receivable payments, branch offices not reporting revenue, and unrecorded receipts of goods are just a few of the many symptoms of fraud.  In a similar fashion, duplicate invoice payments or payroll check numbers may be the result of fraudulent activity.  The following two examples illustrate the utility of identifying missing items in a sequence, or unexpected consecutive items.

Case Study: Free Calls

Employees were permitted to use their office phone to make long distance calls as long as they paid for the charges.  From the employees' perspective, it was a good deal since the company had a special discounted rate with the telephone carrier.  From the company's perspective, it was a good deal too: employees liked the arrangement, and the increased number of calls allowed the company to negotiate a better long distance discount rate.  Every week Caroline in accounts payable reviewed the long distance bill and identified non-company numbers called.  The person from whose phone the call was made was asked to complete a 'long distance call' slip detailing the date and cost of the call.  Each quarter, the employees were required to submit the long distance call slips and reimburse the company for the 'personal calls' portion.

One month, William accidentally lost a long distance call slip and, as a result, only reimbursed the company for the slips he submitted.  Later, he found the missing slip but did not disclose the error.  When nothing happened, he began deliberately leaving out several long distance call slips from time to time.

At the end of the year, the auditors tested the controls over the reimbursement of long distance charges and noted that there was no control to ensure that the employees were paying for all the personal calls they actually made.

The auditors reviewed the reimbursement records using the Gaps command.  Since the long distance call slips were pre-numbered, the test easily identified 26 missing slips.  The results were presented to Caroline who matched the numbers from the missing slips to the carbon copies of the slips.  William was identified as the culprit for 25 of the missing slips.  When approached by the auditors, he admitted to neglecting to include all of the weekly slips, and was ordered to reimburse the company.  In accordance with the strict policy on fraud, he was fired.

Case Study: Frequent Traveler

The assistant to the President was required to accompany the President on many of her frequent business trips across the country.  As a result, the auditors did not normally question the high travel expenditures of the President, or of her assistant.  However, they had received an anonymous tip that the assistant was committing fraud.

During the initial fraud investigation of the assistant's hotel bills, they calculated her total travel costs by type and noticed that she had significantly higher accommodation expenditures than the President.  The team leader was curious as to why this was the case, and instructed his team to review all transactions related to her hotel expenditures.  In particular, they sorted them by invoice number and used the Duplicates command to look for duplicate invoices.  They also used the Gaps command to examine the sequence of these expenses.  As expected, there were no duplicates, and the Gaps command run on the invoice number revealed many gaps in the sequence of the hotel invoice numbers.  This was not surprising since each hotel used its own invoice numbering system and had many clients, each with their own invoice.  What surprised the auditors was that the analysis showed 10 bills from one hotel that had no gaps between them – they were in sequence 20311 to 20320.  This was the case even though the dates of the invoices spanned several months.

The auditors checked with the hotel and discovered that the assistant had stayed at the hotel on the dates in question.  However, the billing manager told them that the invoice numbers were not part of their invoice sequence and had not been issued by the hotel.  The auditors brought the results of their analysis to the attention of the President and received permission to question the assistant.  The assistant admitted to using her computer to scan in a real invoice from the hotel.  She then used this scanned invoice to make copies – thereby falsifying her travel claims.  She would inflate her hotel bill every time they stayed at that hotel, which was often since the President had a lot of business in that city and always stayed at that hotel.  She unwittingly had simply incremented the invoice number each time she generated a new invoice.  The Gaps command allowed the auditors to find the altered invoices and discover her scheme.

Lessons-learned: The ability to see indicators of fraud in the data includes not only seeing what is there (and is not expected) but also what is not there (and is expected to be).  Missing items can range from inventory tags, purchase orders and health claim forms to transaction IDs in an ERP system (why are we missing 6 transactions?).  In other cases, you should have gaps – in fact you shouldn't have consecutive items (e.g. invoice numbers, credit card numbers (anything with a check digit), and various types of receipts).
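
Both tests are essentially one command each in ACL.  A minimal sketch, with placeholder table and field names:

OPEN Call_Slips
COMMENT missing pre-numbered long distance call slips
GAPS ON Slip_No PRESORT TO SCREEN

OPEN Hotel_Bills
COMMENT invoice numbers that appear more than once; Vendor_No and Amount are carried along for review
DUPLICATES ON Invoice_No OTHER Vendor_No Amount PRESORT TO T_Dup_Invoices OPEN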

The ability of auditors and investigators to analyze data is enhanced when they can manipulate and work with the data.  Sometimes it is necessary to create new fields that do not exist in the original file – such as the total value of inventory (quantity on hand * unit price).  ACL is ideal for this type of analysis – allowing new fields to be created and then compared with existing fields or data in other files.  It can perform simple calculations (quantity * unit price) or conditional calculations (markup is 57% if product code is '01', 68% if product code is '02', and so on).  Whether there are millions of records or only thousands, the analysis is fast and easy, and the results are often revealing.
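
For example, the two calculations mentioned above could be defined as computed fields along these lines.  The field names and rates are placeholders, and my recollection is that a conditional computed field lists its condition/value pairs after a blank line, with the default value last:

OPEN Inventory
COMMENT simple computed field: total value of each inventory item
DEFINE FIELD Total_Value COMPUTED Quantity_On_Hand * Unit_Price

COMMENT conditional computed field: the markup rate depends on the product code; the last line is the default
DEFINE FIELD Markup COMPUTED

0.57 IF Product_Code = "01"
0.68 IF Product_Code = "02"
0.50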

Year 29 – 2016 – Fraud

 Hidden Costs

The true cost of fraud is more than the total of the financial losses.  Stockholder confidence, employee morale and other intangible factors must be added to the monetary losses.  Most managers agree with this assessment; however, management often encourages fraud by placing unrealistic goals on employees, or by disregarding the rules themselves.  Auditors must be aware of the pressures placed upon employees that may lead them to commit fraudulent activities.  In addition, the controls, to be effective, must apply to all employees and must be uniformly enforced.  Performance goals and objectives should be attainable and adequately reviewed and monitored.

ACL and other software packages are powerful and flexible and can be used to run analyses to detect and even deter fraud.  However, no tool is more powerful than its users.  Auditors and fraud investigators who are trained in the use of audit software and innovative in their approach to combating fraud can make a difference.  The cost of fraud demands that we devote time, energy and resources to the battle.

Money Laundering Scheme

Regulations around anti-money laundering have increased, but it still happens.  Some of the larger schemes include the following:

Back in 2012, HSBC forfeited £1.2 billion for having inadequate money laundering controls. This followed a report published by the US Senate which alleged, amongst other things, that HSBC had:

  • supplied banking services and American dollars to some banks in Saudi Arabia in spite of their connections to terrorist financing
  • dodged restrictions created to prevent transactions involving Iran, North Korea and other countries subject to international sanctions
  • HSBC US didn’t treat its Mexican counterpart as high risk even though it has a problem with drug trafficking and money laundering.

In 2010 Wachovia paid federal authorities a total of £123.7 million for willingly failing to establish an adequate AML programme and subsequently allowing, from 2004 to 2007, the transfer of an estimated £292.5 billion into dollar accounts from money exchangers in Mexico that the bank did business with. This included nearly £10 million that went through correspondent banking accounts at Wachovia to buy aeroplanes to be used in the drugs trade – more than 20,000 kg of cocaine was seized from these planes.

Audit example:

Bill was suddenly a top salesman for the XYZ Insurance Company, selling more policies in a single month than had ever been achieved by a salesman in the company’s 20 year history.  In addition, few claims were being made against the policies he generated.  He easily met his targets and received the maximum bonuses, plus incentives such as trips and merchandise.  Until the story broke in the national newspaper, management had no idea that Bill’s policyholders were using the insurance company to launder ‘dirty’ money.

The requirement for companies conducting illegal activities to launder their illegal gains has generated many different schemes.  An increasingly common method of laundering money is one where the money earned from criminal activity is used to purchase insurance policies with a 'cash out' clause.  Such a policy may pay the holder up to 80% of the policy value if cashed out within one year.  The purchase of millions of dollars in insurance policies and the subsequent cashing out of these policies can convert 80% of the dirty money into 'clean' money – a good return for the criminal element, a good deal for the insurance agent, and even profitable for the insurance company.  However, the negative publicity that may be incurred might outweigh the cash benefits to the insurance company.

The audit director of another insurance company read the newspaper headlines with concern – was this happening at his company?  The next day he initiated an audit of all insurance policies to look specifically for this problem.  He met with the team leader, and explained what had happened at XYZ Insurance.  The team leader had also read the newspaper article and thought that this would be a perfect application for the new CAATTs software he had purchased recently.  He explained that the AGE command could be used to examine the length of time between the opening and closing of a policy.

The team obtained access to the policy file and used the AGE command to calculate the time between the policy start and closing dates.  Then, by combining these results with the CLASSIFY command, they were able to determine the number of policies, the total policy length in years and the average policy life (in years) by salesman.
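For readers who want to experiment with the same idea, here is a minimal sketch in Python/pandas rather than ACL.  The file name and the column names (salesman, start_date, close_date) are assumptions for illustration only; the actual policy extract and the team’s ACL scripts would look different.

```python
# A minimal sketch in Python/pandas (not the team's ACL scripts).
# Assumed, hypothetical input: "policies.csv" with columns
# salesman, policy_no, start_date and close_date.
import pandas as pd

policies = pd.read_csv("policies.csv", parse_dates=["start_date", "close_date"])

# Equivalent of the AGE step: elapsed time between policy open and close.
policies["policy_years"] = (
    policies["close_date"] - policies["start_date"]
).dt.days / 365.25

# Equivalent of CLASSIFY by salesman: number of policies, total length
# and average policy life in years.
by_salesman = (
    policies.groupby("salesman")["policy_years"]
    .agg(policy_count="count", total_years="sum", average_years="mean")
    .sort_values("average_years")
)

# Salesmen whose policies are closed unusually quickly rise to the top.
print(by_salesman.head(10))
```

Sorting by average policy life puts the salesmen with the shortest-lived policies at the top of the list, which is the same signal the AGE/CLASSIFY combination surfaced for the audit team.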

This quickly highlighted possible anomalies in sales practices, and assisted the team leader in checking for potential money laundering activities – whether knowingly abetted by the salesperson, or unknowingly.

The auditors reviewed the policies sold by Paul, one of the salesmen flagged by the analysis, and found that many of them had been opened in the name of only a few individuals.  One person had opened 32 different policies, all of which had been canceled within two weeks.  Paul admitted that he was helping to launder money, and was fired.  Since the company had been proactive in finding the fraud, it was able to correct the problem and keep the incident out of the media, avoiding the adverse publicity.  The audit director smiled as he remembered what his first audit director had told him: ‘Your job is to keep the company off the front page of the newspaper’.  This time, that was exactly what he had done.

Year 27 – 2014 – Car Maintenance – Part 1

The company I worked for had a fleet of cars that we maintained and, once they were beyond a certain age, sold.  The analysis below describes an audit that looked at the controls around both of these processes.

The new manager of the company garage had only been in charge for a year and was already well respected and well liked.  He and his assistant provided quick and efficient maintenance service for all the company cars.  The garage also contained a gas pump and was considered a ‘Full Service’ station.  Unfortunately, the manager’s definition of full service went far beyond what the company’s management would have liked.

The garage manager was permitted to perform work on employees’ vehicles, as long as the employees were charged the full cost of the parts used.  The company allowed employees to purchase automobile parts for their personal cars at company rates, which were considerably lower than the retail value of the parts.  The employee would purchase the parts and the company would be invoiced at the discounted rate.  The employee would then submit payment and the invoice to the parts manager, who would remit the money to the company, and the company would pay the vendor.  However, the manager was ‘correcting’ invoices to make it look like the parts had been used for company cars.  He would keep the employees’ money and the company would pay the bill.

The manager was also responsible for the disposal of used vehicles no longer considered economical to maintain.  He sold many of the used vehicles to a friend at 65% of the book value; the friend then resold each vehicle at book value and split the profit with the manager.  The process called for sealed bids to be submitted by persons wishing to buy the vehicles.  However, the manager would show prospective bidders a car in much worse shape than the one actually being sold, or would invent stories of accidents or mechanical troubles the car had been through.  As a result, the bids from other buyers were usually even lower than the friend’s bid.

Vehicles that were to be sold were equipped with new tires, mufflers and other parts just prior to being sold to the manager’s friend.  This significantly increased the value of the vehicle being sold, to the point that sometimes the new parts were worth more than the car itself.

Finally, the manager would fill up employees’ personal cars at the gas pump and charge the gas to a company car.  The company maintained a fleet of cars for use by employees, and a credit card was kept in the glove compartment of each car to be used when employees ‘purchased’ gas or had repair work done on the vehicle at the company garage.  Several employees, who were friends of the service manager, were bringing their personal cars into the company garage for maintenance and even filling up their tanks with gasoline.  The cost of the gas was charged against a fleet car’s credit card, and the service manager then charged the employees half the actual cost of the gas ‘purchased’.  The employees benefited by paying only half the cost, and the manager kept all the cash he received.

The auditors were performing the yearly review of the garage operations.  They were totally unaware of the frauds being committed by the manager and his assistant.  This did not, however, stop them from finding out what was happening.

The first analysis performed by the auditors was to total the repair work by vehicle.  They were quite surprised by the total dollar value of the repairs performed on the company cars.  A refinement to the analysis separated the vehicles by year of purchase.  The manager had been so busy with his scheme that even newly purchased cars were showing repair work, and the auditors were particularly suspicious when invoices had been paid for parts on cars that were still under the original warranty.  The analysis revealed that some cars less than one year old had undergone as much repair work in the past year as much older cars.  The auditors then calculated the total repairs by type of repair to determine the five most costly repair types, and totaled, by vehicle, the number and amount of repairs for each of those types – tire, muffler, alternator, tune-up and battery – for all cars less than one year old.  They found that three cars less than one year old had an unusually large amount of repair work: one had had four new mufflers installed and another had received 12 new tires.
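To make that sequence of summaries concrete, here is a minimal sketch in Python/pandas (the original work was done in ACL).  The file name, the column names and the 365-day cut-off for “less than one year old” are assumptions used only for illustration.

```python
# A minimal sketch in Python/pandas (not the auditors' actual scripts).
# Assumed, hypothetical input: "garage_repairs.csv" with columns
# vehicle_id, purchase_date, repair_date, repair_type and amount.
import pandas as pd

repairs = pd.read_csv(
    "garage_repairs.csv", parse_dates=["purchase_date", "repair_date"]
)

# Total repair cost by vehicle, then by year of purchase.
cost_by_vehicle = repairs.groupby("vehicle_id")["amount"].sum().sort_values(ascending=False)
cost_by_purchase_year = repairs.groupby(repairs["purchase_date"].dt.year)["amount"].sum()

# The five most costly repair types overall.
top5_types = repairs.groupby("repair_type")["amount"].sum().nlargest(5)

# For cars less than one year old at the time of the repair, count and total
# the repairs by vehicle and repair type for those five types.
vehicle_age_days = (repairs["repair_date"] - repairs["purchase_date"]).dt.days
new_car_repairs = repairs[
    (vehicle_age_days < 365) & repairs["repair_type"].isin(top5_types.index)
]

suspect = (
    new_car_repairs.groupby(["vehicle_id", "repair_type"])["amount"]
    .agg(repair_count="count", total_amount="sum")
    .sort_values("repair_count", ascending=False)
)

# Four mufflers or a dozen tires on a one-year-old car stands out immediately.
print(suspect.head(20))
```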

Because of this startling finding, the team lead decided to expand the scope of the audit, perform more testing on the repair work, and examine the controls over the sale of used cars.  Next week we will see the results of their expanded analysis.

Year 22 – 2009 – SAP Extract

Imagine my excitement when I had 7 responses to my previous post on Payroll, and then my utter disappointment when I found out that they were all in Russian and had nothing to do with the content of my blog.  This continued for several days and then suddenly switched to English posts about Credit Unions.  In total I had over 65 spam bot posts, including two that wanted to help me monetize my site.

On the positive side, I was talking to Franco, who said that he reads my blogs every Monday and that the most recent post gave him some ideas for a payroll analysis he wanted to perform.

Audit example – Standard SAP extract:  By now, I had been extracting SAP data for 10 years and had developed a “Standard SAP Extract”.  While SAP has more than 70,000 tables, I was using 2 main tables and 9 master tables.  Using this set of data, I had supported hundreds of audits.  In addition, I had changed companies twice and was able to use the exact same extract (and all of my ACL scripts) at the new companies.  Unfortunately, it took between 6 and 12 months to get the extract built at the new company.
