
Life Lessons

I often write about analytics and other aspects of data extraction, interpretation and the like.  Today I thought I would do something different.

I am teaching a 4th year university/college course.  The students are eager to get their CMA/CPA designation and go off to work for a big accounting firm.  I admire their zeal, but worry about their lack of practical "seat of the pants" knowledge.   I tell them that for 16 years their focus has been on "marks" and for the next 30 years marks mean nothing.  They need to focus on interpersonal, communication, analytical and strategic skills, on how details support the big picture, and other abilities that schools tend to ignore.

What has experience taught me? Or “what I wish someone had told me”.  So here goes, a list of my life-lessons:

  1. I should have taken more chances. I was always worried about failure – so I didn't try things.  What I realize now is that not trying is pretty much the same as failing.  Except for one thing: you can learn from failing – you can't learn from not trying.  It just reinforces the notion of not trying.  I was fortunate to have mentors and managers who forced me to try new things and move beyond my comfort zone – things I never would have done on my own.  In every failure there is a lesson learned that you would not have discovered without trying and failing.  These can motivate you to try again and ultimately succeed.
  2. There are many wrong and many more right paths. There is no single way to get to a destination.  Initially, I never even considered internal audit or data analysis, but the paths that I have taken have helped me to succeed in data analysis.  My years in guidance and counseling have helped me to relate to programmers, business process owners, and auditors.  This in turn has allowed me to better understand their needs and assist them.  My stint as a consultant has helped me to focus on critical issues and develop work plans to achieve results within time frames and constraints.
  3. Take on challenges that are outside your comfort zone. Many organizations have charitable campaigns, university recruitment programs, and other opportunities for you to try new things.  Typically, these are ideal learning opportunities because the other people involved are there because they want to be.  They are motivated, they are people-people, and they are great people to know (from a career perspective).
  4. Seek out mentors at various levels. These can be part of a formal mentoring program or simply people who have experience that they are willing to share.  On the same note, be a mentor.  Being a mentor will cause you to consider why things are the way they are.
  5. Take what you like about your job and enlarge it. There are always parts of a job we hate, but we spend too much effort on avoiding these and not enough on developing and enhancing the aspects we love.  Almost every job I took on looked totally different after I had been there a few years.  I was willing to develop, enhance, expand the areas I enjoyed while completing the less interesting duties.  You will always find time to do things you love.
  6. Put yourself in other people's shoes. What motivates them and what pisses them off?  Understand that they have priorities which will differ from yours and work with them to accomplish what needs to be done.
  7. Never bring your boss a problem. You can highlight issues, but always follow this with a suggestion for improving the situation.  I worked with a brilliant programmer who always pointed out the problems with any program development activity.  I had to constantly say, "OK, it is a problem, but what should we do?"  He knew the answer, but it was frustrating to have to force the answer from him.  He became well known for being a program killer when in fact he was an incredible resource.
  8. Make the path straight for your subordinates. I always thought that my job was to make my employees’ job easier.  I would act as a buffer, protecting them from negative and non-constructive feedback.  I would fight on their behalf (without them knowing).  On the flip side – when they succeeded, I looked good so it was a win-win situation.  The better they did performing tasks for which I had responsibility, the better my manager thought I was doing.
  9. If you don't know – say so. I was often the go-to person when it came to analytics and information systems.  Quite often I would get asked questions for which I did not have an answer, and I never hesitated to say "I don't know".  Too many people see admitting a lack of knowledge as a weakness.  Counter-intuitive as it may sound, by saying "I don't know", my credibility and stature grew.  However, I was also quick to follow "I don't know" with "but I will find out and get back to you".
  10. Be respectful of everyone regardless of their level and position. In one of my first jobs I had some photocopying to be done.  I left the originals and a short note "25 copies, double-sided, Dave" on the desk of the person responsible for making copies.  The person was not impressed with my lack of common courtesy.  I should have said "please make 25 copies, double-sided.  Dave".   Better yet, I should have included her name: "Carole, please make 25 copies …."  Since then, I have thanked the person who empties my garbage can and vacuums the floor.  I have tried to learn the names of the security guard, the janitor, etc. to the same extent that I want to know the CEO, CFO and other senior management's names and faces.  This simple recognition of others and the work they do has made my job more pleasant and brought me as much benefit as it did those I treated appropriately.
  11. Be your own career manager. Twice I chose to take a position that offered less pay than my current position.  Why?  Because I thought it was better for me at the time and in the long run.  I doubt any career manager would suggest you take a 15% pay cut.  They don't know your current situation, your aspirations, goals and timelines as well as you do.  So take charge of your own career.
  12. Make friends at work. You will be spending 8-10 hours a day, 5 days a week, with your co-workers, maybe more time than you spend with friends and family.  Be nice to them and give them every reason to be nice to you.  Join in office parties, social events and even coffee breaks.  Some of my best friends are former co-workers.
  13. Work to live; don’t live to work. While you are managing your career – manage your life.  You can’t wait until you retire to start enjoying yourself.  Discover new and interesting leisure activities.  Most of all – make time for you, and your family and friends.
  14. Never stop learning. Grow as an individual and as a professional.  Learning new things can stimulate and energize you when you need it most.  Sometimes a hobby can become a second career or simply a source of joy and amusement.
  15. Learn what is important to you and add it to this list.

Statistics

Sometimes even the simplest analytics are extremely useful.  They often tell you what is happening in the detailed transactions without you having to make assumptions about the data.  STATISTICS is one of those commands: with a couple clicks of the mouse you can get:

  • Min and Max
  • Highest and Lowest ‘X’ values
  • Number of Positive, Negative and Zero valued transactions
  • Average
  • Median, Mode and Q25 and Q75

But like any analysis, you need to know why you are running the analysis and you need to be able to interpret and understand the results.

Why run STATISTICS?  I run Statistics to get an overview of my data.  It helps me to understand the nature of the transactions, to verify my scope, to highlight anomalies, and much much more.  Here are a couple of examples:

  • Count
    • I use this to verify the control totals. In one case I found that my total for a critical field was out by a significant amount.  It turned out that the numeric fields had 3 decimal places (implied) and not two as I had been told.  Without verifying my control totals before doing additional analysis, I would have wasted a lot of time.
  • Min/Max values:
    • All transactions should be in the previous fiscal year. The Min value should be greater than or equal to the start of the fiscal year and the Max should be less than or equal to the end of the fiscal year.  I found billing transactions that were in the wrong fiscal year.
    • All credit card (p-card) transactions should be less than $500.00. Not only did I find amounts greater than $500, but I also found numerous credits being processed – which led to a fraud discovery.
  • Highest/Lowest values:
    • The Maximum value was significantly higher than the second highest value. It ended up being an error where the invoice number was entered into the amount field – making the amount much larger than it should have been.
  • Number of Positives/Negatives and Zero values
    • Found hundreds of thousands of transactions with zero values, which led to a training issue. A/P clerks could use bulk entry forms for simple invoices.  The form automatically generated a blank invoice entry form with 100 lines.  If the clerk did not type "Cntl –" before saving the entered invoice details, the blank (unused) lines remained and were processed as invoices with zero value.  This was causing the system to run slowly and generating exception reports.
    • Found several hundred negative "quantity received" amounts. The receipting clerk at a warehouse was entering negative receipts after entering the correct receipt amount.  The invoice was paid based on the correct receipt amount; but the negative receipt entry (without an invoice number) reduced the quantity in inventory and allowed her to steal the items.
  • Mode value
    • This identified a fraud where an employee was entering meal charges when not on travel status. Amounts under $10.00 did not require approval.  The employee entered hundreds of meal claims at $9.99, which were picked up by the mode option.
    • The most often occurring date was just before bonuses were determined. Salespersons were entering a large number of fictitious sales on the day before performance bonuses were calculated.  The sales were reversed at later dates.
    • The most frequently occurring amount for contracts was just under the "must go competitive" value. Managers were establishing contracts at $49,900 – which was below the $50,000 limit for sole-sourced contracts.

In general, STATISTICS lets me see what the data looks like from a high-level perspective.  However, it can also quickly identify anomalies (e.g. negative values when all transactions should be positive; values outside the expected range; unusual maximum values, etc.).  This helps with determining risk, sampling strategies, identification of control weaknesses, materiality amounts, etc.  When combined with STRATIFY, I can get an overview of the ranges (or strata) of the data, which provides even more useful information.
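If you want to try this yourself, here is a minimal sketch of the two commands.  The table and field names (Invoices, Amount, Invoice_Date) are my own examples – substitute your own and check the syntax against your version of ACL:

OPEN Invoices

COMMENT Overview of the amount and date fields: min, max, average, the 10 highest/lowest values, and the number of positive, negative and zero amounts.
STATISTICS ON Amount Invoice_Date NUMBER 10

COMMENT Follow up with STRATIFY to see how the amounts spread across ranges (strata); it should pick up the min and max from the STATISTICS run.
STRATIFY ON Amount INTERVALS 10 TO SCREEN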

CLASSIFY with Statistics Option

I have been using ACL software for more than 25 years – back before the Windows version even existed.  It wasn't the first audit software that I had used, and I was pretty sure it wouldn't be the last audit software I would use.  Previously I had used AutoAudit, Audit Analyzer, Decision Analyzer, IDEA, ReportWriter, Easytrieve and various spreadsheet packages (Multiplan, VisiCalc, and Lotus 1-2-3).  I told myself that I would give ACL a try until it couldn't do what I needed and I found another package that could.  As I mentioned, that was more than 25 years ago – and I am still putting ACL through its paces and looking at other software.

Sticking with a particular software product for this length of time has its advantages.  You build up a certain level of expertise (mainly by trying, failing and trying again).   I must have about 10,000 scripts to do various analyses that I have performed in the past.  However, it also has a distinct disadvantage: you have to keep up with the new functionalities being released.  This has become more of an issue of late as major releases seem to be occurring every six months.  I still have not spent the time and effort to understand the REGEX() and other REG-type functions.  The new interfaces and ODBC connectors, Python and R, and even EXECUTE (a return of the RUN functionality) are challenging me to keep abreast of the additional power and flexibility of the AN product.

In addition, if you throw in AX and GRC – with working papers, exception reporting and monitoring, storyboarding, etc. – I am hopelessly behind in my use of ACL.  Fortunately, I have retired – well at least semi-retired – and can rest on my laurels.  I only take jobs that require knowledge of AN and in areas like REGEX() I can post to the Peer Community and get instantaneous help from around the world.

In 2001, I published a book called the "Fraud Toolkit", which I updated in 2009 under the title "Fraud Analysis Techniques Using ACL".  The books included more than 50 scripts to search for fraud and a self-study course on scripting with data and exercises.  At the time, certain functionalities did not exist within the ACL command set, so I built scripts to perform the desired analysis.  My scripts included: Crosstab, Benford Analysis, Frequently Occurring Numeric Values, and Min/Max Ratio analysis.  Since then ACL has added these functionalities: Crosstab – added 2010; Benford analysis – added 2010; Frequency of numeric values (Classify on a number) – added 2015; and Min/Max ratio – added 2015. (Note: some of the dates may be incorrect – it is hard to remember the exact versions when ACL commands were added, but it was years after the toolkit was developed.)  Many of you are aware of the Crosstab and Benford commands in ACL, but you may not have recognized the addition of the Min/Max ratio.

The Min/Max ratio was accomplished by a script that determined the minimum, maximum and average value for any unique value of a character field.  For example, the minimum, maximum and average value of credit card transactions for each credit card number.  The script was not particularly complicated, but it did involve the use of GROUP-END logic.  Recently (2015) ACL added the ability to find the Minimum, Maximum and average values when you perform Classify, Summarize and Stratify.

This is the functionality I would like to discuss, with examples, today.  More than 15 years ago I thought it was a useful analysis and devoted hours to developing my Min/Max script (I also had a Max/Max2 script).  Now it is available to all users, but like me, you might not be aware of the functionality of all the enhancements.

In my first example, I will look at invoices paid to vendors.  In my case, the file I was working with was fairly small – less than 18,000 invoices paid to 6,000 different vendors.  As in the past, I performed various analyses such as STATISTICS to find the min, max and average invoice amount; STRATIFY to look at the ranges of invoice amounts; SUMMARIZE to identify the vendors with the largest number and total dollars of invoices; and of course DUPLICATES.  All of these commands provided enormous value and helped me to understand our business and the A/P process.  For example, STRATIFY showed me that 63% of the invoices were for amounts less than $500.00 and totaled less than 0.5% of the total amount.

Stratify on Amount

This caused me to question the efficiency of our p-card program for small dollar purchases (e.g. under $500).  It also showed me that a small number (59) of invoices were for large amounts (>$1M) and accounted for 83.47% of the total dollars – guess which invoices we were going to review?

But the addition of the Statistics option in the CLASSIFY command provided insight into the 6,000 vendors.  This was something that was not available before, unless you had my Min/Max ratio analysis script.

Classify on Vendor with Statistics Option

There were 13 vendors where the total number of invoices was greater than 10 and the amount of the maximum invoice was more than 75% of the total amount of invoices.  In other words, one invoice was significantly larger than the other invoices for the same vendor.

Looking at the second vendor (#241380), the largest invoice amount is $76,550.00; the average amount is $4,759.63; and the maximum invoice is 89% of the total for that vendor and more than 16 times the average amount.  Upon review we determined that the actual invoice amount was $765.50 and the amount had been entered incorrectly.  The same was true for vendor #301549, with a maximum amount of $16,975.00 (78% of the total) which should have been $1,697.50.

Vendor #333270 was a different problem – the $94,333.09 was actually owed to vendor #333207.  When this vendor complained that they had not been paid, we paid them with a late payment interest penalty.  The system did not recognize it as a duplicate because the vendor numbers were different.

The three vendor results below represent another type of problem.  Either we paid the wrong vendor or paid the wrong amount – but the vendor returned the check.

 

In each of the above cases, you can see that ABS(minimum amount) = maximum amount, e.g. Vendor #276053 had a minimum of ($103,000.00) and a maximum of $103,000.00.  The vendor was not owed $103,000.00 and returned the check.
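If you want to reproduce this type of test, the command itself is short.  Here is a possible sketch, assuming an invoice table called Invoices with fields Vendor_No and Amount (my names, for illustration only).  Note that the names of the count, total, minimum and maximum fields that ACL generates may differ by version, so check the layout of the output table before building the filter:

OPEN Invoices

COMMENT Classify on vendor, subtotal the invoice amount, and request the statistics (minimum, maximum, average) for each vendor.
CLASSIFY ON Vendor_No SUBTOTAL Amount STATISTICS TO "Vendor_Stats" OPEN

COMMENT Vendors with more than 10 invoices where the largest single invoice is more than 75% of the vendor's total.
SET FILTER TO COUNT > 10 AND Max_Amount > 0.75 * Total_Amount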

I hope that you can see the value of the Min/Max functionality.  But just to make sure, let me give you some additional examples of where I have used it:

  • Payroll – Classify on Job Category Subtotal Amount – identified people in job categories that were being paid incorrect (inflated) amounts (payroll fraud)
  • Medical Claims – Classify on Procedure Code Subtotal Amount – identified doctors who were charging 2-3 times the normal amount for medical procedures (medical claim fraud)
  • P-Card – Classify on Card Number Subtotal Amount – identified cardholders with a single large transaction (cards had been compromised)
  • P-Card – Classify on cardholder position Subtotal Amount – identified unusual transactions for admin clerks (a clerk was using the p-card for personal purchases)
  • Payroll Overtime – Classify on Job Category Subtotal Amount – identified categories where employees were not entitled to, but were receiving, overtime and individuals who were making false overtime claims
  • Contracting – Classify on Contract Type Subtotal Amount – identified sole-source contracts that should have been competitive
  • Contracting – Classify on Supplier Subtotal Amount – identified cases where certain suppliers received large contracts at year-end – bypassing procurement controls
  • Journal Vouchers – Classify on Period Subtotal JV Amount – identified abnormal JV’s just prior to production of quarterly financial statements.
  • Vacation – Classify on Employee Type Subtotal Vacation Days – identified an employee who coded 40 hours of vacation as 5 (i.e. days) and the system only subtracted 5 hours from his vacation balance. Also identified people who were claiming large amounts of vacation.  System controls were not working effectively and employees could take more vacation days than they had in their balance (i.e. ended with a negative balance).

So, even though I had not kept up with the enhancements offered by the more recent versions of ACL, I encourage you to do so; and I hope you can see the benefit of the Statistics option on Classify and/or Summarize.  I also hope that you post examples of where you have used this command parameter.

Extract Records Matching Criteria in another File

A common problem that presents itself when doing analysis is the need to extract detail records that match a list of values (criteria) in another file. This can range from simple to more complicated, primarily depending on the nature of the match.  The following describes three approaches, based on the objective of the analysis.

Simple question:

A simple question of this type is where we have a file with detailed transactions that we want to compare to a file that has a list of values (e.g. Payroll).  If we have a file of all pay transactions (details) and a list of employees (Employee file) who should be paid (criteria) and we want to know if every pay transaction has a matching employee record, then you can do a JOIN or RELATE with the detailed payroll transactions as your Primary (or Parent) file.  ACL allows you to do a many-to-one JOIN or RELATE.  This will give you all of the detailed payroll transactions where there is a matching employee (Matched Primary); or you can get all of the payroll transactions where there is no matching employee (Unmatched Primary).  Finally, you can also obtain both matched and unmatched (All Primary) – which is what the RELATE will give you as well.

If your results contain both matched and unmatched pay records, simply apply a filter to determine whether a record in your results is matched or unmatched.

First to identify the matched records: this is the case where the employee number from payroll details (EmpNo) has a match with an employee number (EmpNum) from the employee file.

  • In a JOIN, it would be where EmpNo is the same as EmpNum – filter: EmpNo = EmpNum
  • In a RELATE it would be similar, but since the Parent and Child files are physically separate, we need to include the table name in the filter for child fields – filter: EmpNo = Employee.EmpNum

Next to identify the unmatched records: this is the case where the employee number from payroll details (EmpNo) does not have a match with an employee number (EmpNum) from the employee file.

  • In a JOIN, it would be where EmpNo does not have a matching EmpNum – filter: EmpNo <> EmpNum
  • In a RELATE it would be similar, but again we need to include the table name in the filter for child fields – filter: EmpNo <> Employee.EmpNum
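To make this concrete, here is a sketch of what the JOIN might look like, assuming the detail table is called Payroll_Trans and the criteria table is called Employee (again, my names).  The PRIMARY option keeps all primary records, matched and unmatched, so the filters above can then be applied to the result; verify the exact option names against your version of ACL:

OPEN Payroll_Trans
OPEN Employee SECONDARY

COMMENT All Primary: every payroll transaction, with the employee key filled in where a match exists.
JOIN PRIMARY PKEY EmpNo FIELDS ALL SKEY EmpNum WITH EmpNum TO "Pay_All_Primary" OPEN PRESORT SECSORT

COMMENT Unmatched payroll transactions only (no matching employee record).
SET FILTER TO EmpNo <> EmpNum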

More Complicated Question:

The previous example is a many-to-one match.  What if the Secondary or Child file has duplicate records (records with the same key value)?   For example, the Employee file might have not only the employee number of everyone who should be paid, but also the rates of pay for different periods of time (From-To dates).  So an employee could have multiple records if their rate of pay changed.

The employee file might look like this:
Empnum        Pay rate           From Date          To Date
12345             1,200.00              Jan 1, 2016           Jul  31, 2016
12345             1,225.00              Aug 1, 2016          Dec 31, 2016
12345             1,250.00              Jan 1, 2017           Dec 31, 2017

The pay file might look like this:
   EmpNo                 Pay Amount               Pay Date
12345                    $1,200.00                      Jul 15, 2016
12345                    $1,225.00                      Aug 15, 2016
12345                    $1,225.00                      Sep 15, 2016
12345                    $1,250.00                      Oct 15, 2016
12345                    $1,250.00                      Nov 15, 2016
12345                    $1,250.00                      Dec 15, 2016

The objective might be to find not only people who were paid and should not have been (as in the previous example), but also people who were not paid the correct amount.  This means we have to match the pay details not only with the correct employee record (EmpNo = EmpNum) but also the pay period with the correct From-To dates.  That means that we have a many-to-many match.  We cannot use RELATE since it can only perform a many-to-one match.  JOIN, however, does have a many-to-many option.

So we would perform a JOIN where the pay transaction detail file is our Primary file and the Employee file (with the pay periods and rates) is our Secondary file.  A possible issue with a many-to-many JOIN is that the resulting file can be quite large.  For example, if I had 10,000 employees each with 26 pay records (260,000 detailed records), and an employee file where each employee had at least 10 sets of pay rates (10,000 * 10 = 100,000 employee records), and performed a many-to-many JOIN, the resulting file would have 2,600,000 records (260,000 pay records * 10 pay rate records per employee).

You have two options: perform the JOIN and then filter the results to find pay records where the Pay Date falls within the From-To Dates; or apply the filter to the JOIN command.  If the resulting file is not going to be overly large, I suggest the first option: perform the JOIN to a temporary file; apply the filter; and EXTRACT to your final results file.  However, you can also apply the filter directly to the JOIN command.  In either case, the filter will isolate the detailed pay transactions for each employee that has a matching record in the Employee file AND the Pay Date falls within the From-To Dates.

Temporary file: Filter IF BETWEEN(Pay_Date, From_Date, To_Date); and then EXTRACT to the final results.

Filter applied to JOIN command: JOIN MANY PKey EmpNo SKey EmpNum IF BETWEEN(Pay_Date, Employee.From_Date, Employee.To_Date) – note: you must include the table name for the secondary fields.

With either approach, the results will contain pay records where the matching pay rate is included.  You can now filter Pay_Amount <> Pay_Rate to identify people who were paid an incorrect amount based on the pay day and pay rate for that period.
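Putting the second approach together, here is a possible sketch, using the field names from the example above and assuming the pay detail table is called Pay_Trans (an assumed name; check the exact option names against your version of ACL):

OPEN Pay_Trans
OPEN Employee SECONDARY

COMMENT Many-to-many join that keeps only the pay records whose Pay_Date falls within the From-To dates on the employee's pay-rate record.
JOIN MANY PKEY EmpNo FIELDS ALL SKEY EmpNum WITH Pay_Rate From_Date To_Date IF BETWEEN(Pay_Date, Employee.From_Date, Employee.To_Date) TO "Pay_With_Rates" OPEN PRESORT SECSORT

COMMENT Pay transactions where the amount paid does not agree with the rate in effect on the pay date.
SET FILTER TO Pay_Amount <> Pay_Rate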

Even More Complicated Question:

In essence, the "more complicated question" was still fairly simple because we were performing the JOIN on a field that equalled the value in the Secondary file (i.e. where the key fields were equal: EmpNo = EmpNum).  It becomes "even more complicated" when the two files do not have key fields that match.  For example, I want to identify all detailed transactions where the description field contains one or more of a list of keywords in my exceptions file (e.g. all P-Card transactions where the Merchant name contains a keyword such as bar, hotel, spa, casino, etc.).  I can't perform a JOIN where Merchant Name equals the keyword value because they are not identical ("Dave's Bar and Funspot" won't match with the keyword "Bar").

Instead I need to EXTRACT the records where the merchant name contains a keyword.  To do this manually, I would:

  1. Open the Keyword file
  2. Find the first keyword value (“Bar”)
  3. Open the P-Card transaction file
  4. Apply the filter FIND(“Bar”, Merchant_Name)
  5. EXTRACT all the records that meet the filter to a Results file
  6. Open the keyword file
  7. Get the next keyword value (“Hotel”)
  8. Open the P-Card transaction file
  9. Apply the filter FIND(“Hotel”, Merchant_Name)
  10. EXTRACT all the records that meet the filter to a Results file
  11. Continue until all keyword values have been selected and the filter and extract performed

Simple if you have a short list of keywords, but what if you have 1,000 keywords?  Why not have ACL do the keyword value selection, apply the filter and perform the extract – over and over again until all keywords have been applied?

The most common approach is to use a main script that calls a subscript, which applies a filter and performs an extract of the records that meet the filter.  So if I have four keywords (bar, hotel, spa, and casino), the main script will call the subscript four times.  The first time, the subscript will apply the filter FIND("Bar", Merchant_Name) and EXTRACT all the records that meet the filter to a Results file.  The subscript is then called again; it gets the next keyword value ("Hotel"), applies the filter FIND("Hotel", Merchant_Name) and EXTRACTs all the records that meet the filter to the Results file.  This continues until all the keyword values have been selected and the EXTRACTs performed.

Question #1: How many times does it call the subscript? Answer: as many times as there are keyword values – which can be determined by performing a COUNT on the keyword file.

Question #2: How does it get the 2nd, 3rd, and 4th keyword? Answer: by using a counter (v_Rec_ctr) and locating the record that equals the counter (LOCATE RECORD v_Rec_ctr).  The counter is incremented each time the subscript is run (v_Rec_ctr = v_Rec_ctr + 1).  Assuming the P-Card file is called PCard_Trans and the Keyword file is called Key_Words, which contains a field called Keyword_Value (i.e. has "Bar", "Hotel", etc.), the script looks like this:

COMMENT ***** Main script Extract_PCard_Recs1 *****

COMMENT **** Close all tables and delete previous results ****
CLOSE PRIMARY SECONDARY
DELETE PCard_Results.FIL OK
DELETE FORMAT PCard_Results OK

OPEN Key_Words

COMMENT Count the number of records in the Key_Words file (i.e. the number of keyword values).
COUNT

COMMENT The number of times to run the subscript.
v_Num_Keywords = COUNT1

COMMENT This tells me which record in the Key_Words file to read.
v_Rec_ctr = 1

COMMENT Initialize the variable that will store the keyword value from the Key_Words file – make sure you have enough blanks to hold the longest keyword value.
v_Keyword = BLANKS(30)

COMMENT Execute the subscript until all keywords have been read.
DO Extract_PCard_Recs2 WHILE v_Rec_ctr <= v_Num_Keywords

COMMENT Open the final results file after the subscript has finished.
OPEN PCard_Results

COMMENT ****** Subscript Extract_PCard_Recs2 ******

OPEN Key_Words

COMMENT Position ACL at the next Key_Words record (record 1 the first time it is run).
LOCATE RECORD v_Rec_ctr

COMMENT Set v_Keyword to the value in Keyword_Value (e.g. v_Keyword = "Bar").
v_Keyword = Keyword_Value

OPEN PCard_Trans

COMMENT Apply the keyword filter (e.g. FIND("Bar", Merchant_Name)).
SET FILTER FIND(ALLTRIM("%v_Keyword%"), Merchant_Name)

COMMENT Extract the records meeting the filter, plus a field containing the keyword value that caused the record to be extracted.
EXTRACT FIELDS ALL v_Keyword AS "Keyword" TO PCard_Results APPEND

COMMENT Increment the record counter to get the next record in the Key_Words file.
v_Rec_ctr = v_Rec_ctr + 1

Another option would be to combine your keyword values into a single string (variable) and use this variable to filter your transaction file.  However, it depends on the length of each keyword value – will all the keyword values fit into 10,000 characters when you include a ";" separator?

SET SAFETY OFF
OPEN Key_Words
COUNT
v_Max_Ctr = COUNT1
v_Keyword_list = BLANKS(10000)
v_found = F
v_ctr = 1

COMMENT Build a single ";"-separated string containing all of the keyword values.
GROUP
v_Keyword_list = ALLTRIM(v_Keyword_list) + ALLTRIM(Keyword_Value) + ";"
END

OPEN PCard_Trans

COMMENT For each transaction, test the merchant name against each keyword until a match is found, then extract the record along with the keyword that matched.
GROUP
LOOP WHILE v_ctr <= v_Max_Ctr AND v_found = F
v_found = T IF FIND(SPLIT(v_Keyword_list, ";", v_ctr), Merchant_Name)
EXTRACT FIELDS ALL SPLIT(v_Keyword_list, ";", v_ctr) AS "Match_Code" IF v_found = T TO PCard_Results
v_ctr = v_ctr + 1
END
v_ctr = 1
v_found = F
END

OPEN PCard_Results

SET SAFETY ON

**************

Other common examples of this type of analysis are:

  • Amount charged compared to master file with products and different prices at different times
  • Amount paid compared to Contract file – with products and different prices

Basically anything where you want to compare detailed transactions with a historical record that includes changes (e.g. different prices for specific time periods).

There are other ways to accomplish the analyses discussed above, but in every case you must start with a good understanding of what you want to accomplish, such as:

  • Do I want to see all the records or just those that don’t match?
  • Do I have duplicates in the Secondary file or not?

I hope this has helped you to understand three ways to extract records meeting criteria that are contained in a second file.

 

The Data Analytics Conundrum

Study after study has shown that data analytics is more effective and efficient at detecting risk and identifying control weaknesses, non-compliance, and inefficient business processes.  Chief Audit Executives (CAEs) have repeatedly stated that data analysis expertise is a much-needed skill in internal audit, and IIA surveys of software over the past 10-15 years have rated data extraction, data analysis and analytical software as critical tools for effective audit organizations.   Why then do more than half of internal audit organizations still rate their analytic capability as poor or needing improvement?

I have been an internal auditor for 30 years and have been a user and advocate of analytics for 28 of those.  (It was during the first two years of auditing that I realized I could do a better job by analyzing data.)  I have been asked hundreds of times, "How can we develop and maintain an analytics capability?"  Too often CAEs give up without even trying ("we are doing good audits now, why change things?") or make only a feeble attempt at it ("let's get a programmer right out of college and have them develop analytics for us").

The reality is change is difficult.  As auditors we are constantly making recommendations to help others improve, change, do more, etc. but we ourselves stick with traditional auditing tools and techniques.  Perhaps what is needed is a taste of our own medicine.

In order to successfully implement analytics and integrate data analysis in the audit process you must have a formal development and implementation plan.  The plan must address the need for sufficient people (appropriate level and number), technology (primarily software) and processes (you need to change the way you currently perform audits).  It must also have a project manager who will be held accountable for delivery on the plan, clear objectives, milestones, and a reporting requirement – to the CAE and/or the audit committee (but only if they fully support the adoption of analytics).

Analytics also requires an understanding of the business processes and the data supporting them, and a solid grasp of internal auditing processes and requirements (e.g. application of the IIA standards).  None of these will be provided by junior-level audit or programming resources.  Rarely will all these skills exist within one individual; and they might not already exist in your audit organization.  Rather than being an impediment, this should be seen as an opportunity: an opportunity to obtain the right resources and task them with a clear objective.  If you are lucky and have the appropriate type of resources in your organization – this is ideal.  Existing resources should already know the business processes and have the internal audit skills, and perhaps have some analytical capabilities.  However, they will need to be supported by training and software, and given sufficient time to develop the skills and implement the functionality.  Most importantly, they will need to be dedicated to analytics.  Otherwise you end up pulling valuable resources away from other priorities and tasking them with something in addition to what they are already doing, or settling for a subset of the required skills.  In either case, it is a recipe for failure.

A statement I hear often is, "We are a small audit organization, and we can't afford to dedicate a person to analytics".  It is usually used as a rationale for not using data analytics.  My response is something along the lines of "Because you are small does that mean you can afford to be less efficient and effective?"  The reality is, unless you are using analytics, you are not addressing risk, testing controls, examining compliance and improving business operations to the extent that you could be.  If you are going to decide not to use data analytics, at least make it an informed decision.  Examine the costs and benefits and then decide.  Don't simply look at your existing resources, which are most likely being used to the maximum, and decide that you can't take on anything else.  It is not a question of doing more with the same resources.  Ask yourself if there are things that you don't need to be doing or if there are better ways to do what you need to do.  Also look at what you are not doing and determine the value added if you could do those things.  Then decide if you can afford not to be using data analytics.

I also get asked about audit software: "Which package should I use?"  This is something that should be decided based on your requirements and your short- and long-term plans for analytics.  I encourage you to fully utilize existing capabilities such as standard reports, and you can definitely start with Excel, but don't be limited by what you have – think about what you need.  Find out what other audit organizations are using.  For 10-15 years, the IIA magazine, Internal Auditor, conducted a survey of software usage and the results were printed in the August issue.  The results have consistently shown that ACL is the most used audit software in the world for data extraction, data analysis, and fraud prevention and detection, and has been for almost 15 years.  It is the software I use, so I may be biased, but just because you are biased doesn't mean you are wrong.

In conclusion, you should be using data analysis.  You will need to plan and manage your adoption of analytics.  It will take time, resources, and technology.  It has to be integrated in the audit process (planning, conduct and reporting) and developed with an understanding of the business processes and the underlying data.  It is easy to do wrong, but worth doing right.

Why did I title this "The Data Analytics Conundrum"?  Because I don't understand why we are still talking about the "Why, How, and What" of data analytics and not simply getting on with the job.  Stop asking questions about analytics – get off the fence and actively pursue it.  The successful implementation of analytics will add significant value to the internal audit function and your ability to support the goals and objectives of senior management.

Year 31 – Incomplete Property Tax Data

Even though I am retired, I am still getting to do interesting analysis – perhaps even more so because I get to pick and choose what I want to do.  In this case, I was asked to assist with an audit being performed by a municipal audit function.  A large city had an internal audit group that was interested in auditing revenue – primarily from property taxes.  One of the key objectives was to determine the completeness of the property tax data (i.e. were all properties included in the property tax database?).

I suggested examining other external data sources including: water meter, building permit, and hydro data.  The city was not responsible for issuing and collecting on hydro bills so only the first two sources were available – but even that was a good start.

Using the water meter data we were able to identify all addresses with water meters and calculate the water usage.  We had to be sure to use the correct information because the data also had a billing address, which could be different from the address where the meter was located.  The data also had the meter install date and the first and last dates the meter was read.

The property tax data also had a billing address and the physical – building location – address.  But before we could compare the property tax data with the water meter data, we had to normalize the address fields.  Even the address entries within each system were inconsistent, with variations such as: "St." or "St" or "Street"; and "Unit 12 123 Main St" or "Unit #12 123 Main St" or "12 – 123 Main St" or "123 Main St Unit 12" or "123 Main St Unit #12".
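There is no single right way to normalize addresses, but as a simple illustration of the kind of computed field one might start with – the table name Property_Tax, the field name Address and the specific substitutions are my own examples, and the real cleanup needs far more rules than this:

OPEN Property_Tax

COMMENT A normalized address: trimmed, upper case, common abbreviations standardized and unit markers stripped.
DEFINE FIELD Norm_Address COMPUTED
REPLACE(REPLACE(REPLACE(UPPER(ALLTRIM(Address)), " STREET", " ST"), " ST.", " ST"), "#", "")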

In the end we were able to match the water meter data with the property tax data to identify properties with buildings that were not paying property tax.  While there were only a couple of hundred properties identified this still represented a total of over $1M in missing revenue.

The next source of data to test the completeness of the property tax information was the building permits.  Building permits were issued for new buildings and for major renovations on existing buildings.  The data included the address, the permit issue date, the inspection date, and status (e.g. completed).  Again, the only common field was the address, so we had to clean the data before doing the comparison.

Using the permit and property tax data we were able to identify new and existing buildings that were not paying property tax.

The next risk that we decided to address, after looking at the data, was the timeliness of the property tax data.  We had noticed significant delays in the time between the issuance of the permits and the inspection – sometimes several years.  There were also long periods between the water meter install and the date of the first reading.  We also noticed inspection and water meter reading dates that were years into the future.

The auditors determined that some of the dates were important and had a bearing on the amount of property tax paid in the year.  For example if a building was not completed until July 1st, then property tax was only 50% of the yearly amount.  The start date for tax calculation was the date of the final inspection.  Delays in the inspection meant less property tax and dates into the future meant no property tax was collected.

Again, there were only a few hundred issues with dates, but the results included one case of fraud and hundreds of thousands in lost revenue.

In the end the audit identified issues around the completeness of the property tax data, resulting in lost revenue, and made recommendations that would integrate the separate data sources.  We also identified data entry control weaknesses with respect to key dates and made recommendations which included range checks and aging reports.

ACL Commands: FILTER, EXPRESSIONS, DUPLICATES, AGE, STATISTICS, INDEX and RELATE

Lessons-Learned: even though the audit had a very specific objective – assess the controls ensuring the completeness of the property tax data – there was a multi-phased approach.  During the initial planning phase, the idea of testing completeness by comparing property tax data to other data sources led to a data analysis approach that had not initially been considered.  Despite challenges finding fields to use for the Relate command, perseverance and the use of scripts from ACL ScriptHub, such as "Address Clean", made the task easier.

Another important lesson was the examination of the water meter and building permit data prior to performing the Relate in ACL.  It is important to assess the integrity of the data and to understand it before performing the Relate.  Had we simply performed a Relate using the address field and not examined the data first, we would not have identified the issues with respect to the date fields.

When using any source data, even if only to test the completeness of another system, you should analyze the source data.  High-level summaries such as Classify, and simple Aging and Statistics on date fields, can be very informative and may identify additional risks to be addressed during the audit.  Not only did we identify control weaknesses around key date fields – and a fraud involving the deliberate future dating of inspection data in exchange for kickbacks – but the high-level summaries identified duplicate records which had to be addressed before we could perform a proper Relate.

Simply having access to data provides an opportunity to assess risks and controls in the system.  My consulting contract was over, so I am not sure what happened after I left, but I imagine that the auditors will be examining the building permit and water meter processes in future audits.  Based on our limited review of the data, there are control weaknesses and fraud risks in these business processes that should be examined.

Note: In thinking back on the analysis performed, I wish I had performed an additional analysis examining unit and apartment numbers for addresses with multiple units.  Units or apartments that are not in the property tax data (e.g. Apt 101, 102, 103, 104, 105, 201, 202, 203, 205 – possibly missing Apt 204; or Unit 10, 12, 18 – possibly missing 14 and 16) could be instances of incomplete data.  I do realize that this would likely involve more coding and manual follow-up to eliminate the false positives, but it could be worthwhile.

Adding Value to Compliance Audits – part 2

The following post is part 2 of "Adding Value to Compliance Audits".

Given a good understanding of the current level and sources of risk, the next step is to look at the requirement for, and the adequacy and effectiveness of, the controls to mitigate the risk.  This requires an understanding of the cause and source of the risk and the operation of the control.  Is the control still required?  Does the current control address the root cause?   Are there better ways to mitigate the risk?  By answering these questions the audit may identify unnecessary controls, ineffective controls, or better controls to address the current risk.  All of this may reduce the cost of compliance, while improving risk mitigation.  Recommendations such as automating a control can save time and effort and be seen as a real value add.

The next step would be to verify that the control activities are being performed (i.e. compliance).  However, you are not done yet.  If you find non-compliance it is still not sufficient to recommend “Do A”.    Audit recommendations should address the root cause.  Identifying a lack of compliance is not the same as determining why management is not complying (i.e. determining the cause).  Was management aware of the requirement? Are they capable of complying? Are there compensating controls that have been implemented?

To determine the cause of non-compliance, asking "Why" (usually several times) is often sufficient.  You should also determine the impact of non-compliance.  Then, instead of "Do A", audit can provide a rationale and make a recommendation that actually assists management in complying.

The next step is to ensure that you are doing the audit right – this means maximizing your use of all the resources available to you, including analytics.   Data analytics can be defined as the application of analysis techniques to understand business processes, to identify and assess risks, to test controls, to assess efficiency and effectiveness, and to prevent, detect and investigate fraud.  Data analytics techniques, ranging from simple categorization and stratification to sophisticated predictive and prescriptive models, can assist organizations in focusing their risk responses on the areas in which there is a higher risk – including compliance risk.

Existing levels of risk can be assessed and trends identified to determine if the risk is increasing or decreasing.  For example, environmental compliance could examine spills (number and quantity), cleanup costs, and lawsuits (quantity and value); production compliance could examine material, personnel, maintenance and operational costs.  By examining measures over several months or years, a trend line can be produced to assess the effectiveness of mitigation efforts and identify emerging risks.

Rather than relying solely on substantive tests, the effectiveness of controls can also be tested with analytics.  In addition, you can look at trends that will have positive or negative effects on compliance.  For example, environmental compliance can examine the control over the purchasing of hazardous materials – ensuring that the purchase quantities match requirements – thereby avoiding environmental compliance issues around disposal.   Compliance with hiring practices could review staffing methods, staffing rates (by gender, by race, etc) to ensure proper procedures are being followed and address employment equity requirements before they become non-compliance issues.

Sometimes compliance with a poor control can increase risk and dysfunctional behaviour; and cultural issues can make enterprise-wide compliance difficult for global companies and increase risk.  Doing the right compliance audit – not simply "did we do A" – and doing it efficiently and effectively can result in significant value to the organization and remove the "gotcha" stigma of compliance audits.  However, it requires auditors to re-examine the compliance-related risks and controls and to use analytics.

Richard Chambers, President of the IIA, identified his top ten imperatives of the decade, which highlight the challenges that auditors must face to provide value to management.  These include enhancing proficiency with data mining and analytics; providing assurance on risk management effectiveness; and enhancing and leveraging a continuous focus on risk.   These challenges can be applied to all types of audits, from compliance to operational.  He encouraged auditors to look at business objectives and the risks to the achievement of those objectives, and to design audits that provide assurance over the governance, risk management and control/compliance frameworks put in place by management.  A compliance audit should not be any different: it should identify and assess risk, and examine the effectiveness and efficiency of the controls to mitigate the risk.   By doing so, it will add value to the company as well as provide assurance to senior management.

Accounts Payable Example

In an Accounts Payable audit there was a requirement to formally authorize invoices for payment by stamping and signing the original invoices.  The stamp and approval verified that goods/services had been received in accordance with the contract and that the invoice could be paid.  Falsifying this approval had serious legal repercussions – including up to 5 years imprisonment.

The audit covered numerous accounts payable offices spread across the globe.  As a part of the audit we verified that invoices had been properly approved i.e. stamped and signed by the authorized approval authority.  At several locations we noted that the invoices were not being properly authorized (stamped and signed).  But the reasons for non-compliance differed.  In one small office (AP1) they were unaware of the requirement.  We identified an underlying problem with corporate communication of financial regulations, including a lack of translated procedures.  In another office (AP2), they had been told by the legal department that the stamp that was being used did not contain the appropriate wording and they should immediately stop using the stamp and obtain the official corporate stamp with the correct wording.  The local A/P manager had been trying for months to obtain an official corporate stamp – he even showed us numerous emails – to no avail.  At another location (AP3) they had converted to electronic invoices and authorization – so they were no longer stamping and signing invoices.

A compliance audit that did not ask “why” might easily have issued the simple recommendation “stamp and sign all invoices” – adding zero value to the A/P process.  Adding value to this compliance audit would have had very different recommendations.

Starting with the risk: The control was put in place to ensure that we were not paying for goods/services we did not receive; and that goods/services were of the quality, quantity, and price agreed to in the contract.  Given the nature of decentralized contracting, the risk still existed and a control was required.

The second step would have been to determine if the control was effective and efficient.  At AP1, the control was not working because of a problem in the corporate communication area – we had acquired a new overseas operation and regulations had not been translated.  This required a different recommendation; one that would address the root cause – corporate communication – and did not penalize the local A/P manager.

At AP2 non-compliance was attributed to a breakdown between the legal and finance departments.  Legal was reviewing all official stamps and finance was responsible for updating, revising and supplying them.  Unfortunately, the two departments were not coordinating their work and finance was unaware of the problem with the invoice authorization stamp.  This recommendation addressed the communication between departments.

At AP3, the physical stamping and signature of the invoice had been replaced by an automated approval.  Recommending compliance with the current regulation would be ludicrous.  However, the automated controls needed improvement to verify the authority of the person providing the electronic approval.  As a result, a recommendation was made to address the weakness in the automated control.

The result of the compliance audit improved the corporate communication processes, interdepartmental activities, and IT controls.  The recommendations were seen as having value – much more than “Stamp and sign all invoices” would have received.

In addition, the audit of the efficiency and effectiveness of the A/P process can benefit from the use of analytics.  The controls over approval can easily be tested by matching the electronic approval with a database of approvers.  Examining actions by users can identify instances where separation of duties was not achieved.  Totals by payment terms or payment method can quickly highlight inefficient practices or even fraud.  The resulting recommendations can improve compliance and reduce business risks while adding value.

Adding Value to Compliance Audits – part 1

I have often been critical of compliance audits, but I recently realized that it is not the ‘compliance audit’ that bothers me, but the way it is done.  This led me to write the following thoughts.

It is difficult to argue that compliance audits are not an important internal audit product.  Done properly, they can protect a company from fines, penalties and even criminal charges.  For example, non-compliance with anti-money laundering legislation has recently had serious consequences.  A financial institution was fined $1.93B for failure to conduct basic money laundering due diligence in its operations in Mexico.  But it doesn't stop there: the institution is also being sued by the families of people murdered by Mexican drug cartels.  Non-compliance with environmental regulation has had significant monetary and non-monetary impacts on companies.  The Environmental Protection Agency's enforcement actions include administrative, civil and criminal penalties.  And the SEC has civil and criminal penalties for insider trading and other non-compliant activities.  Despite the seriousness of non-compliance, compliance audits are often not seen to be of value by many managers, possibly because these audits often look something like this:

  • Objective: Verify compliance with “A”
  • Criterion – you are supposed to do “A”
  • Condition – the audit found you were not doing “A”
  • Recommendation – Do “A”

And some auditors wonder why the client does not see any value in a compliance audit – particularly if they already knew that they were not complying with the requirements to do “A”.  These audits fail to identify the cause, and the impact, of the non-compliance.  Auditors need to do more – not only to ensure that compliance audits are providing real assurance to senior management – but also to be seen to be adding value.

There are two basic things you can do to add value to compliance audits: do the right audit; and do it right.  Doing the right audit means examining why there is a compliance requirement in the first place.  Typically it is for legal, regulatory or operational reasons.  But behind the simple compliance "you must do A" there is a risk that was deemed serious enough for management or regulatory/legal authorities to put in a compliance requirement.  Now you are auditing compliance with that requirement – perhaps because there is a mandatory requirement to verify compliance on a regular basis.  However, risk shifts quickly in an economy where "speed of change" is a critical success factor of business, and it morphs rapidly in a world where globalization and automation result in shifts in the strategic and operational initiatives of global enterprises.  Yesterday's risks and compliance requirements are not always the same as those of today.  Changing risks and compliance requirements can affect not only the need for the compliance controls but also their adequacy.

The Institute of Internal Auditors’ (IIA) “Three Lines of Defense in Effective Risk Management and Control” model specifically addresses the “who and what” of risk management and control. The overarching context of the model acknowledges the broader role of organizational governance and governing bodies.  The model encourages auditors to expand their role to include risk and compliance.   In addition, it is not enough that the various risk and control functions exist — each must have a well-defined role and their efforts should be coordinated to avoid duplication and gaps in controls.  As a result, it is not uncommon to find teams of internal auditors, enterprise risk management specialists, compliance officers, internal control specialists, quality inspectors, fraud investigators, and other risk and control professionals working in concert to help their organizations manage risk.

Senior management and governing bodies collectively have responsibility and accountability for setting the organization’s objectives, defining strategies to achieve those objectives, and establishing governance structures and processes to best manage the risks in accomplishing those objectives.  The second line of defense includes the risk management, compliance, financial control and IT functions that oversee risk.  While the compliance function monitors specific risks, such as non-compliance with applicable laws and regulations, internal audit provides the independent assessment over risk – the third line of defense.

If you are a manufacturing plant, there are probably numerous environmental regulations that you must comply with; if you are a publicly traded company, you will have SOX and other financial and legal rules and regulations.  Virtually every company will have a set of policies and procedures that must be followed to protect it from lawsuits, prosecution, reputational damage and other risks.  These are the areas where compliance requirements will be established and where audit will perform compliance audits.

Internal audit provides assurance on the effectiveness of governance, risk management, and internal controls (GRC), including the manner in which the first and second lines of defense achieve risk management and control objectives.  The scope of this assurance covers a broad range of objectives, including compliance with laws, regulations, policies, procedures, and contracts (IIA Position Paper: Three Lines of Defense in Effective Risk Management and Control, June 2013).  But it should not be compliance simply for compliance’s sake.  Audit should be mindful of the overarching business objective and the controls that are put in place to help mitigate risks to the achievement of that objective – even when examining controls with compliance requirements.

Deconstructing the top-level strategy into key goals and objectives will help you identify the enterprise-level risks that threaten the achievement of those goals; the process-level control objectives that mitigate those enterprise risks; the process-level risks; and the controls that mitigate the process-level risks.  The compliance activities will likely be closely related to these process-level risks and controls, and these are the risks and controls that should be assessed.

The risk tolerance around an activity subject to compliance requirements may be closer to zero than for other activities of the organization.  However, transforming a compliance audit into a value-added activity still starts with the determination of the audit objective.  This sets out, in clear terms, what the audit seeks to accomplish and drives the scope, criteria, work plan and final results.  If the audit objective is simply to verify compliance with “A”, then you will fall into the trap of concluding “You are not doing A” and recommending “Do A”.  However, if the audit considers the compliance-related business objective and the associated risk, and has an objective such as “verify the need for, existence of, and adequacy of compliance with ‘A’”, it will be better positioned to address the governance, risk management and compliance issues, as well as to add value.

Given this type of audit objective, one of the first steps would be to perform a risk assessment to determine whether the original risks and compliance requirements still exist.  They may have been eliminated by a change in operations (e.g. we are no longer making that product, or we are no longer using that manufacturing process).  The risk may have been transferred to someone else (we subcontracted out the operation).  Business process re-engineering, changes in location and retooling are just a few of the many reasons why the original risk, and the associated need for compliance, may have been eliminated, transferred or lessened.  In these cases, the value added might be the elimination of the requirement to comply: no risk – no compliance requirement.

****** more next week *******

Year 30 + P-card fraud

I didn’t realize how quickly I would get to 30 years when posting one blog per week for each year (30 weeks).  Even drawing some of the posts out to two weeks didn’t add much.  So now I am posting additional analyses performed over the years.  Another thing I didn’t take into account was that I would continue to perform analysis – even after I retired.  So I will likely have enough to continue to post – maybe not every week, since I am trying to slow down a bit.

I have often said that I never performed the same audit twice.  This is not entirely true.  I certainly have performed accounts payable and payroll audits more than once, but for different organizations.  I have also done a variety of audits around contracting and construction or major capital projects.  But I have never implemented the same audit program twice.  There were always new risks, additional concerns, and different analyses to be done.  This has made every audit a unique challenge.

The audit that comes closest to being repetitive is p-cards.  I first mentioned this in my Year 2000 post, which described a standard set of analyses I performed to find misuse, abuse and fraud in p-card charges.  It started when I was tasked with assisting the U.S. IG with some complicated analysis looking at totals by cardholder within any 5-day period.  The audit of p-cards continued with my own company, and the standard analysis scripts I developed have been used over and over again in various organizations.
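
The 5-day test is easy to describe but worth seeing in code.  The actual analysis was done in ACL; what follows is only a rough Python/pandas sketch of the same idea, with the column names (cardholder, trans_date, amount) and the $10,000 threshold invented purely for illustration.

    import pandas as pd

    # Hypothetical p-card transactions; the column names and the $10,000
    # threshold are assumptions made purely for illustration.
    txns = pd.DataFrame({
        "cardholder": ["C001", "C001", "C001", "C002", "C002"],
        "trans_date": pd.to_datetime(
            ["2023-03-01", "2023-03-03", "2023-03-04", "2023-03-01", "2023-03-20"]),
        "amount": [4000.00, 3500.00, 3800.00, 900.00, 1200.00],
    })

    LIMIT = 10000.00  # assumed threshold for any 5-day window

    # Sum each cardholder's charges over a rolling 5-day window of transaction dates.
    windowed = (
        txns.sort_values("trans_date")
            .set_index("trans_date")
            .groupby("cardholder")["amount"]
            .rolling("5D").sum()
            .reset_index(name="five_day_total")
    )

    # Flag windows whose total exceeds the limit for follow-up.
    print(windowed[windowed["five_day_total"] > LIMIT])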

Perhaps not coincidentally, I was asked to develop an analysis program for p-cards again a couple of weeks ago.  Many of the tests were the same as those I had performed numerous times, covering the usual risks: split transactions to avoid financial limits, duplicates to detect merchant fraud, personal expenditures charged to corporate p-cards, etc.  But this time the organization actually had a list of prohibited Merchant Category Codes (MCCs) that could be verified by individual cardholder.  This particular test was made more difficult because the list of prohibited MCCs was formatted like “4511, 3351-3499, 7512, 3500-3999, 7011, 4814-4815, 5541-5542, 7523-7524, 4112, 4817-4821”. While I did develop a script that expanded “3351-3499” into “3351, 3352, 3353, …. 3499” using nested loops, I thought there might be an easier way, so I posted the question on the ACL Peer Community (aka User Forum).  One of the regulars, Thomas Larson, posted a much easier script that used BETWEEN() when there was a range such as “3351-3499”, and FIND() when it was a single MCC. However, this is not the point of my story.  My point is that, once again, I found misuse, abuse and possibly fraud (still to be reviewed and verified) in p-card transactions.
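
For readers who don’t use ACL, here is a rough Python sketch of the prohibited-MCC logic – the same idea as the BETWEEN()/FIND() approach: parse the list once into single codes and ranges, then test each transaction’s MCC.  The transaction data and column names are invented for illustration.

    import pandas as pd

    # The prohibited-MCC list, formatted the way it arrived.
    prohibited = ("4511, 3351-3499, 7512, 3500-3999, 7011, 4814-4815, "
                  "5541-5542, 7523-7524, 4112, 4817-4821")

    # Parse the list once into single codes and (low, high) ranges.
    singles, ranges = set(), []
    for item in (p.strip() for p in prohibited.split(",")):
        if "-" in item:
            low, high = item.split("-")
            ranges.append((int(low), int(high)))
        else:
            singles.add(int(item))

    def is_prohibited(mcc: int) -> bool:
        """True if the MCC is a listed code or falls inside a listed range."""
        return mcc in singles or any(low <= mcc <= high for low, high in ranges)

    # Hypothetical transactions; the column names and values are invented.
    txns = pd.DataFrame({
        "cardholder": ["C001", "C002", "C003"],
        "mcc": [5411, 3360, 7011],
        "amount": [82.50, 451.00, 389.99],
    })

    txns["prohibited_mcc"] = txns["mcc"].apply(is_prohibited)
    print(txns[txns["prohibited_mcc"]])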

P-cards were introduced because they are cheaper to process than invoices, and they have additional controls both at the bank and at the organization level.  Why, then, do I consistently find issues with p-card transactions?  The single largest contributing cause is inadequate management review of p-card transactions.  Employees who have been assigned a p-card are often asked to sign an official-looking form stating that they understand the rules around p-card use (basically, the card is only to be used for business purposes that comply with policy).  As a second level of control, the employee’s manager (or a p-card manager) is tasked with reviewing the employee’s use of the p-card.  For some managers this can mean reviewing the p-card usage of 50-100 employees, which amounts to thousands of transactions.  Since many are small dollar amounts, managers can be less than diligent, providing employees with the opportunity they need to commit fraud.
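
One quick way to quantify the span-of-control problem is simply to count cardholders, transactions and dollars per approving manager.  The sketch below is a hypothetical Python/pandas version of that profile; the column names are assumptions for illustration.

    import pandas as pd

    # Hypothetical transactions with an approving manager on each record;
    # the column names are assumptions for illustration.
    txns = pd.DataFrame({
        "approver":   ["M01", "M01", "M01", "M02", "M02"],
        "cardholder": ["C001", "C002", "C003", "C004", "C004"],
        "amount":     [120.00, 75.50, 310.00, 48.25, 99.99],
    })

    # Quantify each approver's span of control: cardholders, transactions, dollars.
    workload = txns.groupby("approver").agg(
        cardholders=("cardholder", "nunique"),
        transactions=("cardholder", "size"),
        total_amount=("amount", "sum"),
    )

    # Approvers with an unmanageable span of control are the review risk.
    print(workload.sort_values("transactions", ascending=False))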

However, sometimes it is a higher-level manager who commits the fraud.  For example, we had one such fraud in the city where I work: the finance director at a charitable organization charged personal expenses to her corporate credit card, including $78K in home furnishings and new appliances, $69K in groceries, and $50K for gas and car repairs.  She covered the expenses for 8 years with transfers ranging from $663.03 to $40,500.00 from various accounts.  A simple review of transactions by MCC would have identified this fraud in the first few months.
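
That “simple review of transactions by MCC” can be as basic as a summary of spend by cardholder and merchant category – the kind of result CLASSIFY or CROSSTAB produces in ACL.  A hypothetical Python/pandas sketch, with invented data and column names:

    import pandas as pd

    # Hypothetical transactions; column names and category descriptions are invented.
    txns = pd.DataFrame({
        "cardholder": ["FD01", "FD01", "FD01", "C002"],
        "mcc_desc":   ["Home Furnishings", "Grocery Stores",
                       "Service Stations", "Office Supplies"],
        "amount":     [7800.00, 5200.00, 3100.00, 240.00],
    })

    # Spend by cardholder and merchant category - the kind of summary that
    # makes personal-type purchases stand out quickly.
    summary = pd.pivot_table(txns, index="cardholder", columns="mcc_desc",
                             values="amount", aggfunc="sum", fill_value=0)
    print(summary)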

Back in 2000, the office of the Inspector General in the US did a government-wide audit and identified the following control weaknesses, which, based on audits I have been involved in, are also applicable to non-government companies:

  • Inadequate review of purchases by approving officials
  • Unmanageable span of control
  • Excessive number of cardholders
  • Exceeding authorized purchase limits
  • Lack of/inadequate documentation
  • Inappropriate purchase methods
  • Unrecorded accountable property
  • Lack of security over purchase cards
  • Inadequate training for cardholders and approvers
  • Inappropriate financial coding
  • Inadequate reconciliation

I have seen numerous cases where:

  • The approving official’s review is the most essential element of the p-card control system – the approver should ensure purchases are appropriate and charges are accurate – yet the span of control can be quite large (thousands of cardholders), making it difficult to perform an adequate review
  • Cardholders have developed unique ways to get around purchase limits, including one case I know of where a cardholder had a consultant who was working for them write a letter to the credit card company asking for the limit to be raised – it was
  • People confuse having a credit card with “authority to purchase” and are able to bypass purchasing controls
  • Items that are purchased are often not recorded in any corporate system – this includes computers and other expensive and attractive items
  • Cards are lost, stolen, misplaced and often not reported
  • Financial coding is often “general office supplies” even though many different types of items are purchased, making it difficult to reconcile transactions.

ACL Commands: FILTERS, CLASSIFY, CROSSTAB, EXTRACT, and RELATE

Lessons Learned: the implementation of an improved system of controls (p-card versus accounts payable invoices) can still have serious weaknesses and must be assessed.  Also, when you are relying on managers to perform (manual) reviews of thousands of transactions, the likelihood of this being a good control is small.

In addition, control weaknesses in one company, or one portion of a company, likely exist elsewhere.  When performing a fraud risk assessment, be sure to look at what is happening in your own company and in others.  Fraud schemes are often repeated whenever and wherever similar control weaknesses exist.

Lastly, despite close to 30 years of using ACL, I can and do ask for help.  Some of the users on the Peer Community have analytical skills that put mine to shame; and they offer them freely to those of us who ask for help.

Missing items

I have been away on vacation and now I have exams to mark and Christmas preparations to finish.  So, I must confess that these examples are fillers as I have been too busy to write much else these days.  However, I do still feel that they have value.

Sometimes fraud is detected through the identification of missing items or transactions; in other cases, unexpected transactions are found, highlighting the fraud.  In some fraud investigations, identifying what is not there can be as important as finding out what is there.  Missing accounts receivable payments, branch offices not reporting revenue, or unrecorded receipts of goods are just a few of the many symptoms of fraud.  In a similar fashion, duplicate invoice payments or payroll check numbers may be the result of fraudulent activity.  The following two examples illustrate the utility of identifying missing items in a sequence or unexpected consecutive items.

Case Study: Free Calls

Employees were permitted to use their office phone to make long distance calls as long as they paid for the charges.  From the employees’ perspective, it was a good deal since the company had a special discounted rate with the telephone carrier.  From the company’s perspective, it was a good deal too: employees liked the arrangement, and the increased call volume allowed the company to negotiate a better long distance discount rate.  Every week Caroline in accounts payable reviewed the long distance bill and identified calls to non-company numbers.  The person from whose phone the call was made was asked to complete a ‘long distance call’ slip detailing the date and cost of the call.  Each quarter, the employees were required to submit the long distance call slips and reimburse the company for the ‘personal calls’ portion.

One month, William accidentally lost a long distance call slip and, as a result, only reimbursed the company for the slips submitted.  Later, he found the missing slip, but did not disclose the error.  When nothing happened, he began to deliberately omit long distance call slips from time to time.

At the end of the year, the auditors tested the controls over the reimbursement of long distance charges and noted that there was no control to ensure that the employees were paying for all the personal calls they actually made.

The auditors reviewed the reimbursement records using the Gaps command.  Since the long distance call slips were pre-numbered, the test easily identified 26 missing slips.  The results were presented to Caroline who matched the numbers from the missing slips to the carbon copies of the slips.  William was identified as the culprit for 25 of the missing slips.  When approached by the auditors, he admitted to neglecting to include all of the weekly slips, and was ordered to reimburse the company.  In accordance with the strict policy on fraud, he was fired.
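
The Gaps test on pre-numbered slips is conceptually very simple.  This is not the ACL command itself, just a minimal Python sketch of the idea, with invented slip numbers:

    # Find numbers missing from a pre-numbered sequence of call slips.
    # The slip numbers below are invented for illustration.
    submitted_slips = [1001, 1002, 1003, 1005, 1006, 1010]

    expected = set(range(min(submitted_slips), max(submitted_slips) + 1))
    missing = sorted(expected - set(submitted_slips))

    print("Missing slip numbers:", missing)  # [1004, 1007, 1008, 1009]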

Case Study: Frequent Traveler

The assistant to the President was required to accompany the President on many of her frequent business trips across the country.  As a result, the auditors did not normally question the high travel expenditures of the President, or of her assistant.  However, they had received an anonymous tip that the assistant was committing fraud.

During the initial fraud investigation of the assistant’s hotel bills, the auditors calculated her total travel costs by type and noticed that she had significantly higher accommodation expenditures than the President.  The team leader was curious as to why this was the case and instructed his team to review all transactions related to her hotel expenditures.  In particular, they sorted them by invoice number and used the Duplicates command to look for duplicate invoices.  They also used the Gaps command to examine the sequence of these expenses.  As expected, there were no duplicates, and the Gaps command run on the invoice number revealed many gaps in the sequence of the hotel invoice numbers.  This was not surprising, since each hotel used its own invoice numbering system and had many clients, each with their own invoices.  What surprised the auditors was that the analysis showed 10 bills from one hotel with no gaps between them – they were in sequence, 20311 to 20320 – even though the dates of the invoices spanned several months.

The auditors checked with the hotel and discovered that the assistant had stayed at the hotel on the dates in question.  However, the billing manager told them that the invoice numbers were not part of their invoice sequence and had not been issued by the hotel.  The auditors brought the results of their analysis to the attention of the President and received permission to question the assistant.  The assistant admitted to using her computer to scan in a real invoice from the hotel.  She then used this scanned invoice to make copies, thereby falsifying her travel claims.  She would inflate her hotel bill every time they stayed at that hotel, which was often, since the President had a lot of business in that city and always stayed at that hotel.  Without realizing she was creating a detectable pattern, she simply incremented the invoice number each time she generated a new invoice.  The Gaps command allowed the auditors to find the altered invoices and uncover her scheme.
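
The reverse test – flagging invoice numbers that are suspiciously consecutive for the same vendor – is also easy to automate.  The following is a hypothetical Python/pandas sketch, not the auditors’ actual ACL work; the hotel names, invoice numbers and dates are invented:

    import pandas as pd

    # Hypothetical hotel invoices claimed by one traveller; the hotel names,
    # invoice numbers and dates are invented for illustration.
    claims = pd.DataFrame({
        "hotel":      ["Hotel A"] * 4 + ["Hotel B"] * 3,
        "invoice_no": [20311, 20312, 20313, 20314, 88102, 90577, 94210],
        "claim_date": pd.to_datetime(["2023-01-10", "2023-02-14", "2023-03-21",
                                      "2023-05-02", "2023-01-12", "2023-02-15",
                                      "2023-03-22"]),
    })

    # Within each hotel, flag an invoice whose number is exactly one more than
    # the previous claimed invoice.  Consecutive numbers across trips that are
    # months apart are unlikely for a busy hotel and warrant follow-up.
    claims = claims.sort_values(["hotel", "invoice_no"])
    claims["consecutive"] = claims.groupby("hotel")["invoice_no"].diff().eq(1)
    print(claims[claims["consecutive"]])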

Lessons-learned: The ability to see indicators of fraud in the data includes not only seeing what is there (and is not expected) but also what is not there (and is expected to be).  Missing items can range from inventory tags, purchase orders and health claim forms to transaction IDs in an ERP system (why are we missing 6 transactions?).  In other cases, you should have gaps – in fact, you should not have consecutive items (e.g. invoice numbers, credit card numbers (anything with a check digit), and various types of receipts).

The ability of auditors and investigators to analyze data is enhanced when they can manipulate and work with the data.  Sometimes it is necessary to create new fields that do not exist in the original file – such as the total value of inventory (quantity on hand * unit price).  ACL is ideal for this type of analysis, allowing new fields to be created and then compared with existing fields or data in other files.  It can perform simple calculations (quantity * unit price) or conditional calculations (markup is 57% if the product code is ‘01’, 68% if the product code is ‘02’, and so on).  Whether there are millions of records or only thousands, the analysis is fast and easy, and the results are often revealing.
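
As a final illustration, here is a small Python/pandas sketch of the computed-field idea: a simple calculated field (quantity * unit price) and a conditional one (markup by product code).  The file layout, product codes and markup rates are assumptions for illustration only; in ACL these would be computed fields defined on the table.

    import pandas as pd

    # Hypothetical inventory file; the layout, product codes and markup rates
    # are assumptions for illustration only.
    inventory = pd.DataFrame({
        "product_code": ["01", "02", "01"],
        "qty_on_hand":  [150, 40, 12],
        "unit_price":   [3.25, 18.90, 112.00],
    })

    # Simple computed field: total value of inventory.
    inventory["total_value"] = inventory["qty_on_hand"] * inventory["unit_price"]

    # Conditional computed field: markup rate depends on the product code.
    markup_by_code = {"01": 0.57, "02": 0.68}
    inventory["markup"] = inventory["product_code"].map(markup_by_code)

    # The new fields can now be compared with existing fields or other files.
    print(inventory)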