GPRA: Weaknesses in the Service Center Correspondence Examination Process Reduce the Reliability of the Customer Satisfaction Survey

April 2001

Reference Number: 2001-10-067

 

This report has cleared the Treasury Inspector General for Tax Administration disclosure review process and information determined to be restricted from public release has been redacted from this document.

April 20, 2001

MEMORANDUM FOR COMMISSIONER ROSSOTTI

FROM: Pamela J. Gardiner /s/ Pamela J. Gardiner

Deputy Inspector General for Audit

SUBJECT: Final Audit Report - GPRA: Weaknesses in the Service Center Correspondence Examination Process Reduce the Reliability of the Customer Satisfaction Survey

The attached report presents the results of our review of the Internal Revenue Service’s (IRS) Service Center Correspondence Examination Customer Satisfaction Survey process.

In summary, we found that IRS management has not established an effective process to ensure that the Customer Satisfaction Survey is conducted appropriately to measure the level of satisfaction customers receive from interactions with all Correspondence Examination program areas. We recommended that the Director, Compliance (Wage and Investment Division), work to ensure, among other things, that proper organization, disposal, and technique codes are used. Additionally, the Directors, Compliance and Organizational Performance Division, should work together to disclose survey response rates and study actions to increase the rates.

IRS management agreed to all of our recommendations except one. Management stated that the current use of a stratified sample with weighting factors for non-response and stratification imbalances is sufficient. We continue to believe that the large variation in the number of cases processed in individual service centers must also be accounted for in the survey. Management’s comments have been incorporated into the report where appropriate, and the full text of their comments is included in Appendix VII.

Copies of this report are also being sent to the IRS managers who are affected by the report recommendations. Please contact me at (202) 622-6510 if you have questions, or your staff may call Maurice S. Moody, Associate Inspector General for Audit (Headquarters Operations and Exempt Organizations Programs), at (202) 622-8500.

Table of Contents

Executive Summary

Objective and Scope

Background

Results

Processing Errors Could Affect the Survey Results

Weaknesses in Approving Access, User Profiles, and Inventory Validations Could Affect the Survey Results

Low Survey Response Rate Could Affect the Survey Results

Sample Selection Is Not Always Representative and Random

Conclusion

Appendix I – Detailed Objective, Scope, and Methodology

Appendix II – Major Contributors to This Report

Appendix III – Report Distribution List

Appendix IV – Organization Codes

Appendix V – Disposal and Technique Codes

Appendix VI – Inventory Compensating Controls

Appendix VII – Management’s Response to the Draft Report

Executive Summary

This audit was performed as part of the Treasury Inspector General for Tax Administration’s overall strategy to assess the reliability of the Internal Revenue Service’s (IRS) customer satisfaction performance measures as they relate to the Government Performance and Results Act of 1993 (GPRA). The overall objective of our review was to assess the validity of the information used to measure customer satisfaction for the Service Center Correspondence Examination function.

The GPRA requires federal agencies to establish standards for measuring their performance and effectiveness. The law requires executive agencies to prepare multi-year strategic plans, annual performance plans, and performance reports on prior year accomplishments. The first annual performance reports were to be provided to the President and the Congress in March 2000. The Congress will use the GPRA measurement results to help evaluate the IRS’ budget appropriation. Therefore, it is essential that the IRS accurately measure its success in meeting the performance goals.

The IRS prepared a strategic plan and an annual plan establishing goals for the agency. One of the IRS’ three strategic goals is to provide quality service to each taxpayer. The IRS is measuring its success in achieving this goal through surveys conducted by a vendor. Taxpayers are being asked to complete a survey to rate the service they received. These survey results are summarized and used to evaluate the overall satisfaction with the service provided by the IRS.

Results

IRS management has not established an effective process to ensure that the survey is conducted appropriately to measure the level of satisfaction customers receive from interactions with all Correspondence Examination program areas. Consequently, the IRS needs to qualify the use of any data from the Correspondence Examination Customer Satisfaction Surveys.

Processing Errors Could Affect the Survey Results

Processing errors, such as assigning incorrect organization codes and the incorrect use of disposal and technique codes, caused some returns to be excluded from the survey. In 1 service center, the use of an incorrect organization code resulted in as much as 80 percent of its work being excluded from the survey. The use of incorrect disposal and technique codes also resulted in 16 percent of the tax returns in our sample either being improperly excluded or incorrectly included in the survey.

Weaknesses in Approving Access, User Profiles, and Inventory Validations Could Affect the Survey Results

Some required procedures in the approval process for granting employees access to the computer system were not consistently followed. Some tax examining assistants and correspondence examination technicians had command codes in their user profiles that would allow them to make inappropriate changes and updates to the records on the database. In addition, inventory validations were not consistently conducted. When taken in the aggregate, these weaknesses increase the risk that the data, which are the basis of the survey, are not totally accurate.

Low Survey Response Rate Could Affect the Survey Results

The Internal Revenue Service Customer Satisfaction Survey National Report - Service Center Examination, dated March 2000, showed a response rate of 26.3 percent for the period July through September 1999. The response rate for all of 1998 was only 24.2 percent. Such low rates increase the risk that the opinions of the few who responded may not match the opinions of the many who did not.

Sample Selection Is Not Always Representative and Random

Surveys are issued to 100 taxpayers per month for each service center regardless of the number of tax returns closed by each service center that month. Because the volume of tax returns closed by each service center varies, this sampling technique does not result in a truly random selection process in which each taxpayer would have an equal chance of being included in the survey.

Summary of Recommendations

To address the issues involving the organization, disposal and technique codes, the Director, Compliance (Wage and Investment Division), should stress using the correct codes during training and require reviews to ensure the proper codes are used.

The Director, Compliance, should re-emphasize the importance of properly following all procedures when granting access to computer systems and re-evaluate the decisions to allow some technical employees to have command codes that allow them to adjust and delete information. If the employees must have adjustment capabilities, the Director should develop controls that will reduce the risk associated with the lack of separation of duties. Additionally, the Director should re-emphasize the need to follow inventory validation requirements and require each Examination unit to forward a validation certification to the Service Center Examination Branch Chief.

The Directors of Compliance and the Organizational Performance Division should disclose the current response rates and study what actions can be taken to increase them. Additionally, they should disclose the sampling limitations encountered and take care not to portray the taxpayers’ opinions obtained as representative of all taxpayers across the nation.

Management’s Response: Management agreed to all of our recommendations except the one involving the sample selection methodology. The Director, Compliance, will stress the use of the correct codes and proper procedures for allowing computer access, and will evaluate the need for employees to have adjustment capabilities. Also, the Director will emphasize inventory validations and require a validation certification. The Directors of Compliance and the Organizational Performance Division will disclose the current response rates and use telephone surveys in place of mail surveys. Management stated that the current use of a stratified sample with weighting factors for non-response and stratification imbalances is sufficient. Management’s complete response is included in Appendix VII of this report.

Office of Audit Comment: We continue to believe that because of the large variation in the number of cases processed in each service center, the current sample methodology does not allow each taxpayer an equal opportunity to be included in the survey.

Objective and Scope

This audit was performed as part of the Treasury Inspector General for Tax Administration’s overall strategy to assess the reliability of the Internal Revenue Service’s (IRS) customer satisfaction performance measures as they relate to the Government Performance and Results Act of 1993 (GPRA). The overall objective of our review was to assess the validity of the information used to measure customer satisfaction for the Service Center Correspondence Examination function.

We conducted this review in the Fresno (FSC) and Memphis (MSC) Service Centers. We held discussions with the Office of Program Evaluation and Risk Analysis (OPERA) in the IRS National Headquarters to address issues that may affect the validity of the customer satisfaction measure. We also conducted limited tests in the two service centers of correspondence examination inventory controls and the computer program that provides the survey population and the shipment of the resulting data to the vendor. Our tests were limited to assessing the reliability of the control environment in ensuring that accurate data are provided for the survey. Accordingly, we did not conduct a comprehensive review of all Service Center Correspondence Examination inventory controls or activities.

We performed this audit from January to November 2000 in accordance with Government Auditing Standards.

Details of our audit objective, scope, and methodology are presented in Appendix I. Major contributors to this report are listed in Appendix II.

Background

The Congress enacted the GPRA in 1993 to improve the quality and delivery of government services. The GPRA holds federal agencies accountable for program results by emphasizing goal setting, customer satisfaction, and results measurement.

Agencies were required to submit strategic plans by September 30, 1997, covering periods of not less than 5 years forward from the fiscal years in which they were submitted. Strategic plans are to be updated at least every 3 years. The GPRA also requires each agency to prepare an annual performance plan covering each program activity. Finally, federal agencies were to submit a report on program performance for the previous fiscal year to the President and the Congress no later than March 31, 2000, and no later than March 31 of each succeeding year. These reports are provided to the President and the Congress to assist them in appropriating and allocating federal funds. Therefore, it is essential that the data used for the performance measures are reliable and the results are verifiable and valid to ensure that proper conclusions are made by the President, the Congress, and the IRS.

The IRS prepared an interim strategic plan and annually prepares a performance plan. The IRS also established three strategic goals: provide quality service to each taxpayer, serve all taxpayers, and be productive through a quality work environment. Providing quality service to each taxpayer is a key part of customer satisfaction.

The IRS measures its success in achieving this goal by using customer satisfaction surveys to measure its programs. The IRS has contracted with a private vendor to conduct the surveys. One of the customer satisfaction surveys involves the Service Center Correspondence Examination function. Correspondence audits are conducted primarily through the mail by service center staffs and involve the Information Return Program and other correction and examination programs. The survey universe consists of individuals whose income tax returns were audited through a service center correspondence examination.

The IRS designed a computer program that runs at each of the 10 service centers to select the survey population. The program keys on certain Audit Information Management System (AIMS) data fields. The AIMS is a computerized system used to secure tax returns, maintain inventory control of examinations, record examination results, and provide IRS management with the statistical reports required under Examination and Compliance programs. Monthly, each service center creates a computer tape with the AIMS information and ships it to the vendor.

The vendor forwards the information to a sub-contractor who administers the survey process. The sub-contractor mails the questionnaires, tabulates the results, and follows up with a second questionnaire to taxpayers who did not respond initially. Summary results are furnished to the OPERA. The IRS uses the summary information to develop the customer satisfaction measure, which it will report to the Congress as part of the IRS’ budget submission, and to improve service to its customers.

Effective October 2000, the Director, Compliance, Wage and Investment Division (W & I), assumed responsibility for the Service Center Correspondence Examination Program. Previously, the Assistant Commissioner (Customer Service) was responsible for the Program.

Results

The current Service Center Correspondence Examination Customer Satisfaction Survey process gives the IRS the ability to use the survey results to report on two of its four business units. The taxpayers selected for the survey will be serviced by either the new W & I or the Small Business and Self-Employed (SB/SE) business units.

IRS management has not established an effective process to ensure that the survey is conducted appropriately to measure the level of satisfaction customers receive from interactions with all Correspondence Examination program areas. Consequently, the IRS needs to qualify the use of any data from the Correspondence Examination Customer Satisfaction Surveys. Specifically, the survey process contains the following limitations that should be disclosed when reporting the customer satisfaction measure.

Processing Errors Could Affect the Survey Results

The FSC used a range of organization codes that caused tax returns to be excluded from the survey. Also, incorrect disposal and technique codes were used in both the FSC and MSC, which caused some returns to be improperly excluded from the sample universe and others to be improperly included.

Incorrect organization codes were used

The IRS uses a computer program to identify the returns that are subject to the survey. The computer program uses the organization code as one of the selection keys. Returns coded within the range of 5000 through 5399 are to be included on the tapes that the service centers send to the vendor each month. Correspondence Examination tax return information for the 53 taxpayers that we sampled in the FSC was not forwarded to the vendor because of the organization codes assigned to them.

IRM 104.3 requires that Correspondence Examination groups, which primarily audit individual income tax returns in the service centers, use organization codes 5000 through 5399 to help identify the work they have done. Other groups are allowed to use organization codes 5400 through 5999 for other Service Center Examination Programs.

The FSC Correspondence Examination function managers decided to use organization codes 5900 through 5999 (rather than organization codes 5000 through 5399) because they felt the need to keep track of specific program volumes. Because of this decision, about 15,030 tax returns examined by Correspondence Examination (80 percent of all Correspondence Examination returns) were excluded from the tapes sent to the vendor for the period of October 1999 through June 2000.

For this same period, 5,016 tax returns were closed using organization codes 5000 through 5399 and these returns were incorrectly included in the survey population. In analyzing the data provided to the vendor, we found indications that similar conditions existed, to a lesser degree, at two other service centers. See Appendix IV for details.
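
The following minimal sketch (in Python, with hypothetical field names and sample records; it is illustrative only and does not reproduce the actual IRS extract program) shows the effect of the coding decision described above: a selection filter keyed on organization codes 5000 through 5399 can never select a return that was closed under codes 5900 through 5999.

    # Illustrative sketch only; field names and sample records are hypothetical,
    # not the actual AIMS extract program or its data layout.
    def in_survey_population(record):
        """Return True if a closed return would be written to the vendor tape,
        using the organization code range described in IRM 104.3."""
        return 5000 <= record["organization_code"] <= 5399

    closed_returns = [
        {"tin": "XXX-XX-0001", "organization_code": 5120},  # correctly coded; included
        {"tin": "XXX-XX-0002", "organization_code": 5950},  # FSC local coding; excluded
    ]

    vendor_tape = [r for r in closed_returns if in_survey_population(r)]
    # Only the first record survives the filter, so a taxpayer examined under
    # organization code 5950 can never be selected for the satisfaction survey.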

Incorrect disposal and technique codes were used

In addition to organization codes, disposal and technique codes are used in the sample selection process. Some codes are used to ensure that taxpayers who had undeliverable correspondence from the IRS are not included in the survey. Other codes are used to ensure that taxpayers who responded to a Statutory Notice are not included with those that did not respond.

In our judgmental sample of 103 closed tax returns, we determined that in 16 instances the tax examiner had used an incorrect code. Closing tax returns with incorrect codes had one of two effects: some tax returns were improperly included in the survey, while others were improperly excluded. Additionally, some taxpayers were included in the wrong category (e.g., those who responded versus those who did not respond), which also could affect the survey results. See Appendix V for a more detailed explanation of this condition and of the disposal and technique codes.

In our opinion, the use of incorrect codes, especially when the taxpayer responded to some prior correspondence but failed to respond to a Statutory Notice, may be caused by differing interpretations of directives and a lack of adequate training.

The combination of incorrect organization, disposal, and technique codes resulted in some returns being improperly excluded from the survey and others being incorrectly included, which could lead to inaccurate survey results.

Recommendations

The Director, Compliance, should:

  1. Issue a directive requiring adherence to the IRM 104.3 AIMS Processing Handbook and conduct operational reviews to ensure uniform use of organization codes within Service Center Correspondence Examination functions.

Management’s Response: IRS management agreed with the audit recommendation and, when notified that organization codes were used incorrectly, directed the service centers to review organization codes and make corrections as needed. A memorandum reemphasizing the correct organization codes for correspondence examination cases, as outlined in IRM 104.3, AIMS Processing Handbook, will be issued to all W&I Service Centers, and an IRM Procedural Update Alert will be posted on the Servicewide Electronic Research Program (SERP).

  2. Ensure that the use of proper codes is emphasized during training sessions and included as part of unit managers’ reviews of completed work.

Management’s Response: IRS management agreed with the audit recommendation and will advise centers of a recent change in the technique codes. Management will coordinate the update with the IRM 104.3 owners for inclusion in the next IRM 104.3 update. To reinforce correct procedures, they will develop a training package on Form 5344, AIMS closing document, emphasizing the correct use of disposal and technique codes.

Weaknesses in Approving Access, User Profiles, and Inventory Validations Could Affect the Survey Results

We conducted limited tests of select Service Center Correspondence Examination function controls to assess the reliability of the control environment. A sound control environment is an essential part of ensuring that accurate data are provided for the survey.

Required procedures in the approval process for granting employees access to the AIMS were not consistently followed. Some tax examining assistants and correspondence examination technicians had command codes in their profiles that would allow them to make inappropriate changes and updates to the records on the AIMS database. In addition, inventory validations were not consistently conducted. Failure to follow these procedures could affect the survey results.

Approval process

We selected an interval sample of 100 Examination employees (51 of 309 in the FSC and 49 of 194 in the MSC) who were granted access to the AIMS database to determine if their access to the system was properly approved and if they had only those command codes needed to conduct their jobs.
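
For illustration, the following minimal sketch (Python; a simplified depiction of interval sampling under the sample sizes stated above, not the exact selection steps we performed) shows how an interval sample can be drawn from an employee roster.

    import random

    def interval_sample(roster, sample_size):
        """Systematic (interval) sampling: choose a random starting point,
        then take every k-th name, where k is the rounded sampling interval."""
        k = max(1, round(len(roster) / sample_size))
        start = random.randrange(k)
        return roster[start::k]

    # A roster of 309 FSC Examination employees with a target of about 51 names
    # gives an interval of 6; 194 MSC employees with a target of about 49 names
    # gives an interval of 4. The realized sample size varies slightly with the
    # random starting point (e.g., 51 or 52 names from the FSC roster).
    fsc_roster = [f"FSC-{i:03d}" for i in range(1, 310)]
    print(len(interval_sample(fsc_roster, 51)))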

Before being allowed access to the system, employees must complete an Information System User Registration/Change Request (Form 5081). The form must then pass through several approval levels. The employee’s supervisor, a security coordinator, and ultimately the system administrator all must indicate their approvals by signing the form. We examined a total of 353 Forms 5081. All but one of the forms had the employee’s signature or name on the form. However, 12 Forms 5081 did not have the employee’s supervisor’s approval signature; 86 did not have the security coordinator’s approval signature; and 7 did not have the system administrator’s approval signature.

Appropriate command codes

In the sample of 100 employees, 27 had command codes in their user profiles that allowed them to make changes and updates to the AIMS database when normally they would not have these codes: 17 were tax examining assistants and 10 were correspondence examination technicians. Viewed by job type, 17 of the 27 tax examining assistants (63 percent) and 10 of the 12 correspondence examination technicians (83 percent) in our sample had command codes in their user profiles that allowed them to make changes and updates to the AIMS database.

IRM (Part 114) Compliance and Customer Service Managers Handbook – Sub Section 3.4.1.1 – Security and Integrity Concerns is applicable to accessing automated systems. This section specifies that each employee who is provided access to automated systems is assigned a "computer profile" to limit his/her ability to view and change information based on his/her position. Examiners and their managers should have automated systems research capability only and are not to perform production functions, such as adjustments, changes, and deletions. This separation of duties is a fundamental internal control to ensure adjustments, changes, and deletions are subject to proper review.

Service Center Examination staffs have not consistently adhered to IRS procedures when granting access to the system and assigning command codes to individual employees. Approved access and separation of duties are basic internal controls that, if violated, could result in an increased risk that the AIMS database could be compromised.
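
The type of separation-of-duties check described above can be illustrated with the following minimal sketch (Python; the position titles are taken from this report, but the command codes and the policy table are hypothetical placeholders, not actual AIMS command codes).

    # Hypothetical role policy and command codes for illustration only; actual
    # AIMS command codes and profile layouts are not reproduced here.
    RESEARCH_ONLY_ROLES = {"tax examining assistant", "correspondence examination technician"}
    ADJUSTMENT_CODES = {"ADJUST", "DELETE", "UPDATE"}   # placeholder names

    def flag_profiles(profiles):
        """Return the employees whose research-only position holds an
        adjustment-capable command code (a separation-of-duties exception)."""
        return [
            p["employee"]
            for p in profiles
            if p["role"] in RESEARCH_ONLY_ROLES and ADJUSTMENT_CODES & set(p["command_codes"])
        ]

    profiles = [
        {"employee": "A", "role": "tax examining assistant", "command_codes": ["RESEARCH"]},
        {"employee": "B", "role": "correspondence examination technician",
         "command_codes": ["RESEARCH", "ADJUST"]},
    ]
    print(flag_profiles(profiles))  # ['B']; profile B warrants manager review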

Inventory validations

The IRS Service Centers maintain the AIMS database to provide the Service Center Correspondence Examination function information about tax returns in inventory and closed tax returns. However, inventory controls over the AIMS database were not consistently applied, which increased the risk that information on the AIMS database may not always be accurate.

IRM 104.3 AIMS/Processing Handbook requires examination groups to conduct one of three inventory validations: Quarterly inventory validations for examination cases in certain status; Statistical Sampling Inventory Validation Listing which must be conducted 26 times a year and requires a minimum sample of 100 cases each time; or the annual 100% Inventory Validation Listing (IVL) in which all cases are validated once a year.

Both the FSC and the MSC have elected to conduct the annual 100% IVLs. We reviewed the annual 100% IVLs for 19 groups: 11 in the FSC and 8 in the MSC. In the FSC, 3 groups conducted the required annual 100% IVLs, 5 groups provided incomplete validations, and the remaining 3 did not provide any validations. In the MSC, none of the 8 groups provided any documentation to show any attempts to conduct the required annual 100% IVLs.

Additionally, other inventory validations (like monthly validations and workload reviews) that might offset any possible negative effect from not conducting the annual 100% IVLs were also not routinely conducted. Because the required annual 100% IVLs were not consistently performed, and available compensating controls were not employed or in place, there is an increased risk that the information on the AIMS database may be incomplete or inaccurate. If the database is incomplete or inaccurate, then the basis for the survey might be flawed, which could result in inaccurate survey results. See Appendix VI for additional information on compensating controls.

Recommendations

To address the issues of access, computer profiles, and inventory validations, the Director, Compliance, should:

  1. Re-emphasize the importance of following all procedures when granting access to computer systems.

Management’s Response: The Director, Compliance, will issue a joint memorandum with Information Systems to all W&I and SB/SE sites to reemphasize existing IRM procedures for completion of Form 5081.

  2. Re-evaluate the decisions to allow some technical employees to have command codes that allow them to adjust and delete information. If the employees must have adjustment capabilities, develop controls, such as random reviews by the unit manager, that will reduce the risk associated with the lack of separation of duties.

Management’s Response: On January 10, 2001, the Director, Compliance, W&I Division, issued a memorandum reemphasizing existing IRM procedures on limiting employee access to sensitive AIMS command codes. The Director, Compliance Services, SB/SE Division, received a copy of the memorandum for coordination and dissemination to the SB/SE sites.

  3. Re-emphasize the need to properly adhere to IRM inventory validation requirements by requiring the Examination Units to submit a validation certification to the Service Center Examination Branch Chief.

Management’s Response: An IRM Procedural Update will be issued revising the procedures for inventory validations to include annual confirmation by the Field Compliance Directors to Headquarters that inventory validations have been completed. The update will be posted on the SERP.

Low Survey Response Rate Could Affect the Survey Results

The Internal Revenue Service Customer Satisfaction Survey National Report - Service Center Examination, dated March 2000, showed a response rate of 26.3 percent for the period July through September 1999. The response rate for all of 1998 was only 24.2 percent. These low response rates mean that the IRS is attempting to project the opinions of the relatively small percentage of taxpayers who responded onto the much larger percentage of taxpayers who did not respond.

IRM Section 1282.43, Procedures for a Statistically Valid Sample Survey When No Comparisons Are Made, specifies that " . . . the response rate for all surveys conducted by IRS should be at least 70 percent." IRM Section 1282.72, Missing Data, states that " . . . because non-response is a cause of non-sampling error, all personnel conducting surveys should use follow-up letters to try to achieve at least a 70 percent response rate." The General Accounting Office’s (GAO) Program Evaluation and Methodology Division Guidance 10.1.7 specifies that " . . . in order to make plausible generalizations, the effective response rate should usually be at least 75 percent for each variable measure."

The GAO guidance also provides that non-responses must be analyzed because high or disproportionate non-response rates can threaten the credibility of the survey and the ability to generalize to the population. Accordingly, the non-respondent population should be analyzed unless the response rate is over 95 percent.

The vendor informed us that if a taxpayer does not respond to the initial survey, it follows up by mailing a second survey to that taxpayer. The vendor takes no further action to increase the response rate, such as attempting to make direct telephone contact with taxpayers who did not respond to the initial survey. The vendor agreed that the non-respondents’ attitudes are often different from those of respondents and that the low response rate should be considered when reporting the survey results.

Recommendations

The Directors, Compliance, and Organizational Performance Division, should:

  1. Fully disclose the survey response rates and caution how the results should be interpreted in any documents in which the results are published.

Management’s Response: The Directors agreed the response rates are low, and the rates will be disclosed with the appropriate caution. Pending final receipt of funds, telephone surveys (which, in general, have a higher response rate) will be used rather than mail surveys. Telephone surveys will use up to seven callbacks to try to reach the taxpayer. In addition, pre-notification letters will be sent to taxpayers before contacting them by telephone.

  2. Study what actions can be undertaken to increase the response rates to IRS required levels.

Management’s Response: IRS management agreed the response rates are low, but rather than study potential alternative actions to improve mail surveys, the IRS will use telephone surveys for Service Center Examination.

Sample Selection Is Not Always Representative and Random

We reviewed the sampling methodology used to select the taxpayers chosen to receive the Customer Satisfaction Survey. Despite a wide variation in the number of cases closed by Service Center Correspondence Examination in the 10 service centers, 100 cases per month are selected for each center. The survey results are then aggregated for all 10 centers and projected over the survey population.

There is a significant variation in the number of tax returns closed by Correspondence Examination Units in each service center. In 1999, the volume of closed Correspondence Examination cases ranged from a low of 27,971 in the Atlanta Service Center, representing only 4.08 percent of the national total of all examined tax returns, to a high of 134,047 in the Ogden Service Center, representing 19.54 percent of the national total. The following chart shows the number of tax returns closed in each service center.


Service Center     Number of Returns Closed by     Percentage of Returns Closed by
                   Correspondence Examination      Correspondence Examination

Andover                     44,567                            6.50%
Brookhaven                 125,841                           18.34%
Philadelphia                42,968                            6.26%
Atlanta                     27,971                            4.08%
Memphis                     90,554                           13.20%
Cincinnati                  54,981                            8.02%
Kansas City                 36,382                            5.30%
Austin                      44,790                            6.53%
Ogden                      134,047                           19.54%
Fresno                      83,869                           12.23%
TOTAL                      685,970                          100.00%

Source: IRS data from the AIMS Closed Case Database for 10/01/1998 - 09/30/1999.


If the survey results are to be used to estimate or project over a population larger than the sample, the type of sample taken must be a random statistical sample. In order to obtain a truly representative random sample, every taxpayer contacted in the Correspondence Examination process must have an equal chance of being selected. Since that is not being done, the sample is not representative of the total population of taxpayers.
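
The following worked illustration (Python; an approximation that uses the annual closure volumes from the table above and assumes a constant 100 selections per center per month) shows how unequal the chance of selection is from one service center to another.

    # Annual closure volumes are taken from the AIMS Closed Case Database table above.
    closures = {"Atlanta": 27_971, "Ogden": 134_047, "Fresno": 83_869}
    SELECTED_PER_YEAR = 100 * 12   # 100 cases per month per service center

    for center, volume in closures.items():
        probability = SELECTED_PER_YEAR / volume
        print(f"{center}: about 1 in {volume / SELECTED_PER_YEAR:.0f} "
              f"({probability:.1%}) chance of selection")
    # Atlanta: about 1 in 23  (4.3%)
    # Ogden:   about 1 in 112 (0.9%)
    # Fresno:  about 1 in 70  (1.4%)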

The vendor and IRS management stated that the sampling methodology was designed to gather data on a service center basis rather than a national basis. The IRS and the vendor agreed that the sampling methodology would be geared toward obtaining at least 400 taxpayer responses per service center per year, believing that 400 responses would be the ideal target to provide meaningful data for each individual service center. The agreed-upon sample selection method, while appropriate for individual service center results, does not suffice for a single "national" survey result as reported in the March 2000 National Report and other performance-related documents.

The survey must be random and representative of the total population in order to provide "national" data upon which predictions about taxpayer perceptions of the service they received can be made. That means that every taxpayer must have an equal chance to be included and that issues like significant population variations are accounted for in the survey sampling plan. If such survey techniques are prohibitive due to time or cost constraints, then the limitations of the sample methodology must be fully disclosed.

Recommendation

The Directors, Compliance, and Organizational Performance Division, should:

  1. Ensure that the sample selection methodology accounts for the population variances among service centers, and if that cannot be done, then properly qualify the sample limitations.

Management’s Response: IRS management disagreed with this recommendation. IRS management stated that the current methodology of using a stratified random sample with weighting factors to correct for non-response and stratification imbalances relies on well-accepted sampling and statistical analysis techniques, that each person has an equal chance of being selected within his or her stratum, and that the results are valid and do not need to be qualified in any way.

Office of Audit Comment: We agree that weighting for response and non-response imbalances is needed. However, we still maintain that the large disparity in the volume of cases processed in each service center must also be accounted for in the sample methodology. A statistician we consulted stated, "The selection of 100 cases per month per center is not a valid process if one wants to estimate the national population. Since the population per center varies by over 100,000 between the smallest and largest, each case would not have an equal chance of being selected."

A second statistician we consulted also noted that the weighting was not broken down by individual service center, and the survey did not derive estimates for the average customer service rating in each service center as required by the stratified random sampling design formula.
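
To illustrate the statistical point at issue, the following minimal sketch (Python; the closure volumes come from the table above, but the per-center satisfaction scores are hypothetical, chosen only to show the mechanics) contrasts a volume-weighted stratified estimate of a national mean with a simple pooling of equal-sized center samples.

    # Closure volumes are from the report's table; the mean satisfaction scores
    # per center are hypothetical and used only to demonstrate the arithmetic.
    strata = {
        "Atlanta": {"closures": 27_971, "sample_mean": 4.5},
        "Ogden":   {"closures": 134_047, "sample_mean": 3.5},
    }

    total = sum(s["closures"] for s in strata.values())

    # Stratified estimate: weight each center's mean by its share of the population.
    stratified = sum(s["closures"] / total * s["sample_mean"] for s in strata.values())

    # Unweighted pooling of equal-sized samples treats both centers alike.
    pooled = sum(s["sample_mean"] for s in strata.values()) / len(strata)

    print(f"stratified (volume-weighted) estimate: {stratified:.2f}")  # about 3.67
    print(f"unweighted pooled estimate:            {pooled:.2f}")      # 4.00

Unless each stratum's results are weighted by its share of the population, the opinions of taxpayers served by the smaller centers are overrepresented in any national figure.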

Conclusion

The processing errors and the inconsistent application of controls over approvals to grant system access, user profiles, and inventory validations increase the risk that the data provided to the vendor for the survey are not always accurate. This, in turn, increases the risk that the survey results are not a complete and true reflection of taxpayer opinions. Additionally, the survey’s low response rate and its sample selection methodology increase this risk; both must be fully disclosed, and the data must be properly qualified.

Appendix I

Detailed Objective, Scope, and Methodology

The overall objective of this review was to assess the validity of the information used to measure customer satisfaction for the Service Center Correspondence Examination function. To accomplish this objective, we reviewed the process used to identify taxpayers for inclusion in the vendor survey process and the application of the data received from the vendor. We conducted the following tests:

I. Determined if the Audit Information Management System (AIMS) database is an accurate and valid source of information for the sample selection of Customer Satisfaction Surveys for the Service Center Correspondence Examination function.
   A. Conducted a test to determine the accuracy of the data on the AIMS 7107 tape sent to the vendor.
      1. Compared AIMS 7107 data to a judgmental sample of 53 closed examinations (worked April 6 through April 10, 2000) in the Fresno Service Center (FSC) and 50 closed examinations (worked May 5 through May 8, 2000) in the Memphis Service Center (MSC).
      2. Reviewed the AIMS 7107 file to determine whether the data in the fields agreed with the case selection criteria.
      3. Reviewed the closing codes on the AIMS 7107 file and reviewed the tax returns in step I.A.2. to determine whether they agreed.
      4. Compared MSC and FSC AIMS 7107 data to determine whether information provided to the vendor was consistent between service centers.
      5. Analyzed the Functional Specification Package for the AIMS 7107 file to determine whether the programming logic matched the survey extract criteria in the Request for Information Services (RIS). We did this by comparing the information fields on the 7107 tape to the selection criteria identified on the RIS.
   B. Analyzed Executive Management Support Systems, Statistics of Income, and AIMS data to identify the percentage of Examination case closures that had a Masterfile Tax Code (MFT) of 30 and the corresponding staffing level of each Service Center Examination Division. Determined whether there was a disproportionate number of examinations in one or more service centers that may require special weighting of sample results. Determined whether only tax returns with MFT 30 were included in the sample population.
II. Evaluated the internal controls over the data and processes used.
   A. Interviewed the AIMS Coordinator to determine whether inventory validations and operational reviews are being conducted as prescribed by the Internal Revenue Manual.
      1. Determined whether Examination groups are using AIMS or other local inventory reports to validate the accuracy of AIMS data and determined whether all tax returns are accounted for.
      2. If the inventory validations are not being conducted, determined whether other compensating controls are present to ensure that service center examinations are not being omitted on the AIMS.
      3. Determined whether Service Center Examination Units are conducting quality reviews using AIMS reports, such as the Status Workload Report.
   B. Determined if the inventory controls are sufficient to ensure the accuracy of the AIMS database.
      1. Determined if inventory validations are completed in the FSC and the MSC. Evaluated the:
         a. Frequency of the validations.
         b. Scope of the validations.
         c. Reporting of the results.
         d. Corrective actions taken based on the results.
      2. Determined the extent of the operational or other reviews conducted by the AIMS Coordinators in the service centers. Evaluated the:
         a. Frequency of the validations.
         b. Scope of the validations.
         c. Reporting of the results.
         d. Corrective actions taken based on the results.
      3. Evaluated the controls over any discretionary projects in the FSC and MSC and how management ensures that the development of the projects does not include taxpayer contacts on uncontrolled examinations.
      4. Evaluated the process that the Closing Unit uses to ensure that it receives all of the tax returns from the Correspondence Examination groups.
   C. Used interval sampling to identify 100 employees using the AIMS system (51 of 309 employees in the FSC and 49 of 194 in the MSC) to determine if their Information System User Registration/Change Requests (Form 5081) had been properly approved and if the employees had only those command codes needed to conduct their jobs. As some employees had multiple forms on file, we reviewed a sample of 353 forms.
III. Determined whether the Internal Revenue Service (IRS) National Headquarters had plans to ensure that survey results will be applicable under the new organizational structure (business units).
   A. Interviewed Office of Program Evaluation and Risk Analysis personnel and determined if the vendor can readily produce survey results along the new business units. Determined if there has been any consideration to restructuring along those business lines.
   B. Determined if the procedures for conducting the survey using the AIMS as the source for case selection will allow identification by business unit.
IV. Assessed the population covered by the survey and the survey results.
   A. Reviewed the Internal Revenue Service Customer Satisfaction Survey National Report - Service Center Examination, dated March 2000, and determined the survey response rate.
   B. Determined whether the vendor has an adequate follow-up procedure for a low response rate to Service Center Correspondence Examination surveys.
   C. Determined if any Service Center Correspondence Examination tax returns are not covered by the Service Center Correspondence Examination Customer Satisfaction Survey.

Appendix II

Major Contributors to This Report

Maurice S. Moody, Associate Inspector General for Audit (Headquarters Operations and Exempt Organizations Programs)

John Wright, Director

Kevin Riley, Audit Manager

David Cox, Senior Auditor

Jim Popelarski, Senior Auditor

Gene A. Luevano, Auditor

Bill Thompson, Auditor

Appendix III

Report Distribution List

Deputy Commissioner N:DC

Commissioner, Wage & Investment Division W

Commissioner, Small Business/Self Employed Division S

Assistant Deputy Commissioner N:ADC

Chief Financial Officer N:CFO

Deputy Chief Financial Officer, Strategic Planning and Budgeting N:CFO:SPB

Deputy Commissioner W

Director, Compliance W:CP

Director, Legislative Affairs CL:LA

Director, Office of Program Evaluation and Risk Analysis (OPERA) N:ADC:R:O

Director, Organizational Performance Division N:ADC:T:OP

Director, Strategy and Finance W:S

Office of Management Controls N:CFO:F:M

Chief Counsel CC

National Taxpayer Advocate TA

Audit Liaisons:

Deputy Chief Financial Officer, Strategic Planning and Budgeting N:CFO:SPB

Director, Compliance W:CP

Director, OPERA N:ADC:R:O

Director, Strategy and Finance W:S

Appendix IV

Organization Codes

Within the service centers, certain units are responsible for auditing different types of returns and/or performing different functions. Correspondence Examination Units perform audits of individual taxpayers and primarily concentrate on tax returns involving Earned Income Tax Credit issues. Classification Units review tax returns to identify audit issues and route selected returns to other Examination Units for audit. Additionally, some Classification Units work selected tax returns including Estate and Gift returns, certain business returns, and amended returns.

In the Fresno (FSC) and Austin (AUSC) Service Centers, tax returns worked by the Classification Units are being inappropriately included on the tape. The group classifying tax returns in the AUSC closed 3,389 returns between October 1, 1999, and August 2, 2000, using organization codes 5000 through 5099. Estate & Gift tax returns, in addition to Individual Income tax returns, were included in the data sent to the vendor from the AUSC and the Cincinnati Service Center (CSC). The AUSC closed 20 Estate and Gift tax returns between October 1, 1999, and August 2, 2000, using organization codes 5395 and 5399; the CSC closed 265 Estate and Gift tax returns between October 1, 1999, and May 19, 2000, using organization code 5116.

In both instances, the wrong codes caused the tax returns to be improperly included in the survey population. The improper inclusion or exclusion of tax returns can affect the survey results.

Appendix V

Disposal and Technique Codes

Disposal and Technique Codes are used to identify whether a taxpayer responded or did not respond to Internal Revenue Service (IRS) correspondence. Technique codes are also used to ensure that taxpayers who responded to a Statutory Notice are not included with those who did not respond. Disposal codes are also used to ensure that taxpayers who had undeliverable correspondence from the IRS are not included in the survey.

Disposal Code 10 - DEFAULT - Applies only to returns when the taxpayer fails to reply after the issuance of a 90-day letter.

Disposal Code 13 - Undeliverable 90-day Letter - Applies to returns closed after the issuance of a 90-day letter, if the 90-day letter is returned as undeliverable.

Technique Code 2 - Should be used on all cases with a response from the taxpayer. It is also used when there is a response from the taxpayer and the case is still being closed as "default" using Disposal Code 10.

Technique Code 7 - Valid on Disposal Codes 10 and 13 when the taxpayer did not respond to any correspondence. Technique Code 7 should be used on all "No Reply" cases that default using Disposal Code 10. All "Undeliverable" cases should be closed with a Technique Code 7 and Disposal Code 13.

Use of improper technique and disposal codes will result in some returns being improperly included in the survey and others being improperly excluded. Improper coding will also cause returns to be included in the wrong category (e.g., those who responded versus those who did not respond). Both outcomes could affect the survey results.
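
The coding rules above can be summarized in the following minimal sketch (Python; illustrative only, it does not reproduce the vendor's selection or categorization program and it ignores codes other than those defined above).

    def survey_category(disposal_code, technique_code):
        """Classify a closed return under the coding rules described above.
        Illustrative only; codes other than 10 and 13 are not handled here."""
        if disposal_code == 13:
            return "exclude: undeliverable 90-day letter"
        if disposal_code == 10 and technique_code == 7:
            return "no reply to any correspondence"
        if disposal_code == 10 and technique_code == 2:
            return "responded, but defaulted on the Statutory Notice"
        return "other closing codes: handled by separate selection rules"

    print(survey_category(13, 7))   # excluded from the survey
    print(survey_category(10, 7))   # counted with taxpayers who did not respond
    print(survey_category(10, 2))   # counted with taxpayers who responded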

Appendix VI

Inventory Compensating Controls

There are three key controls over the Audit Information Management System (AIMS) database: annual 100% Inventory Validation Listings (IVL), compensating AIMS report validations, and operational reviews. Two controls, compensating validation controls and operational reviews, are optional controls that might offset any negative effect from not conducting the annual 100% IVL. In the case of the Fresno Service Center (FSC), 3 of 11 units conducted annual 100% IVLs, and we comment only on the remaining 8 units. In the case of the Memphis Service Center (MSC), none of the 8 units provided documentation to show any attempts to conduct the annual 100% IVLs.

Examples of AIMS compensating controls include monthly validations, Status Workload Reviews, and Operational Reviews.

Operational Reviews are semi-annual reviews of a unit’s adherence to procedures.

The FSC and MSC units provided the following documents on compensating controls for the period May 1999 through April 2000:

Type of Control           Total number that could      Complete validations     Partial validations
                          be generated per             or reviews provided      or reviews provided
                          Examination Unit/year
                           FSC        MSC                FSC        MSC           FSC        MSC

Monthly Validation          96         96                  4          0             7         10
Status Workload Review     192        192                 19          0             5          0
Operational Reviews         16         16                  0          1             0          3

Appendix VII

Management’s Response to the Draft Report

The response was removed due to its size. To see the complete response, please go to the Adobe PDF version of the report on the TIGTA Public Web Page.