Quality Assurance Measurement Activities

Internal Quality Assurance programs for Denver Health Managed Care Program, Denver Health.

Employee Health Insurance (historical 2 years reviewed + 3 years reported annually):

  1. Ambulatory Care of High Risk Asthma Patients.  6-8 measures per patient, gender, 3 age ranges.
  2. Diabetes Care and Prevention.  ~40-60 measures or calculations per patient, gender, age range.
  3. Cervical Cancer Screening and Follow-up.   4 measures per age group.
  4. Breast Cancer Screening and Follow-up.  4-5 measures, age only.
  5. Identification of Pre/Post-partum Patients at risk for Diabetes. 7 measures per group, by gender, by age ranges.
  6. Chlamydia screening (16 to 26 yo). 4 measures, gender, age-ranges.
  7. Smoking Cessation.  3 measures each for gender, 3 age ranges.
  8. Management of CHF. 2 measures, gender only.
  9. Colorectal Screening.  4 measures, gender only.
  10. Hypertension Screening.  2-4 measures, gender, age range.
  11. Psychiatric emergency hospitalizations.  4 measures, gender, two age ranges.

Medicaid Choice:

  1. Patient Compliance with CDC Child Immunization and Well Visit Requirements.  12 measures.  Reported to State.
  2. Diabetes Care Management.  ~40-60 measures per patient.  Reported to State.
  3. Ambulatory Care of High Risk Asthma Patients.  6-8 measures per patient.
  4. Review of Visits for Pre/Post-partum Patients with Pregnancy-induced Diabetes Risk.
  5. Utilization of Managed Care Pharmacy Services (review of monthly prescription cost per member, per script and per member, per day average, as part of internal/Medicaid program cost-lowering practices.)  Reported to State.
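The per-member cost review described in item 5 reduces to a few averages over claims data. The sketch below uses hypothetical records and illustrative field names (not actual NCPDP fields); "per member, per day" is interpreted here as cost per day of supply, which is an assumption.

```python
# Hypothetical claims: (member_id, month, cost, days_supply)
claims = [
    ("M1", "2006-01", 45.00, 30),
    ("M1", "2006-01", 12.50, 10),
    ("M2", "2006-01", 80.00, 30),
]

def pharmacy_metrics(claims):
    """Monthly prescription cost per member, per script, and per member-day."""
    members = {m for m, _, _, _ in claims}
    months = {mo for _, mo, _, _ in claims}
    total_cost = sum(c for _, _, c, _ in claims)
    total_days = sum(d for _, _, _, d in claims)  # days of supply dispensed
    return {
        "cost_per_member_per_month": total_cost / (len(members) * len(months)),
        "cost_per_script": total_cost / len(claims),
        "cost_per_member_per_day": total_cost / total_days,
    }
```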

Medicare/Medicare Choice:

  1. Diabetes Care Management, 10-15 measures, by gender.
  2. Glaucoma Screening (preliminary), 2 measures, by gender.

Annual State Medicaid Program-required reviews (NCQA/State Medicaid Program, State of Colorado Studies):

  1. Management of High Risk Pregnancies (18-55 yo Medicaid members)
  2. Diabetes Care and Prevention (all ages)
  3. “Childhood Immunizations and Well-Visits” Medicaid Performance Improvement Project (0-2 yo).  Related completion of well-visit requirements to completion of immunizations by 2 yo.  Identified the single well visit most strongly linked to the probability of completing the immunization series, and identified high-risk groups.  This Medicaid Choice PIP received perfect scores upon review in 2006.
  4. Engagement in Appropriate Preventive Care Program Activities by Prenatal/Postpartum Women (16 – 45 yo)

Other Activities

  1. Annual HEDIS activities.
  2. Annual hypertension review for a national HTN program newsletter published by a related nonprofit organization.
  3. Quarterly Population Reviews for the Denver Health Managed Care Population.  Data used to design a standard 35-page report, with age ranges reviewed for all internal, Medicaid, Medicare, and CHP+ program-related studies.  Manual data gathering and entry were required; pyramid/chart/graph production was automated for all studies requiring this information.  Perot Systems storage/datasets.  [Self-created using the Perot Systems programming language.]
  4. Monitored and periodically updated database devoted to Policy Updates and Changes (monthly and quarterly activities).
  5. Updated and produced reports for quarterly statistics kept on inpatient and outpatient consumer complaints, billing complaints, etc.  Automated this statistical evaluation and reporting process.  [Self-invented statistical program.]
  6. University of Colorado HSC-sponsored Colorectal Screening Intervention Program, an NIH grant-sponsored 1.25-year pre/post-intervention study.
  7. Design of an automated data analysis system for Monitoring the Care Management Program and Team Operations.  Periodic Internal QA assessments. [self-created]
  8. National NCPDP Pharmacy Database: Quarterly Evaluations of Pharmacy Services provided to Medicaid Choice Members.  Identified high-cost members who received scripts from external providers at exceptionally high cost (10× or more the internal pharmacy cost).  [self-invented]
  9. National NCPDP Pharmacy Database: datasets developed and used for annual evaluations of Employee Health and Medicaid Members with Diabetes [self-invented]
  10. National NCPDP Pharmacy Database: datasets developed and used for evaluations of Employee Health and Medicaid Members with Asthma [self-invented]
  11. An Evaluation of the use of a local CDPHE-operated Smoking Cessation Hotline program used by Denver Health Managed Care members, 2 years. [self-invented]
  12. Design of a monthly reporting system for 302 Population Health Measures.  Utilized intranet-stored standardized datasets automatically imported into an Access application. [self-invented]
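The high-cost-member screen in item 8 above amounts to a simple ratio filter over per-member spend. A minimal sketch, with made-up figures and hypothetical names:

```python
def flag_high_cost_members(external_cost, internal_cost, ratio=10.0):
    """Return member IDs whose external-pharmacy spend is at least
    `ratio` times what the internal pharmacy would have charged."""
    flagged = []
    for member, ext in external_cost.items():
        internal = internal_cost.get(member)
        if internal and ext >= ratio * internal:
            flagged.append(member)
    return flagged

# Hypothetical per-member totals for one quarter
external = {"M1": 1200.0, "M2": 90.0, "M3": 500.0}
internal = {"M1": 100.0, "M2": 85.0, "M3": 60.0}
```

Lowering `ratio` widens the net; the 10× threshold matches the "10x+" criterion described above.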


Quality Assurance Plus

The typical NCQA/Medicaid-Medicare Performance Improvement Projects (PIPs) or Quality Improvement Activities (QIAs) consist of a combination of qualitative and quantitative research methods.  Most of the work devoted to these quality assurance (QA) activities focuses on the quantitative method, in which two successive years are tested for specific outcomes in order to determine what changes have taken place over the past year and to document changes attributable to intervention activities begun as part of the QA activity or program.

The majority of PIP/QIA activities are quantitative and employ either a non-parametric or a parametric method of evaluating outcomes.  The non-parametric method typically measures failure or success, comparing the previous year's outcomes with those of the current year about to be reported to QA program overseers, various levels of management, accrediting-agency review staff, and related contracting agencies (the independent third-party Medicaid/Medicare outcomes reviewer and any approved NCQA- or HEDIS-reporting agencies).
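The year-over-year success/failure comparison can be sketched as a two-proportion test. The counts below are hypothetical, and the normal-approximation z-test here stands in for the chi-square test more commonly applied to such 2×2 tables (the two are equivalent for this case).

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test comparing pass rates
    between a baseline year (a) and the current year (b)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g., 120/200 members compliant last year vs. 150/200 this year
z, p = two_proportion_z(120, 200, 150, 200)
```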

The standard report for these activities uses fairly consistent reporting techniques across most of the agencies overseeing the QA process from outside.  The quantitative portions of these reports are largely self-explanatory and need little description.  The qualitative measurement sections, by contrast, are mentioned in passing in most of these reports but rarely elaborated upon; when such data are included on the final PIP or QIA report, their value as qualitatively analyzed material is seldom taken advantage of.

The section of the standard PIP/QIA reporting form that has qualitative value is the barriers section.  Its purpose is to identify what barriers stand in the way of the changes an institution or program is trying to make.  Examples of such barriers include limited office hours, which reduce the availability of a desired intervention activity (e.g., a blood test or counseling service) or of a given screening program.  When such an event is identified as a barrier, the goal is to design an intervention in which that barrier is either reduced in impact or completely eliminated.  The ways in which the barrier is dealt with are then noted in text or table form, but quantification of this qualitative problem is rarely, if ever, attempted.

Currently, the standard method for scoring a QA program is to define the primary goal or purpose of the activity, including the population to be reviewed; provide a rationale for engaging in the project; detail the measures that will be used to score the population; describe the research methodology and how data will be gathered and evaluated; provide evidence for the analytic methods being used; note the reliability of the methods and the internal and external applicability of the outcomes; and then take the measurements, enter them into a database, analyze them, and report the outcomes on a final form, including statistical results, graphs, and so on.  This bulk of the work is followed by a written summary and review of the outcomes, a tabulation of suspected or identified barriers, and proposed future intervention activities designed to address one or more of those barriers.

The first nine-tenths of this description covers, for the most part, the primary activities one engages in when documenting QA program activities and outcomes.  The final review of potential barriers is dealt with only indirectly: barriers are often used to define the next intervention activity, leading the program manager to determine what measures will indicate whether that future activity has any successful results.

One major part of the standard QIA/PIP that is perhaps not focused on as intensely as it should be is the barrier-related issue.  This, too, requires that some quantification method be used to document this part of the QA activity.  To date, QA program success is determined mostly by a comparison of past versus present outcomes, with the recent outcomes presumably documenting the impact of an intervention carried out during or just before the study year.  The study is then repeated in order to demonstrate either continuance of the change being measured or, simply, that some success has taken place, with greater certainty as evidenced by the ongoing change.

Some QA activities are not designed to measure a change in performance at all.  The best example is the Smoking Cessation program.  Numerous agencies engage in this popular population health measure because smoking is a ubiquitous, measurable, and preventable health care issue.  In some institutional settings these programs occur as yearly events regardless of increases or decreases in smoking rates, in which case the study serves more as one of many measures made to document overall population health.  The other approach is to assume that the patient is unlikely to change significantly due to anything added to a program, so the QA activity should instead measure clinical activities and performance: Is the patient's smoking history reported at most or all non-specialized visits?  Is the amount of smoking documented?  Are the intervention activities that were engaged in documented?  And so on.
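Clinical-documentation measures of this kind reduce to simple rates over chart-review data. The visit records and field names below are hypothetical, standing in for whatever the chart abstraction actually captures.

```python
# Hypothetical chart-review abstraction: one record per visit
visits = [
    {"history_documented": True,  "amount_documented": True,  "intervention_documented": False},
    {"history_documented": True,  "amount_documented": False, "intervention_documented": False},
    {"history_documented": False, "amount_documented": False, "intervention_documented": False},
]

def documentation_rates(visits):
    """Fraction of visits with each documentation element present."""
    n = len(visits)
    fields = ("history_documented", "amount_documented", "intervention_documented")
    return {f: sum(v[f] for v in visits) / n for f in fields}
```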

The way to document and quantify barrier-related information is to select one or a few NCQA/HEDIS-related measures and then add a step to the barriers-analysis method in which specific barriers are defined at the clinical level and the extent to which each barrier impacts the physician's clinical activities and performance is documented.  The methodology requires a clinician-targeted survey to document all barriers and quantify their magnitudes relative to one another.  These barriers may then be individually analyzed and classified as system-related; institutional in nature; policy-controlled or -restricted; occurring at the clinical/clinician level; or occurring at the patient level and requiring patient-targeted intervention.  The goal of this approach is to identify the most changeable activities at the provider level, and then at the patient level, and to select for elimination the barriers that impact a high percentage of cases and QA outcomes.
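The survey-tally step can be sketched as a frequency count of reported barriers, each tagged with its level, ranked by the share of cases it affects. The barrier names, levels, and responses below are made up for illustration.

```python
from collections import Counter

# Each clinician survey response names one barrier and its level;
# levels follow the classification above (system, institutional,
# policy, clinician, patient).
responses = [
    ("limited office hours", "system"),
    ("limited office hours", "system"),
    ("no standing-order policy", "policy"),
    ("patient no-show", "patient"),
    ("limited office hours", "system"),
]

def rank_barriers(responses):
    """Rank barriers by the fraction of surveyed cases they impact."""
    counts = Counter(barrier for barrier, _ in responses)
    levels = dict(responses)  # barrier -> level (one level per barrier assumed)
    total = len(responses)
    return [(b, levels[b], n / total) for b, n in counts.most_common()]
```

The top-ranked, most changeable barriers then become the intervention targets.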

Such an approach to implementing intervention programs is typically not discussed in most sessions and training activities on designing, planning, and implementing these types of programs.  The method requires an additional measurement process in which the barriers themselves are documented and quantified, and in which certain types of intervention needs and activities are likewise planned, carried out, and quantified as part of the measure of success for the QA program.  It requires a qualitative text-analysis technique added to the traditional quantitative measurement methods already in place.  If done voluntarily, this additional step strengthens all future QA reports generated for review by reaccreditation agencies.