“. . . the solutions to our problems lie outside the box.”

Aviation Week & Space Technology, July 1975

NOTE: This study was supported and supervised by the former Perot Systems (now Dell Perot Systems), and by internal IPA, HSR, and IT/PHI groups, from 2004 to 2005. All reports generated since then follow previously agreed-upon institutional and federal program IP and PHI rights and regulations. Datasets have been slightly modified without altering statistical outcomes. Data sources have been renamed and/or given a theoretical identifier describing their content. Only age and gender identifiers are presented in the original format.

Please note that the algorithms used here were developed at Portland State University in 2000, as part of the 2000 Census projects, with input and feedback from the then Assistant Director of the demographic research center on campus, who has since moved on to a new position. (He is not named here due to IP-sharing policy conflicts; he was the only one to successfully predict nearly all of the demographic changes by county, and especially by voting precinct.)

Population Health visitors should see: https://brianaltonenmph.com/gis/population-health-profiles/part-iii-population-health-application/special-topics/

Part I.  Introduction, Theory, and Background

The following is an old formula with new life. It originated as a formula I developed for analyzing flood plains and transects of raster imagery and DEMs. It was designed to identify such things as changes in a land surface over time, or changes in a land surface over a specific distance, based on regular transect analyses. The goal was to develop a way to analyze surfaces for risk areas based on flood behaviors. The problem with analyzing flood activities during the mid-1990s was that you couldn't use the standard data available for elevation, since elevation was a constantly changing value over space that was always related to the exact same surface: sea level. So if you were 1,000 miles up the Mississippi River, the elevation of the flood plain surface was provided to you relative to sea level in the Gulf. As you went down the Mississippi River, this elevation value always decreased, approaching Gulf sea-level values. To understand floods, one has to relate river surface levels and potential flood surface levels to the immediate environment along the edge of the river, not to the Gulf of Mexico water surface.

To correct for this problem I developed a way to rasterize local water-edge elevation numbers and assign them to a linear raster depiction of the center of the river. This raster line was then tilted to produce a slope of zero from its beginning to end. This effectively made the river seem perfectly flat, and it is against that flat surface that all neighboring land surface data are then modified, in order to relate them to the elevation of the closest water level, corrected to a value of zero. Now the land surface rasters could be evaluated relative to the closest water-body elevation, and a height of, say, 15 feet above the closest river surface could be mapped.
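This correction step can be illustrated with a minimal sketch in Python. It is not the original GIS implementation, just a simplified stand-in that assumes each land cell has already been matched to its closest river-centerline cell; the array names (`land_elev`, `nearest_river_elev`) and values are hypothetical.

```python
import numpy as np

# Hypothetical inputs: elevations (feet above sea level) for a handful of land
# cells, and the sea-level elevation of the closest river-surface cell to each.
land_elev = np.array([462.0, 458.5, 471.2, 455.0, 480.3])
nearest_river_elev = np.array([451.0, 450.5, 452.0, 449.8, 452.5])

# Re-reference every land cell to the nearest river surface instead of sea level.
# The river itself becomes a flat zero surface; land values become
# "height above closest river surface."
height_above_river = land_elev - nearest_river_elev

# A 15-foot flood-risk surface can now be mapped directly.
flood_risk = height_above_river <= 15.0
print(height_above_river)
print(flood_risk)
```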

It turns out this ever-changing river surface elevation, once corrected, has features that mimic other surface-feature transect properties. Whereas rivers constantly move downward, land surfaces undulate up and down. Applying the same mathematics to a lateral versus a longitudinal river transect provides a different interpretation of the same section or raster point in that river GIS raster dataset. One can compare one undulating surface to another, using the same formula used to detect and correct for changes, to measure how much change is taking place each time a change happens. This led to the methodology detailed here on how to analyze transects, profiles, and other changing, irregular line depictions, in order to define where the greatest changes happen. What I added to this methodology was a way of identifying where statistically significant differences exist in the numbers recording that change. These statistical significance measures of the same values tell us where a change has occurred that has to be reviewed, such as when a significant land surface shift occurs due to an earthquake.
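The general idea of locating where change along a transect is large enough to deserve review can be sketched as follows. This is a generic illustration using a simple z-score style rule, not the proprietary significance formula described in this post; the before/after profiles are invented.

```python
import numpy as np

def flag_large_changes(transect_a, transect_b, k=2.0):
    """Return per-cell deltas and a flag for cells whose change is unusually
    large relative to the spread of all changes (a simple z-score style rule,
    standing in for the proprietary test described in the text)."""
    a = np.asarray(transect_a, dtype=float)
    b = np.asarray(transect_b, dtype=float)
    delta = b - a
    z = (delta - delta.mean()) / delta.std(ddof=1)
    return delta, np.abs(z) > k

# Hypothetical before/after elevation profiles along the same transect.
before = np.array([10.0, 12.0, 15.0, 14.0, 13.0, 30.0, 29.0, 11.0])
after  = np.array([10.2, 11.8, 15.1, 14.0, 13.2, 22.0, 28.5, 11.1])

delta, flags = flag_large_changes(before, after)
print(delta)   # signed change at each cell
print(flags)   # True only where the change stands out (e.g., a slumped peak)
```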

Now, there are already better ways of evaluating the earthquake land-surface change information, provided by a number of companies. This method was applied instead to evaluating curve relationships for different groups of data, such as facial surface transect patterns or, in particular, population pyramid differences between two different groups of people. This latter application makes the best use of this algorithm, and it is an easy way to compare population profiles at the smallest levels. For populations, age-gender relationships can be compared to each other. For business analysts, cost versus age-gender can be compared, for example in measuring participation in sport or recreational activities, income levels in relation to gas and fuel expenditure as an age-gender feature, or market activities and shopping expenses paid in relation to age-gender behaviors. In medicine, this method shows that cost is related directly to age, in that there is an exponential increase in costs incurred as one gets older. By relating this obvious feature back to the age-gender curve, we can use an age-gender curve to predict where costs will be highest, and whether they are higher than some standard cost curve already generated from past experiences, or from a control group. Whereas most methods generate a measure of difference as one figure, this method generates these cost-age or measure 1-measure 2 differences as multiple results varying over time, age and gender. This way you can see where the greatest change is occurring and causing these major differences to exist between two populations.

The final math for this analysis will not be provided for now. Only the steps are covered.

Steps to Making this Discovery

This particular project requires some documentation for IP purposes. This was not a discovery that came overnight. It was not a simple "Voila!" and it was there. To understand the value and theory of mapping, you have to run through numerous tests and formulas. One out of ten or fewer will be of some good. Many of the rest will be interesting, but will not really add enough to the methodology to be considered either a failure or a success. But to develop the more complex formulas you first have to experience making the first versions of whatever it is you are doing. That is how I came upon the way to analyze demographic data using a spatial analysis formula I developed for areal comparisons. The good thing about statistics is that old formulas can often be applied to new things, in new subjects, to answer research questions unrelated to the original use.

Step 1.  1997.  Developed an equation to produce a mock or artificial land surface along which a river flows, mimicking the flow patterns of most known rivers. Standard non- or minimal-ox-bow-producing meandering rivers had flow patterns with 87-93% of their flow defined by the linear portion of the model, using a quadratic equation. Surface planarity defines the beginning and end of any and all flow patterns. The remaining behaviors are defined using a cubic equation; this accounts for the curving and deviations from the expected course seen in meandering rivers. As part of this project, linear longitudinal and lateral transects of the mid-river were produced, and this form of profiling a river bed was reviewed.

Step 2.  1998.  Applied this to work for my thesis on cholera and the Mississippi River, creating a transect of the entire Mississippi River from Hudson Bay to the Gulf of Mexico. This transect was evaluated based upon sea level. The riverbed-normalized longitudinal transects of the Mississippi for two different time frames could be compared for documentation of topographic changes at the delta end of the river (where Vibrio cholerae grows) and certain portions of its mid-states region (regions with low elevation above the closest river surface level).

Step 3.  1998/9.  Duplicated the 1997 work. Applied this modeling of river beds to a smaller creek, and modified the formula so as to correct for surface planarity and look at local land elevation above the closest river surface elevation instead of actual sea level. Applied a correction formula designed to cancel out elevation changes over space relative to sea level, reassigning elevation values to the river surface instead. Used this to map out disease patterns when elevation above sea level becomes the primary indicator of risk. Identified where significant changes occurred based on the new formulas.

Step 4.  2000.  Applied the new transect formula used to define statistically significant regions to line drawings instead of river-edge and river-bottom transects. Applied it to comparing two faces for statistically significant differences in common identifying features like nose size and shape, chin protuberance, eyebrow ridge, etc. Applied this to a local population grid mapping project for the 2000 census with the on-campus Pacific NW Population Research Center.

Step 5.  2004.  Applied this approach to population pyramids, comparing male to female age distributions, followed by one population versus another; developed a way of determining whether there are any statistically significant differences between two completely different population sizes. Tested and applied this to populations of 2,500, 27,000 and 60,000, versus a base population of 250,000 to 450,000 depending upon the year and month of each study (the baseline population kept growing). Mathematical equations were developed for testing/proving statistically significant differences between two pyramid forms, 2004 to 2006.

Step 6.  2005.  Developed three very different formulas for comparing two populations: the first method applies to very low total-N populations, the second to any population size but is not necessarily reliable, and the third (the most reliable) to any two populations of any two sizes. (A generic, illustrative sketch of this kind of bin-by-bin pyramid comparison appears after this list of steps.)

Step 7.  2005.  Afterwards, applied this same approach to costs, comparing total cost between genders, and then across the total population. Applied the statistical-significance technique to define where (at what age) costs become significant due to test-population age-gender distributions, meaning the cost is due to large numbers of that subset of the total population (more children than expected requiring more well-visit care, more patients than the norm over 65, more women in their childbearing years than normally expected, more teens than expected in the alcohol/drug-testing years, etc.). Developed a technique for comparing costs ($10M+ range) to patient age-gender groups (50,000+ range), using a normalization formula developed for comparing two very different curves.
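None of the proprietary formulas developed in Steps 5 through 7 are reproduced here. For orientation only, the sketch below shows what a bin-by-bin age-band comparison of two populations looks like when a standard two-proportion z-test is used as a stand-in; the counts, totals and age bands are hypothetical.

```python
import math

def two_proportion_z(count1, n1, count2, n2):
    """Standard pooled two-proportion z statistic and two-sided p-value."""
    p1, p2 = count1 / n1, count2 / n2
    pooled = (count1 + count2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p_value

# Hypothetical counts per age band for a study group vs. a baseline group.
study_total, baseline_total = 27_000, 350_000
age_bands = ["0-17", "18-44", "45-64", "65+"]
study_counts    = [4_200, 11_800, 7_500, 3_500]
baseline_counts = [70_000, 140_000, 98_000, 42_000]

for band, s, b in zip(age_bands, study_counts, baseline_counts):
    z, p = two_proportion_z(s, study_total, b, baseline_total)
    flag = "SIGNIFICANT" if p < 0.05 else "not significant"
    print(f"{band:>6}: study {s/study_total:5.1%} vs baseline {b/baseline_total:5.1%}  ({flag})")
```

With a baseline of several hundred thousand members, even proportion gaps of a percentage point or so separate cleanly from chance in this stand-in test, while nearly identical bands (such as the 45-64 band above) do not.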

.

Early Examples of Use

In the first example of surface transect analysis, there are three transects taken of a river bed. The section reviewed is about a mile in length, with three transects taken a mile apart from each other. Imagine for a moment (not really, but imagine) that this is a place near the deep river valley of the Snake River, with smaller stream beds flowing parallel and meandering-parallel to the main stream edge due to well-cut terrain features. The questions one might ask for this type of analysis would be: how do the peaks vary over the length of this study area? How are the tributaries interacting with the adjacent stream edge? How does the depth of the river impact the immediately adjacent land surface?

There are slight differences that occur from one end of this region to the other. Some sections have peaks that appear to get taller, or result in a deeper river bed, while others have much shorter peaks and cliff edges. In some transects of the river and adjacent beds we see a well-defined flood plain developed due to particular local features, and some sections have no flood plain at all. Even small changes across a fairly flat plain far away from the river edge can be telling, such as a reduction in a ridge left by an old parallel braided stream or narrow ox-bow formation, versus the well-carved, unchanging bed of a channel with only its river-bottom topography changing over space. Each of these slight surface undulations can be magnified in remote sensing software (a magnified z), but we still lack a way to statistically define each square area or cell of these points (grid cell centroids) relative to its neighbor, except by using some of the standard formulas found in software such as that developed by Clark University. Those formulas are designed to measure changes along a z-axis by visualizing and comparing two perfectly overlain X-by-Y projections of the same space; they are not necessarily designed to evaluate different transects across the same stretch of terrain, adjusted for comparison, along the same modulating x-y defined transects.

If we view the above transects as modifications in the same surface over time, the following results can be obtained.

This time the transects used define surface change over time. With the above interpretation of the same lines we expect to see signs of erosion and aging, such as changes in the higher-elevation regions, talus and alluvium slope and topographic change, and some filling in or topographic change for flat areas which originally had small peaks that eroded away, as well as the development of new depressions or holes in the terrain. We also see signs of possible erosion along fast-flowing streams, or perhaps ox-bow and braided rivulet formations occurring adjacent to major stream or river beds, assuming the hydrological features, substratum and topography are set up for this.

The same terrain profiles or transects noted above for the spatially distinct and temporally distinct cases might also be related to other undulating surfaces with unique spatial relationships or three-dimensional spatial features, such as faces:

The profiles of faces are much like transects taken of the land surface topography around rivers and streams (in fact they were used for the earlier examples).  This means that a formula used to compare transects of a river bed or topographic region can also be applied to studying facial silhouettes.

Once the two are normalized in terms of size and the placement of one key measurement index point or indicator, similar features for each of these transects can be compared with each other in terms of two axes: the first representing depth (x-axis) and the second representing distance between two nearby features (y-axis), or vice versa depending on how you term things. These values can be compared between objects to determine which two are most similar; if not, where they differ in some statistically significant fashion; and, if you have a data library of these particular features, which one is most likely to have come from the individual you would like to match a profile to.

When comparisons are made of two objects, a normalization process has to take place to allow these projections to be compared. There are a number of ways to do this normalization and to see how this contrast-and-compare process will work. The transects themselves may be laid side by side for visual comparison, with the aid of the computer moving one surface and testing its fit until it is perfectly positioned. This is a geometric way of comparing the two forms, identical to the vector methods commonly employed for many such analyses.
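A bare-bones version of this "move it and test the fit" idea is a brute-force search over offsets, scoring each candidate position by the mismatch across the overlapping cells. The sketch below uses a simple mean-squared-difference score; it is only an illustrative stand-in for the geometric/vector fitting described here, and the profiles are invented.

```python
import numpy as np

def best_offset(profile_a, profile_b, max_shift=5):
    """Slide profile_b over profile_a and return the integer offset giving the
    smallest mean squared difference over the overlapping cells."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    best_shift, best_mse = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Compare a[i] with b[i + shift] over the cells where both exist.
        i_start = max(0, -shift)
        i_stop = min(len(a), len(b) - shift)
        if i_stop - i_start < 3:          # require a minimum overlap
            continue
        seg_a = a[i_start:i_stop]
        seg_b = b[i_start + shift:i_stop + shift]
        mse = np.mean((seg_a - seg_b) ** 2)
        if mse < best_mse:
            best_shift, best_mse = shift, mse
    return best_shift, best_mse

# Hypothetical profiles: the second is the first recorded 3 cells later.
base = np.array([0, 1, 3, 7, 9, 7, 4, 2, 1, 0, 0, 0, 0], dtype=float)
shifted = np.roll(base, 3)
print(best_offset(base, shifted))  # expect an offset of 3 with zero error
```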

Another way to compare and contrast is through a grid analysis, focusing on the edges of the two objects being compared. In spatial GIS raster systems, there are algorithms in place for exaggerating the differences that exist when two edges are nearly the same. In a technique similar to photometric methodologies, a grid can be overlain on the above profiles or transects to do similar grid comparisons. The key limitation to this methodology pertains to grid cell size: the smaller the cells, the more accurate your measurement method is. But these smaller cells also require more storage space and more time for the calculations to be completed.

In the above example again, to compare the different surfaces when the photo or image sizes do not match, these forms have to be reprojected and normalized, that is, made comparable in size to each other. Then the statistical analysis technique is applied to see where statistically significant portions of the paired datasets exist. In the case of profiling, the input profiles are compared with a library and the best fit is found. In the case of comparing differences, where statistical significance exists using the numbers method developed for this analysis, the resulting output demonstrates where these differences exist and to what extent, relative to each other. This is done by scanning the surface and running the equations used to compare two surfaces. The numbers then tell us where the best fit exists.
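One simple way to carry out the reprojection and normalization step is to resample both profiles to a common number of points and rescale them to a shared range before comparing them. The sketch below does this with linear interpolation and min-max scaling; it is a generic illustration, not the specific normalization used in this work, and the profile values are invented.

```python
import numpy as np

def normalize_profile(profile, n_points=100):
    """Resample a profile to n_points and rescale its values to the 0-1 range
    so profiles of different lengths and magnitudes can be compared."""
    y = np.asarray(profile, dtype=float)
    x_old = np.linspace(0.0, 1.0, len(y))
    x_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.interp(x_new, x_old, y)
    return (resampled - resampled.min()) / (resampled.max() - resampled.min())

# Two hypothetical facial/terrain profiles recorded at different resolutions.
profile_a = [3.0, 3.4, 5.1, 6.0, 5.2, 4.0, 3.8, 3.1]
profile_b = [6.2, 6.9, 10.0, 12.3, 10.5, 8.1, 7.5, 6.4, 6.3, 6.1]

a, b = normalize_profile(profile_a), normalize_profile(profile_b)
diff = np.abs(a - b)
print("largest local difference at position", diff.argmax(), "of 100, size", round(diff.max(), 3))
```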

The methodology I developed for population reviews is used for the latter task, and adds a variety of statistical tools in order to quantify when a change or difference is statistically significant. We can also term this indicator value a sensitivity index. One could begin a query by stating that only more than a 3% change should be considered statistically significant, thereby allowing for 3% error in the analytic method. Or true statistical significance values can be used, assuming the right equations are in place for engaging in this comparative analysis, keeping to the goal of only illustrating statistically significant differences between the two profiles or transects whenever you are surveying or monitoring the outcomes for your project at hand.
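The sensitivity-index idea reduces to a thresholding rule: flag only those positions where the relative change between the two profiles exceeds the chosen tolerance (the 3% figure used as an example above). The sketch below shows only that thresholding rule, not the full statistical test; the counts are invented and assumed nonzero.

```python
import numpy as np

def sensitivity_flags(reference, comparison, threshold=0.03):
    """Flag positions where the relative change from the reference profile
    exceeds the chosen sensitivity threshold (default 3%).
    Assumes nonzero reference values."""
    ref = np.asarray(reference, dtype=float)
    comp = np.asarray(comparison, dtype=float)
    relative_change = np.abs(comp - ref) / ref
    return relative_change > threshold

# Hypothetical per-age counts for a reference period and a follow-up period.
reference  = np.array([1000, 1200, 1500, 1400, 900])
comparison = np.array([1010, 1260, 1480, 1330, 905])

print(sensitivity_flags(reference, comparison))  # True only where change > 3%
```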

The next section details how a special method was developed for engaging in a statistical significance evaluation of two lines or surfaces using this particular method above.

.

IP Background.  The entire methodology developed for this work is self-created and proprietary in nature, copyrighted, and not available for any professional use at this point. This methodology is now about 10 years old, with several generations of development over the years. For now, there are no plans to release the details of this formula or the series of methodologies I developed to produce my results, as was the case for the hexagon grid analysis.

This analytic method was designed specifically for exceptionally large-N groups, with the number of primary metrics rows amounting to millions or more. This method is designed for comparing a very large (or even very small) population to an exceptionally large population. It can be used to compare the numbers and types of people in one state engaging in oil/gas consumption relative to another state, in terms of gender and age relations, per area of research selected, even by various subsets of products or expenditures involved.

An additional tool was developed as well for testing and quantifying cost-related outcomes, which will not be described due to the complexity of that tool and its underlying theory and formulas. Suffice it to say, this methodology is applicable to cost- and other population-related metrics and has no parallels in terms of producing a product that defines the entire population's statistical state in terms of exceptionally small theoretical groups, which in this case are age- and gender-defined.

Introduction.  Using health care, one of the largest population-based activities in the current marketplace, as an example, we can see how the population age-gender metrics tool can be applied across the board.

Let us assume for the minute that Company 1 has an excellent database designed to work primarily with prescription-drug-related information. Its database is managed by a series of SQL routines that provide an excellent platform for querying the data and pulling information on the specifics of what you are searching for. It has a very robust program designed to calculate hundreds of metrics, at levels and in forms limited only by the amount of data that is available in form and type (number vs. char, etc.). There are several dozen ways to subcategorize each and every datum in the medical or pharmaceutical dataset. There are even more ways to interpret use over standard periods of time, ranging from cost per unit of drug use to number of refills per year per prescription, on a monthly or quarterly basis. In between these two estimated values are such values as cost per unit taken, cost per 30-day period per rx, cost per week per patient, number of units required per given period of time, and number of containers needed per PBM-store setting per patient with a given disease history. The human part (the error-driving part) of this methodology is based on how we actively and cognitively define what is good and what is bad, such as deciding the best way to break down drugs into specific therapeutic categories, or how best to define the cost for a medication: by month/30-day period, per day, or per dose.
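As a concrete illustration of the kinds of per-unit and per-30-day metrics listed above, the following sketch derives a few of them from a claim-level prescription table. The column names (`total_cost`, `units_dispensed`, `days_supply`) and all values are hypothetical stand-ins for whatever the real PBM dataset contains.

```python
import pandas as pd

# Hypothetical prescription-claim rows.
claims = pd.DataFrame({
    "patient_id":      [1, 1, 2, 3, 3],
    "drug_class":      ["statin", "statin", "SSRI", "statin", "SSRI"],
    "total_cost":      [45.0, 45.0, 120.0, 60.0, 110.0],
    "units_dispensed": [30, 30, 60, 90, 30],
    "days_supply":     [30, 30, 30, 90, 30],
})

# Per-claim metrics: cost per unit taken and cost per 30-day period.
claims["cost_per_unit"] = claims["total_cost"] / claims["units_dispensed"]
claims["cost_per_30d"] = claims["total_cost"] / claims["days_supply"] * 30

# Roll the per-claim metrics up to the therapeutic-class level.
summary = claims.groupby("drug_class").agg(
    mean_cost_per_unit=("cost_per_unit", "mean"),
    mean_cost_per_30d=("cost_per_30d", "mean"),
    patients=("patient_id", "nunique"),
)
print(summary)
```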

On the other hand, Company 2 has data that require a significant amount of processing before they can be accessed by users. These data undergo numerous predefined SQL routines in order to recategorize and recalculate the end results, which form a highly respected way of evaluating patient care within the medical system. The advantage of these data is that there is also the possibility of evaluating this information at the clinical level, depending on the form of medical data presented. In the end, your primary division into datasets is made based first on medical history, in which a systems-based philosophy is used to categorize medical history based on both the clinical and PBM data. This standard method of evaluating cases was first developed at Yale during the 1970s, and has been popular because it combines certain related actions together into a single dataset. The drawbacks to this method are improper allocation of cost-utilization information (assignment of costs to the wrong reason they were accrued, due to other medical problems taking place during the same period of time), and the requirement of specific time periods in which each case can be said to still be ongoing, or closed based on the medical actions taken (last office visit, case closure following a surgery, etc.). The human (error-driving) piece of this methodology is how these case/event differences are defined.

The difference between these two companies is that Company 1 is limited to working at just one level (for example, in-hospital), whereas Company 2 works at two distinct levels (in-hospital and clinics). Whereas Company 1 provides a method of evaluation that is fairly simple to perform and direct in terms of its output, the method related to Company 2, since it requires pre-programming, is much harder to perform and therefore has less variation in how the information is managed for collection. This makes the outcome seem more reliable, multidimensional and systems-based, but also less specific in terms of what exactly the outcomes are related to. For example, if someone were to look specifically at a special ICD-related treatment protocol, the first method offers multiple options on how to call and filter the information appropriately, whereas the second method has this filter already defined and active in the system, but one which cannot guarantee any accurate ICD-related relationship between the rx and the specific disease.

If we wanted to look at Tourette's syndrome, for example, Company 1's methodology allows for specific ICD and rx use, in multiple drug-identifier forms. Company 2's methodology does not allow for direct ICD use, only systems use, in which subsets have to be developed, and then those subsets evaluated again for one-time and multiple-time relationships within each ICD, under the assumption that a one-time ICD-related use may or may not be just an indicator of diagnostic testing activities at the clinical level. To confirm an ICD diagnosis, if we cannot employ the national HEDIS/NCQA method for pulling these cases into a unique dataset, then we have to take a look at the rx use level, assuming there is a drug the patient has been prescribed for Tourette's syndrome. Neither of these two methods is perfect, but one is easier to accomplish than the other and therefore can be run automatically.

Ideally there is also a Company 3 option, in which prescription drug, in-hospital and office-related clinical data are available in independent datasets, without the subcategorizing and redefinition of values required by the Company 2 methodology. Such a method, however, requires still more work to produce the final result that Company 1's products generate. So how do we choose between them? Company 1 methodologies have one set of applications and Company 2 methodologies have another. It is up to the customer in need of this information to decide which way of accomplishing this is best. It is up to the statistician to determine which ways are most accurate for measuring the specific outcomes in need of being evaluated.

If it is cost that is driving the need for such studies, Company 1 is easy to make use of and Company 2 perhaps too cumbersome and time-consuming. If it is public health that is the issue, Company 2 is the best way to go, provided that adequate performance tools are developed to assure that measurements can be made in an accurate and truthful manner, in a fairly automated fashion. In a study of prescription drug utilization compared with clinical utilization-related costs and activities, there tends to be a 4:1 to 10:1 clinical:rx cost relationship. This means that evaluating prescription costs alone is the half-blind research approach to tackling this elephant in population health analysis. Only the blind man is telling us what the population health issues are, not the deaf, anosmic, ageusic, nonproprioceptive or non-tactile co-researchers.

The program I developed works at any of these three levels, since the baseline dataset required for such evaluations relies solely upon a very specific way of interpreting people by age and gender, based on age in one-year increments, treating age as a continuous surface across a plane that can be evaluated in much the same way that transect analyses are performed on such things as riverbed elevation-above-sea-level transects, or cross-range topography transects using digital elevation models. Since numbers are just numbers, the evaluation of an age-gender pyramid can be interpreted much like any continuously changing linear set of data. We can look at the before and after, compare one transect to another, or determine a way to quantify the amount of difference existing between line 1 and line 2.

The main feature of the formulas I like to use is that they search for significant differences, not just the amount of difference. Significant difference is unique relative to other surface trend analysis formulas in that it is employed to define where important differences exist, not just due to size but also due to the likelihood that these differences may or may not exist due to simple chance-related outcomes. Based on variances in age and gender figures, we can tell when a 50% change in N from group 1 to group 2 is significant or not, in other words whether it is due to chance or not. Is a change in the % and n of people less than 18 years of age, for example, from 20% to 30%, significant for the particular population you are looking at? If it is significantly different based on the analytic method being used, then that means that cost projections for that larger group will more than likely be higher, as a consequence of N, not as a consequence of chance. If the difference is not statistically significant, then the difference in dollar value is due to probabilities and nothing more, meaning less attention has to be paid to this problem. Applying this to a true set of numbers for the two populations, a 20% vs. 30% difference carries far more weight when it involves 1 million people than when it involves 10,000 people; this is due to the possible age-gender value variances each group can produce.
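How population size separates chance variation from a real shift can be illustrated with a textbook standard-error calculation, which is a stand-in for, not a statement of, the proprietary variance formulas referred to here. For a baseline proportion of 20%, the sketch prints the approximate 95% chance-driven margin around that proportion at several population sizes.

```python
import math

baseline_p = 0.20  # e.g., the share of members under 18 years of age

for n in (1_000, 10_000, 100_000, 1_000_000):
    se = math.sqrt(baseline_p * (1 - baseline_p) / n)  # binomial standard error
    margin = 1.96 * se                                  # ~95% chance-driven margin
    print(f"N={n:>9,}: expected chance swing is about +/- {margin:.2%}")
```

At N = 1,000 a swing of a couple of percentage points is plausibly chance; at N in the hundreds of thousands, even fractions of a percentage point exceed chance variation, which is consistent with the emphasis on exceptionally large datasets below.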

This also demonstrates why this methodology is meant to be used by very large companies with large datasets, not companies whose largest N in their typical studies measures about 40,000 or less. There is roughly a 95% CI to this method for N = 40,000, and a 99% CI for N = 80,000 (100,000 is even better). But this assumes normalized distributions, which mostly take place at N >> 100,000. The formulas are best applied to exceptionally large datasets. The higher the N, the more reliable the outcomes suggested by the study.

The next formula that I use to evaluate deltas was developed based on some formulas I wrote up back in 1997 in order to analyze three-dimensionality. The initial research question at the time was "how does land surface 1 differ from land surface 2 over time?" You have two surfaces, with wear and tear demonstrated over time on certain parts of them by changes in surface topography. You use this type of formula to make sure the two places being reviewed are essentially the same place with slight changes over time, or to measure how much change occurred temporally and where the changes that took place are statistically significant in terms of size and amounts of change.

The amount of similarity between these two places is thus what is being measured, and a formula needed to be developed that would express this amount of similarity. This formula set could then be related to several dimensions (x, and then y and z), to measure the amount of identical form remaining. This same formula type can be used to compare people's faces. It can be used in 3D form to evaluate both the profile and the depth of a face (nose size, eye cavity form, eyebrow ridges, forehead planarity, chin, etc.). The research question is: how do we develop a formula that will tell you when a change in the surface is significant and to what degree it is significant? For example, a person could have had a nose job and a chin sculpting done, but have identical eye cavity depressions and ridges (items less likely to be easily changed, versus eyelid form and shape). We need a formula that tells you where the changes exist and to what degree they differ from each other.

Now, reduce this form of analysis down to two dimensions, focusing on transects. One can look at the transect of a land surface and determine where erosion has taken place along the edge of a mountain, and determine where the alluvial fan formed by gravel, sand and soil at the mountain base has widened and by how much, over a given period of time. This is much like comparing just the profiles of the noses and eyebrow ridges on two faces, before and after plastic surgery. This method can also be applied to curves with constantly varying shape and form. A migrating ridge on a curve can be identified, or the amount of difference seen in two separate curves that seem very similar can be evaluated, and that difference can be determined to be statistically significant or not. This method is applicable to analyses of age-gender-n population pyramid curves. One can use this method to compare two population age-gender-n curves.

I developed a number of ways to test the population age-gender curves over the years. Back in the early to mid 2000s these were used to analyze statewide statistics pertaining to health-insured populations (see the multipage section on HEDIS/NCQA work performance in BIOSTATISTICS/Quality Assurance/Population Health and Disease Monitoring . . . for more). At first I was just trying to define the populations included in and excluded from my statewide analyses, but I realized that by applying this methodology to specific subsets of populations, the resulting population profiles could be used to explain the findings made at various clinical levels. Since formulas are simply relationships between numbers, sets and subsets, I realized this methodology, which I was once employing at the remote sensing and surveillance ID-recognition level between approximately 1995 and 1997, had applications elsewhere.

Looking simply at percent comparisons is one way to evaluate two age-gender-n graphs, but finding out where there is a statistically significant difference between the two bumps or ridges is a much harder task to perform. This is what my formulas and methods of engaging in statistical evaluation of surfaces were developed for. You can have two populations, one with a fairly large population of kids, another with a smaller population of kids but with gender-related differences (i.e., more young potential mothers <18), and you need to know whether or not this difference in kids is going to be statistically significant. If there are statistically significant differences, this may suggest that the two populations could behave differently in some sort of statistically significant cost-related way as well, in turn suggesting the need for additional intervention activities pertaining to that age group which differ for the two groups (i.e., more allocation of teachers and money to health education programs, or making future classroom size projections related to increased needs for health education programs).

. . . to be continued

 

“. . . the solutions to our problems lie outside the box.”

Aviation Week & Space Technology, July 1975
