LinkSolv 8.3 Help Pages and User Guide

Outcome Tests Tab

1. Where am I?
 
Outcome Tests -- Guidelines
Outcome Testing. Review match results in order to validate a current linkage model or improve your linkage model for use in future matches. Use the Outcome Test Tab to test for comparisons with dependent outcomes (Outcome Tests). Test results should not be used to revise the match that is the basis for the results.
First Outcome Test Example. The first outcome test suggests that CrashCounty compared to SceneCounty (event location) and StdDate compared to StdDateCall (event date) have independent comparison outcomes on both Matched pairs (p Value for M = 0.90) and Unmatched pairs (p Value for U = 0.26). Consequently, no correction for dependent outcomes is necessary, and all X values for these fields in the match specifications should be 1.00. This is expected because reporting errors in the simulated data are random and independent.
Second Outcome Test Example. The second outcome test suggests that the StdDobD (day of birth) comparison and the StdDobM (month of birth) comparison have significantly dependent comparison outcomes on Matched pairs (p Value for M < 0.01), and that Disagree X equal to 0.68 would correct for the observed dependency. The test results suggest that misreporting of day of birth is correlated with misreporting of month of birth. This is expected because day of birth and month of birth are derived from date of birth. Consequently, when date of birth is reported incorrectly in the simulated data, there is a high probability that both day of birth and month of birth are incorrect on the same record. This may not be the case in real data if most reporting errors are single typos during data entry.
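The p values in these examples come from a standard chi-square test of independence on the 2x2 table of Agree/Disagree outcomes for the two fields, computed separately for Matched and Unmatched pairs. A minimal sketch of that test, using hypothetical counts (the numbers below are illustrative, not taken from the simulated data):

```python
import math

# Hypothetical Agree/Disagree counts for two field comparisons
# (e.g. day of birth vs. month of birth) among Matched pairs.
# Rows: field A agrees / disagrees; columns: field B agrees / disagrees.
table = [[900, 30],
         [20, 50]]

def chi_square_independence(table):
    """Chi-square test of independence for a 2x2 table (1 degree of freedom)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    n = sum(row_sums)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    # For 1 degree of freedom, the chi-square survival function
    # (the p value) is erfc(sqrt(stat / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, p = chi_square_independence(table)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
# A tiny p value (far below 0.01) signals dependent outcomes, as in the
# StdDobD/StdDobM example; large p values (0.90, 0.26) signal independence.
```

In the hypothetical table above, disagreements on the two fields cluster on the same records (50 joint disagreements where independence would predict about 6), so the test rejects independence.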
Set Agree X < 1.0 for Dependent Comparison Pairs. Even though the test results show Agree X equal to 1.00, set Agree X to the same value slightly less than 1.00 (say 0.99 for the StdDobD and StdDobM comparisons) in your future match specifications for any dependent comparison pairs. This is necessary so that the Markov Chain Monte Carlo (MCMC) linkage algorithm will re-estimate the correct pairs of X factors.
Measuring Dependent Outcomes. p values measure whether apparent dependencies might have arisen by chance, not whether the dependencies are weak or strong. For the latter, you have to look at a different measure. The r values used with linear regression apply to numeric variables and don't apply here. Instead, we measure uncertainty coefficients, which apply to categorical values like Agree and Disagree. Uncertainty coefficients (and the X factors derived from them) measure the strength of dependencies but don't tell you anything about statistical significance, which is why you have to look at both. You only have to make a correction (set your X factors less than 1.00) when the p values indicate that a dependency is statistically significant. Even when the p values are very small, the X factors might indicate that the dependencies are very weak, so that only small corrections are needed.
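An uncertainty coefficient U(Y|X) measures the fraction of the entropy of one categorical outcome that is explained by another: U(Y|X) = (H(Y) - H(Y|X)) / H(Y), ranging from 0 (no dependence) to 1 (Y fully determined by X). A minimal sketch of the computation on a hypothetical Agree/Disagree table (illustrative numbers only, not LinkSolv's internal code):

```python
import math

def entropy(probs):
    """Shannon entropy in nats, ignoring zero probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_coefficient(table):
    """U(Y|X) for a contingency table: rows index X, columns index Y."""
    n = sum(sum(row) for row in table)
    col_sums = [sum(col) for col in zip(*table)]
    h_y = entropy([c / n for c in col_sums])
    # Conditional entropy H(Y|X): entropy of each row's distribution,
    # weighted by how often that row occurs.
    h_y_given_x = 0.0
    for row in table:
        row_total = sum(row)
        h_y_given_x += (row_total / n) * entropy([c / row_total for c in row])
    return (h_y - h_y_given_x) / h_y

# Hypothetical Agree/Disagree counts among Matched pairs.
u = uncertainty_coefficient([[900, 30], [20, 50]])
print(f"U = {u:.2f}")
# U near 0 means a weak dependency (only a small correction to X factors
# is needed); U near 1 means a strong one. Statistical significance still
# comes from the p value, which is why both measures are needed.
```

This illustrates the point in the paragraph above: a p value can be extremely small (the dependency is certainly real) while U stays modest (the dependency is not very strong), and only the combination tells you how large a correction to make.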
2. Outcome Test
 
Fields from A. For each row, select a field from Source A from the drop down list.
Fields from B. For each row, select a field from Source B from the drop down list.
Compare Methods. For each row, select a comparison method from the drop down list.
Tols Lo, Hi. For each row, enter low and high tolerances for agreements.
Test Outcomes. Click on Test Outcomes to run the test. This may take a few minutes.
p Val M:U. Test results show Chi-Square p values for matched record pairs (M) on the first row and for unmatched record pairs (U) on the second row.
Log Reg M:U Inter. Test results show Agree X values for the specified comparison method and tolerance.
Log Reg M:U Coeff. Test results show Disagree X values for the specified comparison method and tolerance.
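The Log Reg column names suggest these values are derived from a logistic regression of one field's Agree/Disagree outcome on the other's. How LinkSolv maps the fitted intercept and coefficient to X factors is not documented here, but for a 2x2 table the regression itself has a simple closed form: the intercept is the log odds of agreement in the reference group, and the coefficient is the log odds ratio. A hedged sketch with hypothetical counts:

```python
import math

# Hypothetical Agree/Disagree counts among Matched pairs.
# a = field A outcome, b = field B outcome.
a_agree_b_agree, a_agree_b_dis = 900, 30
a_dis_b_agree, a_dis_b_dis = 20, 50

# Logistic regression of "field B agrees" on "field A agrees":
# with a single binary predictor, the maximum-likelihood fit has a
# closed form in terms of the cell counts.
intercept = math.log(a_dis_b_agree / a_dis_b_dis)          # log odds of B agreeing when A disagrees
coefficient = math.log((a_agree_b_agree * a_dis_b_dis) /
                       (a_agree_b_dis * a_dis_b_agree))    # log odds ratio

print(f"intercept = {intercept:.2f}, coefficient = {coefficient:.2f}")
# A coefficient near 0 (odds ratio near 1) means the outcomes are
# independent; a large coefficient, as here, reflects a strong dependency.
```

This is only an illustration of why a logistic regression captures outcome dependence between two comparisons; the X values LinkSolv reports in these columns come from its own fitted model.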
 
3. Test Outcomes button
 
Test Outcomes. Click on Test Outcomes to run the test. This may take a few minutes.
Authored with help of Dr.Explain