Who Assesses the Assessors? Studying Prescription Drug Monitoring Program Data Quality

The current opioid epidemic is unprecedented in its scope, accounting for the deaths of 72,000 Americans in 2017, more than car accidents and gun homicides combined. In response, policymakers have been searching for solutions. One such solution is the Prescription Drug Monitoring Program (PDMP): a database that tracks prescriptions by patient and provider and that providers can consult before prescribing certain medications, including opioids. The Centers for Disease Control and Prevention have called PDMPs “the most promising state-level interventions to improve opioid prescribing, inform clinical practice, and protect patients at risk,” and PDMPs have been adopted statewide in every state except Missouri. The effects of PDMP implementation have nonetheless proven difficult to assess, as illustrated by a New England Journal of Medicine paper showing no association between PDMPs and opioid-related outcomes, such as the percentage of patients with four or more opioid prescribers or the percentage of patients with daily medication doses higher than 120 mg.

To make sense of these contradictory findings, Horwitz et al. assessed whether data quality problems have led to such “inconclusive and contradictory results.” To do so, they examined the decisions that researchers must make when using PDMP datasets, created a novel research protocol, and applied it to various public data sources to analyze the difficulties researchers face. In addition to documenting a lack of transparency in how these data sets are constructed, the authors found date discrepancies sometimes spanning years, and that the choice of data set produced as much as a twofold difference in regression point estimates, raising serious concerns about the quality of the public data sources used to assess PDMPs.

One key aspect of this study is the authors’ publication of a transparent and reproducible research protocol, assembled by trained legal epidemiology researchers for use with PDMP data. This decision stemmed from the authors’ observation that researchers using PDMP data most often rely on third-party, publicly available data sets, which often disagree with one another. Furthermore, it is difficult to assess the validity of any particular data source because “the source of the data published in the public databases themselves is often a black box.” Compounding this problem, the authors point out that even seemingly simple facts, such as the date a law was “enacted” or the date a PDMP became “operational,” can be difficult to determine. The authors conclude that “even experienced legal researchers have difficulty creating consistent and reliable measures of PDMP law enactment and operation dates.” Publishing a transparent and reproducible protocol, however, allows future researchers to reuse it and to debate improvements, a significant step forward from the status quo.
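The kind of inconsistency the authors describe can be illustrated with a simple consistency check. The sketch below is not the authors’ protocol; the two data sources, the state entries, and the dates are hypothetical placeholders, meant only to show how one might flag states whose reported PDMP enactment dates disagree across public sources.

```python
# A minimal sketch (not the authors' protocol) of the kind of check the paper
# motivates: compare PDMP enactment dates reported by two hypothetical public
# sources and flag states where they disagree.
import pandas as pd

# Hypothetical states and dates, for illustration only.
source_a = pd.DataFrame({
    "state": ["State 1", "State 2", "State 3"],
    "enactment_date": pd.to_datetime(["2007-06-01", "2012-08-27", pd.NaT]),
})
source_b = pd.DataFrame({
    "state": ["State 1", "State 2", "State 3"],
    "enactment_date": pd.to_datetime(["2004-01-01", "2012-08-27", pd.NaT]),
})

merged = source_a.merge(source_b, on="state", suffixes=("_a", "_b"))
merged["gap_days"] = (
    merged["enactment_date_a"] - merged["enactment_date_b"]
).abs().dt.days

# Any nonzero gap means the two sources tell a different story for that state.
discrepancies = merged[merged["gap_days"].fillna(0) > 0]
print(discrepancies[["state", "enactment_date_a", "enactment_date_b", "gap_days"]])
```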

The second key aspect of this study is the comparison of data generated by the authors’ protocol with publicly available data sets, demonstrating how widely results may differ, sometimes by an order of magnitude. In other words, the method of analysis is key to understanding the effect, if any, that PDMPs have had on prescribing patterns. As one example, the association between PDMP enactment and the prevalence of disabled Medicare beneficiaries with prescriptions dispensed at five or more pharmacies can appear as a significantly positive effect, no effect, or a significantly negative effect depending on the data set and protocol used.
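To see how such shifts can arise mechanically, consider re-running the same panel regression under two different codings of when each state’s PDMP took effect. The sketch below is purely illustrative and not the authors’ specification: the simulated outcome, the state and year fixed-effects model, and the two hypothetical enactment-date sources are all assumptions.

```python
# Illustrative only: one two-way fixed-effects regression estimated twice,
# once per hypothetical enactment-date coding, to show how the PDMP
# coefficient depends on which dates are used. With real outcome data,
# the estimate can move substantially or even flip sign.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = list("ABCDEFGH")
years = range(2008, 2016)
panel = pd.DataFrame([(s, y) for s in states for y in years],
                     columns=["state", "year"])
panel["outcome"] = rng.normal(size=len(panel))  # stand-in for an opioid outcome

# Hypothetical enactment years from "source A"; "source B" records dates
# two years later (e.g., operational rather than enactment dates).
enact_a = {"A": 2009, "B": 2010, "C": 2011, "D": 2012,
           "E": 2013, "F": 2010, "G": 2014, "H": 2011}
enact_b = {s: y + 2 for s, y in enact_a.items()}

for label, enact in [("source A", enact_a), ("source B", enact_b)]:
    panel["pdmp"] = (panel["year"] >= panel["state"].map(enact)).astype(int)
    fit = smf.ols("outcome ~ pdmp + C(state) + C(year)", data=panel).fit()
    print(label, "PDMP coefficient:", round(fit.params["pdmp"], 3))
```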

The biggest lesson policymakers can take from these findings is that methodological details are vital to policy research, as is transparency in the construction of publicly available data. As the authors themselves state: “although it can be hard to trace details, it is often important to do so.” This is true not just in studying the effects of PDMPs, or in health policy research more generally, but in all policy research. For example, the authors point to similar issues in research on state welfare reform in the 1990s (Blank, 2002), where, as with PDMPs, data sets spanned many states over many years, making it extremely difficult to track significant details of the data.

Although there is no single correct way to analyze data, the authors show how small differences in data analysis can cause large shifts in results. Transparent and detailed methodologies are therefore necessary if PDMP research is to produce significant and applicable findings. Perhaps, as the authors do here and as is done in other fields, researchers should publish multiple results alongside the methodology used for each, and then make the case for one methodology over the others. Doing so would better convey the range of possibilities to policymakers, other researchers, and the public, while maintaining the relevance and applicability of the results.

Article source: Horwitz, Jill, Corey Davis, Lynn McClelland, Rebecca Fordon, and Ellen Meara. “The Problem of Data Quality in Analyses of Opioid Regulation: The Case of Prescription Drug Monitoring Programs.” National Bureau of Economic Research Working Paper Series No. 24947 (2018).

Featured photo: cc/(Zinkevych, photo ID: 959453578, from iStock by Getty Images)
