Quality Is Critical In Today’s Data Deluge: Put Processes and Tools In Place For Robust Data Quality

By Rahul Mehta, senior vice president and head of data management proficiency, CitiusTech.


The sheer volume and variety of data now available to healthcare organizations, from claims and EMRs to lab systems and IoT devices, is mind-boggling. The potential to put data from these myriad sources to work for real-time care intervention, clinical quality improvement, and value-based payment models is unfolding fast.

Yet, as organizations seek to aggregate, normalize and draw insights from large and diverse data sets, the importance of data quality becomes apparent. Consider an activity as fundamental as identifying the correct patient. According to Black Book Research, roughly 33 percent of denied claims can be attributed to inaccurate patient identification, costing the average hospital $1.5 million in 2017.

Beyond denials, the average cost of repeated medical care caused by inaccurate patient identification and duplicate records is roughly $1,950 per inpatient stay and more than $800 per emergency department visit.

As data quality becomes more important, healthcare organizations need to understand the key characteristics that affect quality: accuracy, completeness, consistency, uniqueness and timeliness. However, data reliability and integrity also depend on other key factors, including data governance, de-duplication, metadata management, auditability and data quality rules.
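
To make these dimensions concrete, here is a minimal Python sketch of how each might translate into an automated check on a single patient record. The field names (mrn, dob, last_updated) and thresholds are purely illustrative assumptions, and consistency, which requires comparing values across systems, is omitted for brevity.

```python
from datetime import datetime

# Hypothetical patient record; field names and values are illustrative only.
record = {
    "mrn": "12345",
    "first_name": "Jane",
    "last_name": "Doe",
    "dob": "1980-02-30",          # Feb 30 does not exist -> accuracy issue
    "gender": "F",
    "last_updated": "2017-01-15",
}

REQUIRED_FIELDS = ["mrn", "first_name", "last_name", "dob", "gender"]

def missing_fields(rec):
    """Completeness: every required field is present and non-empty."""
    return [f for f in REQUIRED_FIELDS if not rec.get(f)]

def dob_is_valid(rec):
    """Accuracy (simplified): the date of birth parses as a real calendar date."""
    try:
        datetime.strptime(rec.get("dob", ""), "%Y-%m-%d")
        return True
    except ValueError:
        return False

def is_timely(rec, max_age_days=365):
    """Timeliness: the record was updated within the allowed window."""
    updated = datetime.strptime(rec["last_updated"], "%Y-%m-%d")
    return (datetime.now() - updated).days <= max_age_days

def duplicate_ids(records, key="mrn"):
    """Uniqueness: no two records should share the same identifier."""
    seen, dupes = set(), set()
    for r in records:
        rid = r.get(key)
        if rid in seen:
            dupes.add(rid)
        else:
            seen.add(rid)
    return dupes

print("Missing fields:", missing_fields(record))   # -> []
print("DOB valid:", dob_is_valid(record))          # -> False
print("Timely:", is_timely(record))                # -> False (stale record)
```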

With a strategic approach, healthcare organizations can employ a unified data strategy with strong governance for data quality across all data types, sources and use cases, giving them the ability to scale and extend to new platforms, systems and healthcare standards. The result is an approach that uses a combination of industry best-practices and technology tools to overcome common challenges and assure data quality for the long term.

Understanding Data Quality Challenges

Historically, providers and payers alike treated data quality as a peripheral issue, but that is no longer viable in today’s complex data ecosystems. First, there is a wide diversity of data sources and formats: EHRs, clinical systems, claims, consumer applications, and medical devices. Add to that the challenges associated with legacy applications, automation needs, interoperability, data standards and scalability.

Lastly, there are increasing numbers of use cases for clinical quality, utilization, risk management, regulatory submission, population health, and claims management that need to be supported.

Considering the current data environment, the downstream effects of data quality issues can be significant and costly. In the case of patient matching referenced above, something as common as two hospitals merging into the same health system while following different data-entry protocols can lead to duplicate and mismatched patient records. It can also lead to critical patient data elements, such as date of birth, being documented differently by different facilities and then made available across multiple systems in varying formats.
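
As a rough illustration of the date-of-birth problem, the sketch below normalizes a handful of common date formats before comparing records, so the same patient captured as "03/02/1975" at one facility and "1975-03-02" at another is flagged as a probable duplicate. The match criteria (exact name plus normalized DOB) and field names are deliberately simplistic assumptions, not a production matching algorithm.

```python
from datetime import datetime
from itertools import combinations

# Illustrative only: date formats commonly seen across facilities.
DOB_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y")

def normalize_dob(raw):
    """Return the date of birth in ISO format, or None if it cannot be parsed."""
    for fmt in DOB_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

def probable_duplicates(records):
    """Flag record pairs that share a normalized name and date of birth."""
    pairs = []
    for a, b in combinations(records, 2):
        same_name = (a["last_name"].lower(), a["first_name"].lower()) == \
                    (b["last_name"].lower(), b["first_name"].lower())
        dob_a, dob_b = normalize_dob(a["dob"]), normalize_dob(b["dob"])
        if same_name and dob_a is not None and dob_a == dob_b:
            pairs.append((a["mrn"], b["mrn"]))
    return pairs

records = [
    {"mrn": "A-100", "first_name": "Jane", "last_name": "Doe", "dob": "1975-03-02"},
    {"mrn": "B-777", "first_name": "Jane", "last_name": "Doe", "dob": "03/02/1975"},
]
print(probable_duplicates(records))   # -> [('A-100', 'B-777')]
```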

It’s easy to imagine how issues can cascade quickly across multiple critical clinical, financial and operational processes. The cost of fixing data issues increases exponentially as the data moves through a highly connected and interoperable ecosystem. Therefore, reconciling data effectively and creating a single and reliable source of truth is necessary for quality improvement, regulatory compliance, and profitability.

Build A Roadmap Around Your Organization’s Objectives

Before starting any data quality initiative, it is helpful for organizations to identify key use cases and the specific data quality characteristics that can have the maximum positive impact on their program. By starting with high-impact initiatives, organizations can build an incremental approach that expands to new use cases and data types over time.

As an initial step, common use cases can be categorized to understand which data characteristics are most important. For use cases around population health analytics, utilization management, and contract/network management, accuracy and completeness of data are essential parameters, while timeliness may be secondary.

These use cases don’t require real-time, streaming data to contribute to positive outcomes. However, having real-time or near real-time data becomes extremely important for clinical gap closure, clinical alerts, and point-of-care decision support.

Structure a Data Quality Program Around Best Practices and Technology Tools

With specific use cases and short- and long-term objectives in mind, the organization can build an effective data quality management program that combines industry best-practices and technology tools.

With strong data management processes in place, healthcare organizations can identify and resolve quality issues at the source, rather than attempting to reconcile data across multiple systems once incorrect data has moved downstream. Data governance should include processes and procedures for de-duplication, metadata management, and auditing.

In addition, data quality issues can be reduced by leveraging industry data and interoperability standards, such as Fast Healthcare Interoperability Resources (FHIR), RxNorm, Consolidated Clinical Document Architecture (C-CDA), and Logical Observation Identifiers Names and Codes (LOINC). Keeping processes in sync with the standards relevant to your organization and use cases ensures data quality management and interoperability work in tandem.
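
To illustrate how a standards check might look in practice, the sketch below verifies that a FHIR-style Observation resource carries a LOINC coding (http://loinc.org is the system URI FHIR uses for LOINC codes). The resource content is hypothetical, and a real pipeline would rely on a proper FHIR validator and terminology service rather than a hand-rolled check.

```python
# Minimal sketch: confirm a FHIR-style Observation carries a LOINC coding.
LOINC_SYSTEM = "http://loinc.org"

observation = {
    "resourceType": "Observation",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "718-7",
             "display": "Hemoglobin [Mass/volume] in Blood"}
        ]
    },
}

def has_loinc_coding(resource):
    """True if at least one coding on the resource uses the LOINC system."""
    codings = resource.get("code", {}).get("coding", [])
    return any(c.get("system") == LOINC_SYSTEM and c.get("code") for c in codings)

print(has_loinc_coding(observation))  # -> True
```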

Healthcare organizations can also take advantage of the many technology tools available to create a robust, scalable and secure environment. Such tools add data quality checks, pre-defined data quality rules and best-practice remediation approaches. These technologies can ensure that data coming from external sources are universally accessible and usable, while automatically detecting and correcting data quality issues in real time for accuracy, completeness, redundancy, duplication, and logical correctness.
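
One way to picture such tooling is a small table of pre-defined rules, each pairing a detection check with a remediation action and an audit trail of what was changed. The rules and field names below are illustrative assumptions only, not a description of any particular vendor's product.

```python
# Illustrative rule table: each rule pairs a detection check with a remediation.
RULES = [
    {
        "name": "trim_whitespace",
        "applies": lambda rec: any(isinstance(v, str) and v != v.strip()
                                   for v in rec.values()),
        "fix": lambda rec: {k: v.strip() if isinstance(v, str) else v
                            for k, v in rec.items()},
    },
    {
        "name": "standardize_gender",
        "applies": lambda rec: rec.get("gender", "").lower() in {"male", "female"},
        "fix": lambda rec: {**rec, "gender": rec["gender"][0].upper()},
    },
]

def apply_rules(record):
    """Run each rule, recording which remediations fired for auditability."""
    audit_log = []
    for rule in RULES:
        if rule["applies"](record):
            record = rule["fix"](record)
            audit_log.append(rule["name"])
    return record, audit_log

raw = {"mrn": " 12345 ", "gender": "female"}
clean, log = apply_rules(raw)
print(clean)  # -> {'mrn': '12345', 'gender': 'F'}
print(log)    # -> ['trim_whitespace', 'standardize_gender']
```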

A Singular Standard for Data Quality

With a clear picture of what data quality means and its impact on clinical, financial and operational processes, healthcare organizations can build an effective, scalable program that meets short- and long-term objectives. By putting processes and tools in place to manage the accuracy, completeness, consistency, uniqueness and timeliness of data, organizations can maximize the impact on their core use cases for population health, clinical gap closure, clinical alerts, point-of-care decision support, and more.

This combination of industry best-practices and readily available technology tools results in a singular standard for data quality across all data types, sources and use cases – including the ability to scale to incorporate new platforms, systems and healthcare standards over time.

