In my previous post, we explored how changes to a health center’s workflows can cause mapping issues that must be fixed with new data extracts. In this post, we’ll discuss a different type of mapping issue that affects a center’s data when it reaches Azara: normalization.
The concept of data normalization is fairly basic (think “apples to apples” comparisons). We should be able to categorize all observations of a certain type and give them the same name. Take an A1c test, for example. In order to report on whether a diabetic patient’s A1c is under control, you must identify all A1c tests. This may seem easy, and it is – for a human. But consider how an A1c test is ordered. If a test is done at the point of care, you might place an order for “In-House A1c (POCT).” If it’s going to Quest Diagnostics, it might be an order for “Glycosylated Hemoglobin.” LabCorp might call it “Hemoglobin A1c,” and may have two different orders, depending on whether you need the estimated average glucose (EAG) in addition to the A1c.
As a healthcare professional, you intuitively know these are the same tests – but a computer doesn’t. We maintain “mappings” within DRVS of the varying names our clients use for A1cs, LDLs, and every other observation we report on. Whenever we’re presented with something new, we must “map” it to the appropriate category within DRVS. It’s a manual process, but we’re constantly developing ways to make it easier.
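At its core, this kind of mapping is just a lookup table from source-system names to a canonical category. Here’s a minimal sketch in Python – the order names are hypothetical examples from this post, and the real mappings DRVS maintains are far larger and built by hand:

```python
# Minimal sketch of observation-name normalization.
# Source names below are hypothetical; real mappings are much larger.
A1C_ALIASES = {
    "in-house a1c (poct)": "A1c",
    "glycosylated hemoglobin": "A1c",
    "hemoglobin a1c": "A1c",
    "hemoglobin a1c with eag": "A1c",
}

def normalize_observation(order_name: str) -> str:
    """Map a source-system order name to a canonical category.

    Unrecognized names are flagged for manual mapping rather than
    being silently dropped or miscategorized.
    """
    key = order_name.strip().lower()
    return A1C_ALIASES.get(key, "UNMAPPED")

print(normalize_observation("Hemoglobin A1c"))  # A1c
print(normalize_observation("Lipid Panel"))     # UNMAPPED
```

The “UNMAPPED” fallback mirrors the manual step described above: anything new surfaces for a human to map, rather than disappearing from reports.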
Let’s touch on one more normalization issue: the concept of standardization – the idea that there should be a single unambiguous method to identify health data. Currently, there are many standards to choose from. AMA, WHO, IHTSDO, and the Regenstrief Institute maintain various standards, such as CPT, ICD-10, SNOMED-CT, and LOINC. As long as multiple standards exist, normalization is needed to translate between them. However, standardization simplifies the mapping process, and helps drive toward interoperability.
Evan Weixel is an engineer at Azara Healthcare.