Record linkage

Record linkage (RL) refers to the task of finding records in a data set that refer to the same entity across different data sources (e.g., data files, books, websites, databases). Record linkage is necessary when joining data sets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), as may be the case due to differences in record shape, storage location, or curator style or preference. A data set that has undergone RL-oriented reconciliation may be referred to as being cross-linked. In many jurisdictions record linkage is called data linkage, but the two terms describe the same process.

History[edit]

The initial idea of record linkage goes back to Halbert L. Dunn in his 1946 article titled "Record Linkage", published in the American Journal of Public Health.[1] Howard Borden Newcombe laid the probabilistic foundations of modern record linkage theory in a 1959 article in Science,[2] which were then formalized in 1969 by Ivan Fellegi and Alan Sunter, who proved that the probabilistic decision rule they described was optimal when the comparison attributes were conditionally independent. Their pioneering work "A Theory For Record Linkage"[3] remains the mathematical foundation for many record linkage applications even today.

Since the late 1990s, various machine learning techniques have been developed that can, under favorable conditions, be used to estimate the conditional probabilities required by the Fellegi-Sunter (FS) theory. Several researchers have reported that the conditional independence assumption of the FS algorithm is often violated in practice; however, published efforts to explicitly model the conditional dependencies among the comparison attributes have not resulted in an improvement in record linkage quality.[citation needed]

Record linkage can be done entirely without the aid of a computer, but the primary reasons computers are often used for record linkage are to reduce or eliminate manual review and to make results more easily reproducible. Computer matching has the advantages of allowing central supervision of processing, better quality control, speed, consistency, and better reproducibility of results.[4]

Naming conventions[edit]

"Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. Commercial mail and database applications refer to it as "merge/purge processing" or "list washing".Computer scientists often refer to it as "data matching" or as the "object identity problem". Other names used to describe the same concept include: "coreference/entity/identity/name/record resolution", "entity disambiguation/linking", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration", "entity resolution" and "conflation".[5] This profusion of terminology has led to few cross-references between these research communities.[6][7]

While they share similar names, record linkage and Linked Data are two separate concepts. Whereas record linkage focuses on the more narrow task of identifying matching entities across different data sets, Linked Data focuses on the broader methods of structuring and publishing data to facilitate the discovery of related information.

Methods[edit]

Data preprocessing[edit]

Record linkage is highly sensitive to the quality of the data being linked, so all data sets under consideration (particularly their key identifier fields) should ideally undergo a data quality assessment prior to record linkage. Many key identifiers for the same entity can be presented quite differently between (and even within) data sets, which can greatly complicate record linkage unless understood ahead of time. For example, key identifiers for a man named William J. Smith might appear in three different data sets as follows:

Data set     Name               Date of birth   City of residence
Data set 1   William J. Smith   1/2/73          Berkeley, California
Data set 2   Smith, W. J.       1973.1.2        Berkeley, CA
Data set 3   Bill Smith         Jan 2, 1973     Berkeley, Calif.

In this example, the different formatting styles lead to records that look different but in fact all refer to the same entity with the same logical identifier values. Most, if not all, record linkage strategies would result in more accurate linkage if these values were first normalized or standardized into a consistent format (e.g., all names are "Surname, Given name", and all dates are "YYYY/MM/DD"). Standardization can be accomplished through simple rule-based data transformations or more complex procedures such as lexicon-based tokenization and probabilistic hidden Markov models.[8] Several of the packages listed in the Software Implementations section provide some of these features to simplify the process of data standardization.
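To make this concrete, here is a minimal, hedged sketch of rule-based standardization applied to the three example records above. The nickname table, accepted date formats, and output conventions are assumptions chosen for this illustration, not features of any particular package.

```python
# A minimal sketch of rule-based standardization (illustrative only).
from datetime import datetime

NICKNAMES = {"bill": "william", "bob": "robert"}  # tiny illustrative lookup

def standardize_name(raw: str) -> str:
    """Return 'surname, first given name' in lowercase, resolving known nicknames."""
    raw = raw.replace(".", "").strip()
    if "," in raw:                      # already 'Surname, Given ...'
        surname, given = [p.strip() for p in raw.split(",", 1)]
    else:                               # assume 'Given ... Surname'
        parts = raw.split()
        surname, given = parts[-1], " ".join(parts[:-1])
    first = given.split()[0].lower() if given else ""
    first = NICKNAMES.get(first, first)
    return f"{surname.lower()}, {first}"

def standardize_date(raw: str) -> str:
    """Try several date formats and return YYYY/MM/DD."""
    for fmt in ("%m/%d/%y", "%Y.%m.%d", "%b %d, %Y"):
        try:
            return datetime.strptime(raw, fmt).strftime("%Y/%m/%d")
        except ValueError:
            continue
    return raw  # leave unparseable values untouched

print(standardize_name("William J. Smith"), standardize_date("1/2/73"))
print(standardize_name("Smith, W. J."), standardize_date("1973.1.2"))
print(standardize_name("Bill Smith"), standardize_date("Jan 2, 1973"))
```

Real pipelines typically rely on much richer lexicons and probabilistic parsers, as noted above; this sketch only shows the general shape of the transformation.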

Identity resolution[edit]

Identity resolution is an operational intelligence process, typically powered by an identity resolution engine or middleware, whereby organizations can connect disparate data sources with a view to understanding possible identity matches and non-obvious relationships across multiple data silos. It analyzes all of the information relating to individuals and/or entities from multiple sources of data, and then applies likelihood and probability scoring to determine which identities are a match and what, if any, non-obvious relationships exist between those identities.

Identity resolution engines are typically used to uncover risk, fraud, and conflicts of interest, but are also useful tools for use within Customer Data Integration (CDI) and Master Data Management (MDM) requirements. Typical uses for identity resolution engines include terrorist screening, insurance fraud detection, USA Patriot Act compliance, organized retail crime ring detection, and applicant screening.

For example: Across different data silos - employee records, vendor data, watch lists, etc. - an organization may have several variations of an identity named ABC, which may or may not be the same individual. These entries may, in fact, appear as ABC1, ABC2, or ABC3 within those data sources. By comparing similarities between underlying attributes such as address, date of birth, or social security number, the user can eliminate some possible matches and confirm others as very likely matches.

Identity resolution engines then apply rules, based on common sense logic, to identify hidden relationships across the data. In the example above, perhaps ABC1 and ABC2 are not the same individual, but rather two distinct people who share common attributes such as address or phone number.

Data matching[edit]

While entity resolution solutions include data matching technology, many data matching offerings do not fit the definition of identity (or entity) resolution. According to John Talburt, director of the UALR Center for Advanced Research in Entity Resolution and Information Quality, four factors distinguish entity resolution from data matching:

  • Works with both structured and unstructured records, and entails extracting references when the sources are unstructured or semi-structured
  • Uses elaborate business rules and concept models to deal with missing, conflicting, and corrupted information
  • Utilizes non-matching, asserted linking (associate) information in addition to direct matching
  • Uncovers non-obvious relationships and association networks (i.e. who's associated with whom)

In contrast to data quality products, more powerful identity resolution engines also include a rules engine and workflow process, which apply business intelligence to the resolved identities and their relationships. These advanced technologies make automated decisions and impact business processes in real time, limiting the need for human intervention.

Deterministic record linkage[edit]

The simplest kind of record linkage, called deterministic or rules-based record linkage, generates links based on the number of individual identifiers that match among the available data sets.[9] Two records are said to match via a deterministic record linkage procedure if all or some identifiers (above a certain threshold) are identical. Deterministic record linkage is a good option when the entities in the data sets are identified by a common identifier, or when there are several representative identifiers (e.g., name, date of birth, and sex when identifying a person) whose quality of data is relatively high.

As an example, consider two standardized data sets, Set A and Set B, that contain different bits of information about patients in a hospital system. The two data sets identify patients using a variety of identifiers: Social Security Number (SSN), name, date of birth (DOB), sex, and ZIP code (ZIP). The records in two data sets (identified by the "#" column) are shown below:

Data Set   #   SSN         Name             DOB          Sex      ZIP
Set A      1   000956723   Smith, William   1973/01/02   Male     94701
Set A      2   000956723   Smith, William   1973/01/02   Male     94703
Set A      3   000005555   Jones, Robert    1942/08/14   Male     94701
Set A      4   123001234   Sue, Mary        1972/11/19   Female   94109
Set B      1   000005555   Jones, Bob       1942/08/14
Set B      2               Smith, Bill      1973/01/02   Male     94701

The most simple deterministic record linkage strategy would be to pick a single identifier that is assumed to be uniquely identifying, say SSN, and declare that records sharing the same value identify the same person while records not sharing the same value identify different people. In this example, deterministic linkage based on SSN would create entities based on A1 and A2; A3 and B1; and A4. While A1, A2, and B2 appear to represent the same entity, B2 would not be included into the match because it is missing a value for SSN.
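The strategy just described can be sketched in a few lines. The record dictionaries below simply restate the example tables, and the field names are choices made for this illustration only.

```python
# A minimal sketch of deterministic linkage on a single identifier (SSN).
set_a = [
    {"id": "A1", "ssn": "000956723", "name": "Smith, William", "dob": "1973/01/02", "sex": "Male", "zip": "94701"},
    {"id": "A2", "ssn": "000956723", "name": "Smith, William", "dob": "1973/01/02", "sex": "Male", "zip": "94703"},
    {"id": "A3", "ssn": "000005555", "name": "Jones, Robert",  "dob": "1942/08/14", "sex": "Male", "zip": "94701"},
    {"id": "A4", "ssn": "123001234", "name": "Sue, Mary",      "dob": "1972/11/19", "sex": "Female", "zip": "94109"},
]
set_b = [
    {"id": "B1", "ssn": "000005555", "name": "Jones, Bob",  "dob": "1942/08/14", "sex": "", "zip": ""},
    {"id": "B2", "ssn": "",          "name": "Smith, Bill", "dob": "1973/01/02", "sex": "Male", "zip": "94701"},
]

# Declare a link whenever two records share a non-missing SSN.
links = [
    (a["id"], b["id"])
    for a in set_a
    for b in set_b
    if a["ssn"] and a["ssn"] == b["ssn"]
]
print(links)  # [('A3', 'B1')] -- B2 is never linked because its SSN is missing
```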

Handling exceptions such as missing identifiers involves the creation of additional record linkage rules. One such rule in the case of missing SSN might be to compare name, date of birth, sex, and ZIP code with other records in hopes of finding a match. In the above example, this rule would still not match A1/A2 with B2 because the names are still slightly different: standardization put the names into the proper (Surname, Given name) format but could not discern "Bill" as a nickname for "William". Running names through a phonetic algorithm such as Soundex, NYSIIS, or Metaphone can help to resolve these types of problems (though it may still stumble over surname changes as the result of marriage or divorce), but then B2 would be matched only with A1 since the ZIP code in A2 is different. Thus, another rule would need to be created to determine whether differences in particular identifiers are acceptable (such as ZIP code) and which are not (such as date of birth).
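A hedged sketch of such a fallback rule is shown below. It assumes that name standardization (including nickname resolution, as in the earlier sketch) has already made "Smith, Bill" and "Smith, William" comparable; the stripped-down normalize_name() helper here exists only so the example runs on its own.

```python
# A sketch of a fallback rule for records missing an SSN: require agreement
# on standardized name, date of birth, sex, and ZIP code.
NICKNAMES = {"bill": "william", "bob": "robert"}

def normalize_name(name: str) -> str:
    # Assumes input is already in 'Surname, Given ...' form.
    surname, given = [p.strip().lower() for p in name.split(",", 1)]
    first = given.replace(".", "").split()[0]
    return f"{surname}, {NICKNAMES.get(first, first)}"

def fallback_match(a: dict, b: dict) -> bool:
    return (normalize_name(a["name"]) == normalize_name(b["name"])
            and a["dob"] == b["dob"]
            and a["sex"] == b["sex"]
            and a["zip"] == b["zip"])

a1 = {"name": "Smith, William", "dob": "1973/01/02", "sex": "Male", "zip": "94701"}
a2 = {"name": "Smith, William", "dob": "1973/01/02", "sex": "Male", "zip": "94703"}
b2 = {"name": "Smith, Bill",    "dob": "1973/01/02", "sex": "Male", "zip": "94701"}

print(fallback_match(a1, b2), fallback_match(a2, b2))  # True False (A2's ZIP differs)
```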

As this example demonstrates, even a small decrease in data quality or small increase in the complexity of the data can result in a very large increase in the number of rules necessary to link records properly. Eventually, these linkage rules will become too numerous and interrelated to build without the aid of specialized software tools. In addition, linkage rules are often specific to the nature of the data sets they are designed to link together. One study was able to link the Social Security Death Master File with two hospital registries from the Midwestern United States using SSN, NYSIIS-encoded first name, birth month, and sex, but these rules may not work as well with data sets from other geographic regions or with data collected on younger populations.[10] Thus, continuous maintenance and testing of these rules is necessary to ensure they continue to function as expected as new data enter the system and need to be linked. New data that exhibit different characteristics than initially expected could require a complete rebuilding of the record linkage rule set, which could be a very time-consuming and expensive endeavor.

Probabilistic record linkage[edit]

Probabilistic record linkage, sometimes called fuzzy matching (also probabilistic merging or fuzzy merging in the context of merging of databases), takes a different approach to the record linkage problem by taking into account a wider range of potential identifiers, computing weights for each identifier based on its estimated ability to correctly identify a match or a non-match, and using these weights to calculate the probability that two given records refer to the same entity. Record pairs with probabilities above a certain threshold are considered to be matches, while pairs with probabilities below another threshold are considered to be non-matches; pairs that fall between these two thresholds are considered to be "possible matches" and can be dealt with accordingly (e.g., human reviewed, linked, or not linked, depending on the requirements). Whereas deterministic record linkage requires a series of potentially complex rules to be programmed ahead of time, probabilistic record linkage methods can be "trained" to perform well with much less human intervention.

Many probabilistic record linkage algorithms assign match/non-match weights to identifiers by means of two probabilities called u and m. The u probability is the probability that an identifier in two non-matching records will agree purely by chance. For example, the u probability for birth month (where there are twelve values that are approximately uniformly distributed) is 1/12 ≈ 0.083; identifiers with values that are not uniformly distributed will have different u probabilities for different values (possibly including missing values). The m probability is the probability that an identifier in matching pairs will agree (or be sufficiently similar, such as strings with high Jaro-Winkler similarity or low Levenshtein distance). This value would be 1.0 in the case of perfect data, but given that this is rarely (if ever) true, it can instead be estimated. This estimation may be done based on prior knowledge of the data sets, by manually identifying a large number of matching and non-matching pairs to "train" the probabilistic record linkage algorithm, or by iteratively running the algorithm to obtain closer estimations of the m probability. If a value of 0.95 were to be estimated for the m probability, then the match/non-match weights for the birth month identifier would be:

Outcome     Proportion of links   Proportion of non-links   Frequency ratio        Weight
Match       m = 0.95              u ≈ 0.083                 m/u ≈ 11.4             ln(m/u)/ln(2) ≈ 3.51
Non-match   1−m = 0.05            1−u ≈ 0.917               (1−m)/(1−u) ≈ 0.0545   ln((1−m)/(1−u))/ln(2) ≈ −4.20
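As a quick check of the table, the short sketch below computes the same agreement and disagreement weights (base-2 logarithms of the frequency ratios) for the birth-month identifier.

```python
# Agreement/disagreement weights for one identifier (birth month),
# using the m and u values from the table above.
import math

m, u = 0.95, 1 / 12  # estimated m probability; u for a uniform 12-value field

agreement_weight = math.log2(m / u)                  # ≈ 3.51
disagreement_weight = math.log2((1 - m) / (1 - u))   # ≈ -4.20
print(round(agreement_weight, 2), round(disagreement_weight, 2))
```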

The same calculations would be done for all other identifiers under consideration to find their match/non-match weights. Then, every identifier of one record would be compared with the corresponding identifier of another record to compute the total weight of the pair: the match weight is added to the running total whenever a pair of identifiers agree, while the non-match weight is added (i.e. the running total decreases) whenever the pair of identifiers disagrees. The resulting total weight is then compared to the aforementioned thresholds to determine whether the pair should be linked, non-linked, or set aside for special consideration (e.g. manual validation).[11]
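A hedged sketch of this scoring step follows. The field names, the per-field weights other than the birth-month values derived above, and the thresholds are illustrative assumptions rather than estimates from real data.

```python
# Score one candidate pair: add the agreement weight when a field agrees,
# add the (negative) disagreement weight when it does not, then classify
# the total against illustrative thresholds.
FIELD_WEIGHTS = {
    # field: (agreement_weight, disagreement_weight) -- assumed values
    "name":        (5.0, -3.0),
    "birth_month": (3.51, -4.20),
    "sex":         (1.0, -2.0),
    "zip":         (4.5, -4.0),
}
UPPER, LOWER = 6.0, 0.0  # link above UPPER, non-link below LOWER

def score_pair(rec_a: dict, rec_b: dict) -> float:
    total = 0.0
    for field, (w_agree, w_disagree) in FIELD_WEIGHTS.items():
        total += w_agree if rec_a.get(field) == rec_b.get(field) else w_disagree
    return total

def classify(total: float) -> str:
    if total > UPPER:
        return "link"
    if total < LOWER:
        return "non-link"
    return "possible match (manual review)"
```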

Determining where to set the match/non-match thresholds is a balancing act between obtaining an acceptable sensitivity (or recall, the proportion of truly matching records that are linked by the algorithm) and positive predictive value (or precision, the proportion of records linked by the algorithm that truly do match). Various manual and automated methods are available to predict the best thresholds, and some record linkage software packages have built-in tools to help the user find the most acceptable values. Because this can be a very computationally demanding task, particularly for large data sets, a technique known as blocking is often used to improve efficiency. Blocking attempts to restrict comparisons to just those records for which one or more particularly discriminating identifiers agree, which has the effect of increasing the positive predictive value (precision) at the expense of sensitivity (recall).[11] For example, blocking based on a phonetically coded surname and ZIP code would reduce the total number of comparisons required and would improve the chances that linked records would be correct (since two identifiers already agree), but would potentially miss records referring to the same person whose surname or ZIP code was different (due to marriage or relocation, for instance). Blocking based on birth month, a more stable identifier that would be expected to change only in the case of data error, would provide a more modest gain in positive predictive value and loss in sensitivity, but would create only twelve distinct groups which, for extremely large data sets, may not provide much net improvement in computation speed. Thus, robust record linkage systems often use multiple blocking passes to group data in various ways in order to come up with groups of records that should be compared to each other.
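The sketch below illustrates blocking on a phonetically coded surname plus ZIP code, one of the strategies mentioned above. The phonetic() helper is a crude stand-in for a real Soundex or NYSIIS encoder and is defined here only so the example is self-contained.

```python
# Blocking: only generate candidate pairs that share a blocking key.
from collections import defaultdict
from itertools import product

def phonetic(word: str) -> str:
    # Crude stand-in for a real phonetic encoder: first letter plus
    # consonant skeleton, truncated to four characters.
    skeleton = "".join(ch for ch in word.lower()[1:] if ch not in "aeiouhwy")
    return (word[0].upper() + skeleton)[:4]

def blocking_key(record: dict) -> tuple:
    surname = record["name"].split(",")[0].strip()
    return (phonetic(surname), record["zip"])

def candidate_pairs(set_a: list, set_b: list):
    blocks = defaultdict(lambda: ([], []))
    for rec in set_a:
        blocks[blocking_key(rec)][0].append(rec)
    for rec in set_b:
        blocks[blocking_key(rec)][1].append(rec)
    for left, right in blocks.values():
        yield from product(left, right)  # compare only within each block

# e.g. list(candidate_pairs(set_a, set_b)) with the example records above
```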

Machine learning[edit]

In recent years, a variety of machine learning techniques have been used in record linkage. It has been recognized[12] that probabilistic record linkage is equivalent to the "Naive Bayes" algorithm in the field of machine learning,[13] and suffers from the same assumption of the independence of its features (an assumption that is typically not true).[14][15] Higher accuracy can often be achieved by using various other machine learning techniques, including a single-layer perceptron.[12]
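As a hedged illustration of this supervised view, the sketch below trains a single-layer perceptron (via scikit-learn) on hand-labelled comparison vectors; the vectors and labels are fabricated purely for illustration.

```python
# Record linkage as supervised classification: each candidate pair becomes
# a vector of field-comparison scores, and a perceptron is trained on
# labelled match/non-match pairs.
from sklearn.linear_model import Perceptron

# Each row: [name_similarity, dob_agrees, sex_agrees, zip_agrees]
X_train = [
    [0.95, 1, 1, 1],   # labelled match
    [0.90, 1, 1, 0],   # labelled match
    [0.30, 0, 1, 1],   # labelled non-match
    [0.10, 0, 0, 0],   # labelled non-match
]
y_train = [1, 1, 0, 0]

model = Perceptron(max_iter=1000, tol=1e-3, random_state=0).fit(X_train, y_train)
print(model.predict([[0.92, 1, 1, 1]]))  # predicted label for a new pair
```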

Mathematical model[edit]

In an application with two files, A and B, denote the records by \alpha(a) in file A and \beta(b) in file B, and assign K characteristics to each record. The set of record pairs that represent identical entities is defined by

 M = \left\{ (a,b) : a = b,\; a \in A,\; b \in B \right\}

and the complement of M, namely the set U of pairs representing different entities, is defined as

 U = \left\{ (a,b) : a \neq b,\; a \in A,\; b \in B \right\}.

A comparison vector \gamma is defined that contains the coded agreements and disagreements on each characteristic:

 \gamma\left[ \alpha(a), \beta(b) \right] = \left\{ \gamma^{1}\left[ \alpha(a), \beta(b) \right], \ldots, \gamma^{K}\left[ \alpha(a), \beta(b) \right] \right\},

where K indexes the characteristics (sex, age, marital status, etc.) in the files. The conditional probabilities of observing a specific vector \gamma given (a,b) \in M and given (a,b) \in U are defined as

 m(\gamma) = P\left\{ \gamma\left[ \alpha(a), \beta(b) \right] \mid (a,b) \in M \right\} = \sum_{(a,b) \in M} P\left\{ \gamma\left[ \alpha(a), \beta(b) \right] \right\} \cdot P\left[ (a,b) \mid M \right]

and

 u(\gamma) = P\left\{ \gamma\left[ \alpha(a), \beta(b) \right] \mid (a,b) \in U \right\} = \sum_{(a,b) \in U} P\left\{ \gamma\left[ \alpha(a), \beta(b) \right] \right\} \cdot P\left[ (a,b) \mid U \right],

respectively.[3]
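These two quantities drive the Fellegi-Sunter decision rule: candidate pairs are linked, set aside for review, or rejected according to the likelihood ratio m(\gamma)/u(\gamma). The block below is a compact sketch of that rule; the threshold symbols T_{\lambda} and T_{\mu} are notational choices for this illustration.

```latex
% Sketch of the Fellegi-Sunter decision rule built from the quantities above;
% the threshold names T_\lambda and T_\mu are notational choices here.
R(\gamma) = \frac{m(\gamma)}{u(\gamma)}, \qquad
\text{decision}(\gamma) =
\begin{cases}
  \text{link}                   & R(\gamma) \ge T_{\mu} \\
  \text{possible link (review)} & T_{\lambda} < R(\gamma) < T_{\mu} \\
  \text{non-link}               & R(\gamma) \le T_{\lambda}
\end{cases}
```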

Applications[edit]

Master data management[edit]

Most Master data management (MDM) products use a record linkage process to identify records from different sources representing the same real-world entity. This linkage is used to create a "golden master record" containing the cleaned, reconciled data about the entity. The techniques used in MDM are the same as for record linkage generally. MDM expands this matching not only to create a "golden master record" but also to infer relationships (e.g., if two people have the same or a similar surname and the same or a similar address, this might imply that they share a household).

Data warehousing and business intelligence[edit]

Record linkage plays a key role in data warehousing and business intelligence. Data warehouses serve to combine data from many different operational source systems into one logical data model, which can then be fed into a business intelligence system for reporting and analytics. Each operational source system may have its own method of identifying the same entities used in the logical data model, so record linkage between the different sources becomes necessary to ensure that the information about a particular entity in one source system can be seamlessly compared with information about the same entity from another source system. Data standardization and subsequent record linkage often occur in the "transform" portion of the extract, transform, load (ETL) process.

Historical research[edit]

Record linkage is important to social history research since most data sets, such as census records and parish registers, were recorded long before the invention of national identification numbers. When old sources are digitized, linking of data sets is a prerequisite for longitudinal study. This process is often further complicated by lack of standard spelling of names, family names that change according to place of dwelling, changing of administrative boundaries, and problems of checking the data against other sources. Record linkage was among the most prominent themes in the History and computing field in the 1980s, but has since been subject to less attention in research.[citation needed]

Medical practice and research[edit]

Record linkage is an important tool in creating data required for examining the health of the public and of the health care system itself. It can be used to improve data holdings, data collection, quality assessment, and the dissemination of information. Data sources can be examined to eliminate duplicate records, to identify under-reporting and missing cases (e.g., census population counts), to create person-oriented health statistics, and to generate disease registries and health surveillance systems. Some cancer registries link various data sources (e.g., hospital admissions, pathology and clinical reports, and death registrations) to generate their registries. Record linkage is also used to create health indicators. For example, fetal and infant mortality is a general indicator of a country's socioeconomic development, public health, and maternal and child services. If infant death records are matched to birth records, it is possible to use birth variables, such as birth weight and gestational age, along with mortality data, such as cause of death, in analyzing the data. Linkages can help in follow-up studies of cohorts or other groups to determine factors such as vital status, residential status, or health outcomes. Tracing is often needed for follow-up of industrial cohorts, clinical trials, and longitudinal surveys to obtain the cause of death and/or cancer. An example of a successful and long-standing record linkage system allowing for population-based medical research is the Rochester Epidemiology Project based in Rochester, Minnesota.[16]

Criticism of existing software implementations[edit]

The main reasons cited for criticism of existing software implementations are:

  • Project costs: costs typically in the hundreds of thousands of dollars
  • Time: lack of enough time to deal with large-scale data-cleansing software
  • Security: concerns over sharing information, giving an application access across systems, and effects on legacy systems
