Methods for the de-identification of electronic health records for genomic research

Abstract

Electronic health records are increasingly being linked to DNA repositories and used as a source of clinical information for genomic research. Privacy legislation in many jurisdictions, and most research ethics boards, require either that personal health information is de-identified or that patient consent or authorization is sought before the data are disclosed for secondary purposes. Here, I discuss how de-identification has been applied in current genomic research projects. Recent metrics and methods that can be used to ensure that the risk of re-identification is low and that disclosures are compliant with privacy legislation and regulations (such as the Health Insurance Portability and Accountability Act Privacy Rule) are reviewed. Although these methods can protect against the known approaches for re-identification, residual risks and specific challenges for genomic research are also discussed.

Electronic health records and the need for de-identification

Electronic health records (EHRs) are increasingly being used as a source of clinically relevant patient data for research [1, 2], including genome-wide association studies [3]. Often, research ethics boards will not allow data custodians to disclose identifiable health information without patient consent. However, obtaining consent can be challenging and there have been major concerns about the negative impact of obtaining patient consent on the ability to conduct research [4]. Such concerns are reinforced by the compelling evidence that requiring explicit consent for participation in different forms of health research can have a negative impact on the process and outcomes of the research itself [5–7]. For example, recruitment rates decline significantly when individuals are asked to consent; those who consent tend to be different from those who decline consent on a number of important demographic and socio-economic variables, hence potentially introducing bias in the results [8]; and consent requirements increase the cost of, and time for, conducting the research. Furthermore, often it is not practical to obtain individual patient consent because of the very large populations involved, the lack of a relationship between the researchers and the patients, and the time elapsed between data collection and the research study.

One approach to facilitate the disclosure of information for the purposes of genomic research, and to alleviate some of the problems documented above, is to de-identify data before disclosure to researchers or at the earliest opportunity afterwards [9, 10]. Many research ethics boards will waive the consent requirement if the first 'use' of the data is to de-identify it [11, 12].

The i2b2 project (Informatics for Integrating Biology and the Bedside) has developed tools for clinical investigators to integrate medical records and clinical research. A query tool in i2b2 allows the computation of cohort sizes in a privacy-protective way, and a data export tool allows the extraction of de-identified individual-level data [13, 14]. Also, the eMERGE network, which consists of five sites in the United States, is an example of integrated EHR and genetic databases [3]. The BioVU system at Vanderbilt University, a member of the eMERGE network, links a biobank of discarded blood samples with EHR data, and information is disclosed for research purposes after de-identification [3, 15].

Here, I provide a description and critical analysis of de-identification methods that have been used in genomic research projects such as i2b2 and eMERGE. This is augmented with an overview of contemporary standards, best practices and recent de-identification methodologies.

De-identification: definitions and concepts

A database integrating clinical information from an EHR with a DNA repository is referred to here as a translational research information system (TRIS) for brevity [16]. It is assumed that the data custodian is extracting a particular set of variables on patients from a TRIS and disclosing that to a data recipient for research purposes, and that the data custodian will be performing the de-identification before the disclosure or at the earliest opportunity after disclosure. The concern for the data custodian is the risk that an adversary will try to re-identify the disclosed data.

Identity versus attribute disclosure

There are two kinds of re-identification that are of concern. The first is when an adversary can assign an identity to a record in the data disclosed from the TRIS. For example, the adversary would be able to determine that record number 7 belongs to a patient named 'Alice Smith'. This is called identity disclosure. The second type of disclosure is when an adversary learns something new about a patient in the disclosed data without knowing which specific record belongs to that patient. For example, if all 20-year-old female patients in the disclosed data who live in Ontario had a specific diagnosis, then an adversary does not need to know which record belongs to Alice Smith; if she is 20 years old and lives in Ontario then the adversary will discover something new about her: the diagnosis. This is called attribute disclosure.

All the publicly known examples of re-identification of personal information have involved identity disclosure [17–26]. Therefore, the focus is on identity disclosure because it is the type that is known to have occurred in practice.

Types of variable

The data in an EHR will include clinical information, and possibly socio-economic status information that may be collected from patients or linked in from external sources (such as the census). EHR information can be divided into four categories. The distinctions among these categories are important because they have an impact on the probability of re-identification and on suitable de-identification methods.

Directly identifying information

One or more direct identifiers can be used to uniquely identify an individual, either by themselves or in combination with other readily available information. For example, there are more than 200 people named 'John Smith' in Ontario, and therefore the name by itself would not be directly identifying, but in combination with the address it would be directly identifying information. Examples of directly identifying information include email address, health insurance card number, credit card number, and social insurance number.

Indirectly identifying relational information

Relational information can be used to probabilistically identify an individual. General examples include sex, geographic indicators (such as postal codes, census geography, or information about proximity to known or unique landmarks), and event dates (such as birth, admission, discharge, procedure, death, specimen collection, or visit/encounter).

Indirectly identifying transactional information

This is similar to relational information in that it can be used to probabilistically identify an individual. However, transactional information may have many instances per individual and per visit. For example, diagnosis codes and drugs dispensed would be considered transactional information.

Sensitive information

This is information that is rarely useful for re-identification purposes - for example, laboratory results.

For any piece of information, its classification into one of the above categories will be context dependent.

Relational and transactional information are referred to as quasi-identifiers. The quasi-identifiers represent the background knowledge about individuals in the TRIS that can be used by an adversary for re-identification. Without this background knowledge identity disclosure cannot occur. For example, if an adversary knows an individual's date of birth and postal code, then s/he can re-identify matching records in the disclosed data. If the adversary does not have such background knowledge about a person, then a date of birth and postal code in a database would not reveal the person's identity. Furthermore, because physical attributes and certain diagnoses can be inferred from DNA analysis (for example, gender, blood type, approximate skin pigmentation, a diagnosis of cystic fibrosis or Huntington's chorea), the DNA sequence data of patients known to an adversary can be used for phenotype prediction and subsequent re-identification of clinical records [27–29]. If an adversary has an identified DNA sequence of a target individual, this can be used to match and re-identify a sequence in the repository. Without an identified DNA sequence or reference sample as background knowledge, such an approach for re-identification would not work [16]. The manner and ease with which an adversary can obtain such background knowledge will determine the plausible methods of re-identification for a particular dataset.
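
To make the role of background knowledge concrete, the following minimal sketch (Python; the field names and records are hypothetical illustrations) shows how an adversary holding an identified list with dates of birth and postal codes could link those quasi-identifiers to records disclosed from a TRIS. Without such a reference list, no match can be made.

# Minimal sketch of a linkage (re-identification) attack on quasi-identifiers.
# All field names and records are hypothetical, not from any real dataset.

disclosed = [  # de-identified records released from the TRIS
    {"record_id": 7, "dob": "1991-03-02", "postal": "K1A 0B1", "diagnosis": "E10"},
    {"record_id": 8, "dob": "1984-07-19", "postal": "M5V 2T6", "diagnosis": "J45"},
]

background = [  # adversary's identified background knowledge (e.g. a public registry)
    {"name": "Alice Smith", "dob": "1991-03-02", "postal": "K1A 0B1"},
]

def link(disclosed, background):
    """Match disclosed records to identities on the quasi-identifiers dob and postal."""
    matches = []
    for person in background:
        candidates = [r for r in disclosed
                      if r["dob"] == person["dob"] and r["postal"] == person["postal"]]
        if len(candidates) == 1:  # a unique match is a probable re-identification
            matches.append((person["name"], candidates[0]["record_id"]))
    return matches

print(link(disclosed, background))  # [('Alice Smith', 7)]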

Text versus structured data

Another way to consider the data in a TRIS is in terms of representation: structured versus free-form text. Some data elements in EHRs are in a structured format, which means that they have a pre-defined data type and semantics (for example, a date of birth or a postal code). There will also be plenty of free-form text in the form of, for example, discharge summaries, pathology reports, and consultation letters. Any realistic de-identification process has to deal with both types of data. The BioVU and i2b2 projects have developed and adapted tools for the de-identification of free-form text [15, 30].

De-identification standards

In the US, the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule provides three standards for the disclosure of health information without seeking patient authorization: the Safe Harbor standard (henceforth Safe Harbor), the Limited Dataset, and the statistical standard. Safe Harbor is a precise standard for the de-identification of personal health information when disclosed for secondary purposes. It stipulates the removal of 18 variables from a dataset as summarized in Box 1. The Limited Dataset stipulates the removal of only 16 variables, but also requires that the data recipient sign a data sharing agreement with the data custodian. The statistical standard requires an expert to certify that 'the risk is very small that the information could be used, alone or in combination with other reasonably available information, by an anticipated recipient to identify an individual who is a subject of the information'. Out of these three standards, the certainty and simplicity of Safe Harbor has made it attractive for data custodians.

Safe Harbor is also relevant beyond the US. For example, health research organizations and commercial organizations in Canada choose to use the Safe Harbor criteria to de-identify datasets [31, 32], Canadian sites conducting research funded by US agencies need to comply with HIPAA [33], and international guidelines for the public disclosure of clinical trials data have relied on Safe Harbor definitions [34].

However, Safe Harbor has a number of important disadvantages. There is evidence that it can result in the excessive removal of information useful for research [35]. At the same time it does not provide sufficient protection for many types of data, as illustrated below.

First, it does not explicitly consider genetic data as part of the 18 fields to remove or generalize. There is evidence that a sequence of 30 to 80 independent single nucleotide polymorphisms (SNPs) could uniquely identify a single person [36]. There is also a risk of re-identification from pooled data: using summary statistics over several thousand SNPs, namely the proportion of individuals in the case or control group with each SNP value, it is possible to determine whether a target individual is in the pool [37, 38].

Second, Safe Harbor does not consider longitudinal data. Longitudinal data contain information about multiple visits or episodes of care. For example, consider the state inpatient database for California for the year 2007, which contains information on 2,098,578 patients. In a Safe Harbor compliant dataset consisting only of the quasi-identifiers gender, year of birth, and year of admission, fewer than 0.03% of the records have a high probability of re-identification, where a high probability is defined as over 0.2. However, when two more longitudinal variables are added, length of stay and time since last visit for each visit, 16.57% of the records have a high probability of re-identification (unpublished observations). Thus, the second dataset also meets the Safe Harbor definition but has a markedly higher percentage of the population at risk of re-identification. Therefore, Safe Harbor does not ensure that the data are adequately de-identified. Longitudinal information, such as length of stay and time since last visit, may be known by neighbors, co-workers, relatives, and ex-spouses, and even by the public for famous people.
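
To illustrate how such percentages can be computed, the sketch below (Python, with hypothetical records) groups records by their quasi-identifier values, takes 1/(equivalence class size) as the probability of re-identification, and reports the proportion of records above the 0.2 threshold. This is a simplified, prosecutor-style risk calculation for illustration, not the exact method used for the California analysis.

from collections import Counter

def fraction_high_risk(records, quasi_identifiers, threshold=0.2):
    """Fraction of records whose re-identification probability, taken as
    1 / (size of the record's equivalence class on the quasi-identifiers),
    exceeds the threshold. A simplified illustration of the risk measure."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    class_sizes = Counter(keys)
    at_risk = sum(1 for k in keys if 1.0 / class_sizes[k] > threshold)
    return at_risk / len(records)

# Hypothetical example: adding longitudinal variables shrinks equivalence classes.
recs = [{"sex": "F", "yob": 1975, "yoa": 2007, "los": los, "gap": gap}
        for los, gap in [(3, 10)] * 5 + [(7, 30)]]
print(fraction_high_risk(recs, ["sex", "yob", "yoa"]))                 # 0.0
print(fraction_high_risk(recs, ["sex", "yob", "yoa", "los", "gap"]))   # about 0.17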

Third, Safe Harbor does not deal with transactional data. For example, it has been shown that a series of diagnosis codes (International Statistical Classification of Diseases and Related Health Problems) for patients makes a large percentage of individuals uniquely identifiable [39]. An adversary who is employed by the healthcare provider could have access to the diagnosis codes and patient identity, which can be used to re-identify records disclosed from the TRIS.

Fourth, Safe Harbor does not take into account the sampling fraction, even though it is well established that sub-sampling can reduce the probability of re-identification [40–46]. For example, consider a cohort of 63,796 births in Ontario from 2004 to 2009 and three quasi-identifiers: maternal postal code, date of birth of the baby, and mother's age. Approximately 96% of the records were unique on these three quasi-identifiers, making them highly identifiable. For research purposes, this dataset was de-identified to ensure that 5% or less of the records could be correctly re-identified by reducing the precision of the postal code to the first three characters, and the date of birth to year of birth. However, a cohort of 127,592 births de-identified in exactly the same way could have 10% of its records correctly re-identified. The variables were exactly the same in the two cohorts but, because the sampling fraction differs, the percentage of records that can be correctly re-identified doubles, from 5% to 10%.
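
One common way the disclosure control literature cited above formalizes this effect (a sketch, not the specific model used for the birth registry analysis): the probability of re-identifying record $i$ depends on the size of its equivalence class in the underlying population, not only in the released sample,

$$\Pr(\text{re-identification of record } i) = \frac{1}{F_{j(i)}},$$

where $F_{j(i)}$ is the number of people in the population who share record $i$'s quasi-identifier values. A record that is unique in the released sample is re-identifiable with certainty only if it is also unique in the population, which becomes more likely as the sampling fraction $\pi = n/N$ grows; the larger cohort therefore carries a higher risk even though the variables and generalizations are identical.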

Finally, other pieces of information that can re-identify individuals in free-form text and notes are not accounted for in Safe Harbor. The following example illustrates how I used this information to re-identify a patient. In a series of medical records that have been de-identified using the Safe Harbor standard, there was a record about a patient with a specific injury. The notes mentioned the profession of the patient's father and hinted at the location of his work. This particular profession lists its members publicly. It was therefore possible to identify all individuals within that profession in that region. Searches through social networking sites allowed the identification of a matching patient (having the same surname) with details of the specific injury during that specific period. The key pieces of information that made re-identification possible were the father's profession and region of work, and these are not part of the Safe Harbor items.

Therefore, universal de-identification heuristics that proscribe certain fields or prescribe specific generalizations of fields will not provide adequate protection in all situations and must be used with caution. Both the BioVU [15] and the i2b2 project [13] de-identify individual-level data according to the Safe Harbor standard, but also require a data sharing agreement with the data recipients as required by the Limited Dataset provision, and some sites implementing the i2b2 software use the Limited Dataset provision for de-identification [14].

Although the Limited Dataset provision provides a mechanism to disclose information without consent, it does not produce data that are de-identified. The challenge for data custodians is that the notices to patients for some repositories state that the data will be de-identified, so there is an obligation to perform de-identification before disclosure [15, 47]. Where patients are approached in advance for consent to include their data in the repository, this is predicated on the understanding that any disclosures will be of de-identified data [3]. Under these circumstances, a more stringent standard than the Limited Dataset is required. Within the framework of HIPAA, one can then use the statistical standard for de-identification. This is consistent with privacy legislation and regulations in other jurisdictions, which tend not to be prescriptive and allow a more context-dependent interpretation of identifiability [26].

Managing re-identification risk

The statistical standard in the HIPAA Privacy Rule provides a means to disclose more detailed information for research purposes and still manage overall re-identification risk. Statistical methods can provide quantitative guarantees to patients and research ethics boards that the probability of re-identification is low.

A risk-based approach has been in use for a few years for the disclosure of large clinical and administrative datasets [48], and can be similarly used for the disclosure of information from a TRIS. The basic principles of a risk-based approach for de-identification are that (a) a re-identification probability threshold should be set and (b) the data should be de-identified until the actual re-identification probability is below that threshold.
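
A minimal sketch of this iterate-until-acceptable process, assuming a generic risk metric and a sequence of increasingly aggressive de-identification schemes (all names below are hypothetical placeholders rather than a published algorithm):

def de_identify(records, quasi_identifiers, schemes, threshold, measure_risk, apply_scheme):
    """Apply progressively stronger de-identification schemes until the measured
    re-identification probability falls at or below the threshold.
    measure_risk and apply_scheme are placeholders for the metrics and
    transformations discussed in this review."""
    for scheme in schemes:
        candidate = apply_scheme(records, scheme)
        if measure_risk(candidate, quasi_identifiers) <= threshold:
            return candidate, scheme
    raise ValueError("no scheme met the threshold; consider suppression or a higher threshold")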

Because a threshold can only be applied if the actual re-identification probability can be measured, the supplementary material (Additional file 1) provides a detailed review of re-identification probability metrics for evaluating identity disclosure. Below is a description of how to set a threshold and an overview of de-identification methods that can be used.

Setting a threshold

There are two general approaches to setting a threshold: (a) based on precedent and (b) based on an assessment of the risks from the disclosure of data.

Precedents for thresholds

Historically, data custodians have used the 'cell size of five' rule to de-identify data [49–58]. In the context of a probability of re-identifying an individual, this is equivalent to a probability of 0.2. Some custodians use a cell size of 3 [59–62], which is equivalent to a probability of 0.33 of re-identifying a single individual. Such thresholds are suitable when the data recipient is trusted.
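
The correspondence between a minimum cell size and a probability threshold is direct: if every combination of quasi-identifier values must appear in at least $k$ records, then the probability of correctly re-identifying any single record is at most $1/k$:

$$k = 5 \;\Rightarrow\; \Pr(\text{re-identification}) \le \tfrac{1}{5} = 0.2, \qquad k = 3 \;\Rightarrow\; \Pr(\text{re-identification}) \le \tfrac{1}{3} \approx 0.33.$$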

It has been estimated that the Safe Harbor standard results in 0.04% of the population being at high risk for re-identification [63, 64]. Another re-identification attack study evaluated the proportion of Safe Harbor compliant medical records that can be re-identified and found that only 0.01% can be correctly re-identified [65]. In practice, setting such low thresholds can also result in significant distortion to the data [35], and is arguably more suitable when data are being publicly disclosed.

Risk-based thresholds

With this approach, the re-identification probability threshold is determined based on factors characterizing the data recipient and the data [48]. These factors have been suggested and have been in use informally by data custodians to inform their disclosure decisions for at least the last decade and a half [46, 66], and they cover three dimensions [67], as follows.

First, mitigating controls: this is the set of security and privacy practices that the data recipient has in place. The practices used by custodians of large datasets and recommended by funding agencies and research ethics boards for managing sensitive health information have been reviewed elsewhere [68].

Second, invasion of privacy: this evaluates the extent to which a particular disclosure would be an invasion of privacy to the patients (a checklist is available in [67]). There are three considerations: (i) the sensitivity of the data - the greater the sensitivity of the data, the greater the invasion of privacy; (ii) the potential injury to patients from an inappropriate disclosure - the greater the potential for injury, the greater the invasion of privacy; and (iii) the appropriateness of consent for disclosing the data - the less appropriate the consent, the greater the potential invasion of privacy.

Third, motives and capacity: this considers the motives and the capacity of the data recipient to re-identify the data, considering issues such as conflicts of interest, the potential for financial gain from a re-identification, and whether the data recipient has the skills and the necessary resources to re-identify the data (a checklist is available in [67]).

For example, if the mitigating controls are low, which means that the data recipient has poor security and privacy practices, then the re-identification threshold should be set at a lower level. This will result in more de-identification being applied. However, if the data recipient has very good security and privacy practices in place, then the threshold can be set higher.
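
Purely as an illustration, the sketch below maps qualitative ratings of the three dimensions to a threshold; the specific numeric thresholds and the scoring scheme are hypothetical and are not taken from the cited checklists.

def choose_threshold(mitigating_controls, invasion_of_privacy, motives_and_capacity):
    """Pick a re-identification probability threshold from qualitative ratings
    ('low', 'medium', 'high'). The mapping below is a hypothetical illustration,
    not a published standard."""
    score = 0
    score += {"high": 2, "medium": 1, "low": 0}[mitigating_controls]   # better controls -> higher threshold
    score += {"low": 2, "medium": 1, "high": 0}[invasion_of_privacy]   # less invasive -> higher threshold
    score += {"low": 2, "medium": 1, "high": 0}[motives_and_capacity]  # less motive/capacity -> higher threshold
    return {0: 0.05, 1: 0.05, 2: 0.1, 3: 0.1, 4: 0.2, 5: 0.2, 6: 0.33}[score]

print(choose_threshold("high", "low", "low"))   # 0.33: trusted recipient, strong controls
print(choose_threshold("low", "high", "high"))  # 0.05: weak controls, sensitive data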

De-identification methods

The i2b2 project tools allow investigators to query for patients and controls that meet specific inclusion/exclusion criteria [13, 69]. This allows the investigator to determine the size of cohorts for a study. The queries return counts of unique patients that match the criteria. If few patients match the criteria, however, there is a high probability of re-identification. To protect against such identity disclosure, the query engine performs several functions. First, random noise from a Gaussian distribution is added to the returned counts, and the standard deviation of the distribution is increased as the true counts approach zero. Second, an audit trail is maintained, and users who run too many related queries are blocked. Third, limits are imposed on repeated queries so that a user cannot average the perturbed counts to recover the true values.
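
The sketch below (Python, hypothetical parameters) illustrates the general idea of returning perturbed counts, auditing queries, and blocking heavy users; it is a simplified illustration of the behaviour described for the i2b2 query engine, not the project's actual implementation.

import random
from collections import defaultdict

class NoisyCountServer:
    """Illustrative query engine that perturbs cohort counts with Gaussian noise
    and blocks users who run too many queries. All parameters are hypothetical."""

    def __init__(self, base_sd=1.5, small_count=10, extra_sd=3.0, max_queries=20):
        self.base_sd = base_sd
        self.small_count = small_count
        self.extra_sd = extra_sd
        self.max_queries = max_queries
        self.audit = defaultdict(int)

    def count(self, user, true_count):
        self.audit[user] += 1
        if self.audit[user] > self.max_queries:   # audit trail triggers a block
            raise PermissionError("user blocked: too many queries")
        sd = self.base_sd + (self.extra_sd if true_count < self.small_count else 0.0)
        noisy = true_count + random.gauss(0, sd)  # more noise for small counts
        return max(0, round(noisy))

server = NoisyCountServer()
print(server.count("user1", 3))    # small cohorts get more noise
print(server.count("user1", 250))  # large cohorts get relatively little noise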

The disclosure of individual-level data from a TRIS is also important, and various de-identification methods can be applied to such data. The de-identification methods that have the most acceptability among data recipients are masking, generalization, and suppression (see below). Other methods, such as the addition of random noise, distort the individual-level data in ways that are sometimes not intuitive and may result in incorrect results if these distortions affect the multivariate correlational structure in the data. This can be mitigated if the specific type of analysis that will be performed is known in advance and the distortions can account for that. Nevertheless, they tend to have low acceptance among health researchers and analysts [5], and certain types of random noise perturbation can be filtered out to recover the original data [70]; therefore, the effectiveness of noise addition can be questioned. Furthermore, perturbing the DNA sequences themselves may obscure relationships or even lead to false associations [71].

Methods that have been applied in practice are described below and are summarized in Table 1.

Table 1 Summary of de-identification methods for individual-level data

Masking

Masking refers to a set of manipulations of the directly identifying information in the data. In general, direct identifiers are removed/redacted from the dataset, replaced with random values, or replaced with a unique key (also called pseudonymization) [72]. This latter approach is used in the BioVU project to mask the medical record number using a hash function [15].

Patient names are usually redacted or replaced with false names selected randomly from name lists [73]. Numbers, such as medical record numbers, social security numbers, and telephone numbers, are either redacted or replaced with randomly generated but valid numbers [74]. Locations, such as the names of facilities, would also normally be redacted. Such data manipulations are relatively simple to perform for structured data. Text de-identification tools will also do this, such as the tool used in the BioVU project [15].
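
A minimal sketch of keyed-hash pseudonymization of a medical record number using Python's standard hmac module; the secret key, the field names, and the choice of HMAC-SHA256 are illustrative assumptions rather than the exact scheme used by BioVU.

import hmac
import hashlib

SECRET_KEY = b"replace-with-a-custodian-held-secret"  # hypothetical key held by the data custodian

def pseudonymize_mrn(mrn: str) -> str:
    """Replace a medical record number with a stable pseudonym via a keyed hash."""
    return hmac.new(SECRET_KEY, mrn.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"mrn": "12345678", "name": "Alice Smith", "lab_result": 7.2}
masked = {
    "mrn": pseudonymize_mrn(record["mrn"]),  # pseudonymized direct identifier
    "name": None,                            # redacted direct identifier
    "lab_result": record["lab_result"],      # sensitive value retained
}
print(masked)

Using a keyed hash rather than a plain hash of the number makes it harder for a data recipient to recover the original value by exhaustively hashing all plausible medical record numbers.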

Generalization

Generalization reduces the precision in the data. As a simple example of increasing generalization, a patient's date of birth can be generalized to a month and year of birth, to a year of birth, or to a 5 year interval. Allowable generalizations can be specified a priori in the form of a generalization hierarchy, as in the age example above. Generalizations have been defined for SNP sequences [75] and clinical datasets [68]. Instead of hierarchies, generalizations can also be constructed empirically by combining or clustering sequences [76] and transactional data [77] into more general groups.
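
A minimal sketch of such a hierarchy for a date of birth, following the example above (the function and level numbering are illustrative):

from datetime import date

def generalize_dob(dob: date, level: int) -> str:
    """Generalize a date of birth along a simple hierarchy:
    level 0: full date, 1: month and year, 2: year, 3: 5-year interval."""
    if level == 0:
        return dob.isoformat()
    if level == 1:
        return f"{dob.year}-{dob.month:02d}"
    if level == 2:
        return str(dob.year)
    if level == 3:
        start = dob.year - (dob.year % 5)
        return f"{start}-{start + 4}"
    raise ValueError("unknown generalization level")

print([generalize_dob(date(1987, 6, 21), lvl) for lvl in range(4)])
# ['1987-06-21', '1987-06', '1987', '1985-1989']

Analogous hierarchies can be defined for other quasi-identifiers, for example reducing a postal code to its first three characters, as in the birth registry example above.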

When a dataset is generalized the re-identification probability can be measured afterwards. Records that are considered high risk are then flagged for suppression. When there are many variables the number of possible ways that these variables can be generalized can be large. Generalization algorithms are therefore used to find the best method of generalization. The algorithms are often constrained by a value MaxSup, which is the maximum percentage of records in the dataset that can be suppressed. For example, if MaxSup is set to 5%, then the generalization algorithm will ignore all possible generalizations that will result in more than 5% of the records being flagged for suppression. This will also guarantee that no more than 5% of the records will have any suppression in them.

Generalization is an optimization problem whereby the algorithm tries to find the optimal generalization for each of the quasi-identifiers that will ensure that the probability of re-identification is at or below the required threshold, the percentage of records flagged for suppression is below MaxSup, and information loss is minimized.
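
The sketch below (Python, hypothetical helper) shows the core acceptance test inside such an algorithm: for a candidate generalization that has already been applied, measure each record's risk, flag high-risk records for suppression, and accept the candidate only if the flagged fraction does not exceed MaxSup.

from collections import Counter

def evaluate_generalization(records, quasi_identifiers, threshold=0.2, max_sup=0.05):
    """Return (acceptable, flagged_indices) for an already-generalized dataset:
    records in equivalence classes smaller than 1/threshold are flagged for
    suppression, and the candidate is acceptable only if at most max_sup of the
    records are flagged. A simplified illustration of the constraint described above."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    sizes = Counter(keys)
    flagged = [i for i, k in enumerate(keys) if 1.0 / sizes[k] > threshold]
    return len(flagged) / len(records) <= max_sup, flagged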

Information loss is used to measure the amount of distortion to the data. A simple measure of information loss is how high up the hierarchy the chosen generalization level is. However, this creates difficulties of interpretation, and other more theoretically grounded metrics that take into account the difference in the level of precision between the original dataset and the generalized data have been suggested [5].

Suppression

Suppression means the removal of values from the data, and it is usually applied to the specific records flagged during generalization. There are three general approaches to suppression: casewise deletion, quasi-identifier removal, and local cell suppression.

Casewise deletion removes the whole patient or visit record from the dataset. This results in the most distortion to the data because the sensitive variables are also removed, even though they do not contribute to the risk of identity disclosure.

Quasi-identifier removal removes only the quasi-identifier values from the flagged records. This has the advantage that all of the sensitive information is retained.

Local cell suppression is an improvement over quasi-identifier removal in that fewer values are suppressed. Local cell suppression applies an optimization algorithm to find the smallest number of quasi-identifier values to suppress [78]. All of the sensitive variables are retained, and in practice considerably fewer quasi-identifier values are suppressed than with casewise deletion or quasi-identifier removal.
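
A minimal sketch contrasting the first two approaches on the records flagged for suppression (local cell suppression is omitted because it requires the optimization algorithm cited above):

def casewise_deletion(records, flagged):
    """Drop flagged records entirely; their sensitive values are lost as well."""
    flagged = set(flagged)
    return [r for i, r in enumerate(records) if i not in flagged]

def quasi_identifier_removal(records, flagged, quasi_identifiers):
    """Blank only the quasi-identifier values of flagged records,
    keeping all sensitive information."""
    flagged = set(flagged)
    return [{k: (None if i in flagged and k in quasi_identifiers else v) for k, v in r.items()}
            for i, r in enumerate(records)]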

Available tools

Recent reports have provided summaries of free and supported commercial tools for the de-identification of structured clinical and administrative datasets [79, 80]. Also, various text de-identification tools have recently been reviewed [81], although many of these tools are experimental and may not all be readily available. Tools for the de-identification of genomic data are mostly at the research stage and their general availability and level of support is unknown.

Conclusions

Genomic research is increasingly using clinically relevant data from electronic health records. Research ethics boards will often require patient consent before such information is used for secondary purposes, unless the information is de-identified. I have described above the methods and challenges of de-identifying data when they are disclosed for such research.

Combined genomic and clinical data can be quite complex, with free form textual or structured representations, as well as clinical data that are cross-sectional or longitudinal, and relational or transactional. I have described current de-identification practices in two genomic research projects, i2b2 and BioVU, as well as more recent best practices for managing the risk of re-identification.

It is easiest to use prescriptive de-identification heuristics such as those in the HIPAA Privacy Rule Safe Harbor standard. However, such a standard provides insufficient protection for the complex datasets referred to here and may result in the disclosure of data with a high probability of re-identification. Even when such heuristics are augmented with data sharing agreements, those agreements may rest on the inaccurate assumption that the data have a low probability of re-identification. Furthermore, notices to patients and consent forms often state that the data will be de-identified when disclosed. Disclosure practices that are based on the actual measurement of the probability of re-identification allow data custodians to better manage their legal obligations and commitments to patients.

Moving forward, several areas will require further research to minimize the risks of re-identification of data used for genomic research. For example, improved methods for the de-identification of genome sequences and genomic data are needed. The sequence de-identification methods based on generalization that have been proposed thus far will likely result in significant distortions to large datasets [82]. There is also evidence that the simple suppression of the sequence for specific genes can be undone relatively accurately [83]. In addition, the re-identification risks to family members have not been considered here. Although various familial re-identification attacks have been highlighted [84–86], adequate familial de-identification methods have yet to be developed.

Box 1

The 18 elements in the HIPAA Privacy Rule Safe Harbor standard that must be excluded/removed from a dataset

The following identifiers of the individual or of relatives, employers, or household members of the individual, are removed:

1. Names;

2. All geographic subdivisions smaller than a State, including street address, city, county, precinct, zip code, and their equivalent geocodes, except for the initial three digits of a zip code if, according to the current publicly available data from the Bureau of the Census: (a) the geographic unit formed by combining all zip codes with the same three initial digits contains more than 20,000 people; and (b) the initial three digits of a zip code for all such geographic units containing 20,000 or fewer people is changed to 000;

3. All elements of dates (except year) for dates directly related to an individual, including birth date, admission date, discharge date, date of death; and all ages over 89 and all elements of dates (including year) indicative of such age, except that such ages and elements may be aggregated into a single category of age 90 or older;

4. Telephone numbers;

5. Fax numbers;

6. Electronic mail addresses;

7. Social security numbers;

8. Medical record numbers;

9. Health plan beneficiary numbers;

10. Account numbers;

11. Certificate/license numbers;

12. Vehicle identifiers and serial numbers, including license plate numbers;

13. Device identifiers and serial numbers;

14. Web Universal Resource Locators (URLs);

15. Internet Protocol (IP) address numbers;

16. Biometric identifiers, including finger and voice prints;

17. Full face photographic images and any comparable images; and

18. Any other unique identifying number, characteristic, or code.

Adapted from [87]

Abbreviations

EHR: electronic health record; HIPAA: Health Insurance Portability and Accountability Act; SNP: single nucleotide polymorphism; TRIS: translational research information system.

References

1. Prokosch H, Ganslandt T: Perspectives for medical informatics. Reusing the electronic medical record for clinical research. Methods Inf Med. 2009, 48: 38-44.

2. Tannen R, Weiner M, Xie D: Use of primary care electronic medical record database in drug efficacy research on cardiovascular outcomes: comparison of database and randomized controlled trial findings. BMJ. 2009, 338: b81. 10.1136/bmj.b81.

3. McCarty C, Chisholm R, Chute C, Kullo I, Jarvik G, Larson E, Li R, Masys D, Ritchie M, Roden D, Struewing JP, Wolf WA: The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies. BMC Med Genomics. 2011, 4: 13. 10.1186/1755-8794-4-13.

4. Ness R: Influence of the HIPAA privacy rule on health research. JAMA. 2007, 298: 2164-2170. 10.1001/jama.298.18.2164.

5. El Emam K, Dankar F, Issa R, Jonker E, Amyot D, Cogo E, Corriveau J-P, Walker M, Chowdhury S, Vaillancourt R, Roffey T, Bottomley J: A globally optimal k-anonymity method for the de-identification of health data. J Am Med Inform Assoc. 2009, 16: 670-682. 10.1197/jamia.M3144.

6. Kho M, Duffett M, Willison D, Cook D, Brouwers M: Written informed consent and selection bias in observational studies using medical records: systematic review. BMJ. 2009, 338: b866. 10.1136/bmj.b866.

7. El Emam K, Jonker E, Fineberg A: The case for deidentifying personal health information. Social Sciences Research Network. 2011, [http://papers.ssrn.com/abstract=1744038]

8. Harris ALAR, Teschke KE: Personal privacy and public health: potential impacts of privacy legislation on health research in Canada. Can J Public Health. 2008, 99: 293-296.

9. Kosseim P, Brady M: Policy by procrastination: secondary use of electronic health records for health research purposes. McGill J Law Health. 2008, 2: 5-45.

10. Lowrance W: Learning from experience: privacy and the secondary use of data in health research. J Health Serv Res Policy. 2003, 8 (Suppl 1): 2-7.

11. Panel on Research Ethics: Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2010, [http://www.pre.ethics.gc.ca/pdf/eng/tcps2/TCPS_2_FINAL_Web.pdf]

12. Willison D, Emerson C, Szala-Meneok K, Gibson E, Schwartz L, Weisbaum K: Access to medical records for research purposes: varying perceptions across Research Ethics Boards. J Med Ethics. 2008, 34: 308-314. 10.1136/jme.2006.020032.

13. Murphy S, Weber G, Mendis M, Gainer V, Chueh H, Churchill S, Kohane I: Serving the enterprise and beyond with informatics for integrating biology and the bedside. J Am Med Inform Assoc. 2010, 17: 124-130. 10.1136/jamia.2009.000893.

14. Deshmukh V, Meystre S, Mitchell J: Evaluating the informatics for integrating biology and the bedside system for clinical research. BMC Med Res Methodol. 2009, 9: 70. 10.1186/1471-2288-9-70.

15. Roden D, Pulley J, Basford M, Bernard G, Clayton E, Balser J, Masys D: Development of a large-scale de-identified DNA biobank to enable personalized medicine. Clin Pharmacol Ther. 2008, 84: 362-369. 10.1038/clpt.2008.89.

16. Malin B, Karp D, Scheuermann R: Technical and policy approaches to balancing patient privacy and data sharing in clinical and translational research. J Investig Med. 2010, 58: 11-18.

17. The Supreme Court of the State of Illinois: Southern Illinoisan vs. The Illinois Department of Public Health. Docket No. 98712. 2006, [http://www.state.il.us/court/opinions/supremecourt/2006/february/opinions/html/98712.htm]

18. Hansell S: AOL removes search data on group of web users. 2006, New York Times, [http://www.nytimes.com/2006/08/08/business/media/08aol.html]

19. Barbaro M, Zeller T: A face is exposed for AOL searcher No. 4417749. 2006, New York Times, [http://www.nytimes.com/2006/08/09/technology/09aol.html]

20. Zeller T: AOL moves to increase privacy on search queries. 2006, New York Times, [http://www.nytimes.com/2006/08/22/technology/22aol.html]

21. Ochoa S, Rasmussen J, Robson C, Salib M: Reidentification of individuals in Chicago's homicide database: a technical and legal study. 2001, Archived at http://www.webcitation.org/5xyAv7j6M, [http://groups.csail.mit.edu/mac/classes/6.805/student-papers/spring01-papers/reidentification.doc]

22. Narayanan A, Shmatikov V: Robust de-anonymization of large sparse datasets. Proceedings of the 2008 IEEE Symposium on Security and Privacy. 2008, 111-125. [http://doi.ieeecomputersociety.org/10.1109/SP.2008.33]

23. Sweeney L: Computational disclosure control: a primer on data privacy protection. PhD thesis. 2001, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.

24. Appellate Court of Illinois - Fifth District: The Southern Illinoisan v. Department of Public Health. 2004, [http://law.justia.com/cases/illinois/court-of-appeals-fifth-appellate-district/2004/5020836.html]

25. Federal Court (Canada): Mike Gordon vs. The Minister of Health: Affidavit of Bill Wilson. Court File No. T-347-06. 2006.

26. El Emam K, Kosseim P: Privacy interests in prescription records, part 2: patient privacy. IEEE Security Privacy. 2009, 7: 75-78.

27. Lowrance W, Collins F: Ethics. Identifiability in genomic research. Science. 2007, 317: 600-602. 10.1126/science.1147699.

28. Malin B, Sweeney L: Determining the identifiability of DNA database entries. Proc AMIA Symp. 2000, 2000: 537-541.

29. Wjst M: Caught you: threats to confidentiality due to the public release of large-scale genetic data sets. BMC Med Ethics. 2010, 11: 21. 10.1186/1472-6939-11-21.

30. Uzuner O, Luo Y, Szolovits P: Evaluating the state-of-the-art in automatic de-identification. J Am Med Inform Assoc. 2007, 14: 550-563. 10.1197/jamia.M2444.

31. El Emam K: Data anonymization practices in clinical research: a descriptive study. 2006, Health Canada, Access to Information and Privacy Division, [http://www.ehealthinformation.ca/documents/HealthCanadaAnonymizationReport.pdf]

32. Canadian Medical Association (CMA) Holdings Incorporated: Deidentification/Anonymization Policy. 2009, Ottawa: CMA Holdings.

33. UBC Clinical Research Ethics Board, Providence Health Care Research Ethics Board: Interim Guidance to Clinical Researchers Regarding Compliance with the US Health Insurance Portability and Accountability Act (HIPAA). 2003, Vancouver: University of British Columbia.

34. Hrynaszkiewicz I, Norton M, Vickers A, Altman D: Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. BMJ. 2010, 340: c181. 10.1136/bmj.c181.

35. Clause S, Triller D, Bornhorst C, Hamilton R, Cosler L: Conforming to HIPAA regulations and compilation of research data. Am J Health Syst Pharm. 2004, 61: 1025-1031.

36. Lin Z, Owen A, Altman R: Genomic research and human subject privacy. Science. 2004, 305: 183. 10.1126/science.1095019.

37. Homer N, Szelinger S, Redman M, Duggan D, Tembe W, Muehling J, Pearson J, Stephan D, Nelson S, Craig D: Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genet. 2008, 4: e1000167. 10.1371/journal.pgen.1000167.

38. Jacobs K, Yeager M, Wacholder S, Craig D, Kraft P, Hunter D, Paschal J, Manolio T, Tucker M, Hoover R, Thomas GD, Chanock SJ, Chatterjee N: A new statistic and its power to infer membership in a genome-wide association study using genotype frequencies. Nat Genet. 2009, 41: 1253-1257. 10.1038/ng.455.

39. Loukides G, Denny J, Malin B: The disclosure of diagnosis codes can breach research participants' privacy. J Am Med Inform Assoc. 2010, 17: 322-327.

40. Willenborg L, de Waal T: Statistical Disclosure Control in Practice. 1996, New York: Springer-Verlag.

41. Willenborg L, de Waal T: Elements of Statistical Disclosure Control. 2001, New York: Springer-Verlag.

42. Skinner CJ: On identification disclosure and prediction disclosure for microdata. Statistica Neerlandica. 1992, 46: 21-32. 10.1111/j.1467-9574.1992.tb01324.x.

43. Marsh C, Skinner C, Arber S, Penhale B, Openshaw S, Hobcraft J, Lievesley D, Walford N: The case for samples of anonymized records from the 1991 census. J R Stat Soc A (Statistics in Society). 1991, 154: 305-340. 10.2307/2983043.

44. Dale A, Elliot M: Proposals for 2001 samples of anonymized records: an assessment of disclosure risk. J R Stat Soc A (Statistics in Society). 2001, 164: 427-447. 10.1111/1467-985X.00212.

45. Felso F, Theeuwes J, Wagner GG: Disclosure limitation methods in use: results of a survey. Confidentiality, Disclosure and Data Access: Theory and Practical Applications for Statistical Agencies. Edited by: Doyle P, Lane J, Theeuwes J, Zayatz L. 2003, Washington, DC: Elsevier, 1: 17-38.

46. Jabine T: Statistical disclosure limitation practices of United States statistical agencies. J Official Stat. 1993, 9: 427-454.

47. Pulley J, Brace M, Bernard G, Masys D: Evaluation of the effectiveness of posters to provide information to patients about a DNA database and their opportunity to opt out. Cell Tissue Banking. 2007, 8: 233-241. 10.1007/s10561-007-9033-4.

48. El Emam K: Risk-based de-identification of health data. IEEE Security Privacy. 2010, 8: 64-67.

49. Subcommittee on Disclosure Limitation Methodology - Federal Committee on Statistical Methodology: Working paper 22: Report on statistical disclosure control. 1994, Statistical Policy Office, Office of Information and Regulatory Affairs, Office of Management and Budget, [http://www.fcsm.gov/working-papers/wp22.html]

50. Manitoba Center for Health Policy: Manitoba Center for Health Policy Privacy Code. 2002, [http://umanitoba.ca/faculties/medicine/units/mchp/media_room/media/MCHP_privacy_code.pdf]

51. Cancer Care Ontario: Cancer Care Ontario Data Use and Disclosure Policy. 2005, updated 2008. [http://www.cancercare.on.ca/common/pages/UserFile.aspx?fileId=13234]

52. Health Quality Council: Security and Confidentiality Policies and Procedures. 2004, Saskatoon: Health Quality Council.

53. Health Quality Council: Privacy Code. 2004, Saskatoon: Health Quality Council.

54. Statistics Canada: Therapeutic abortion survey. 2007, Archived at http://www.webcitation.org/5VkcHLeQw, [http://www.statcan.ca/cgi-bin/imdb/p2SV.pl?Function=getSurvey&SDDS=3209&lang=en&db=IMDB&dbg=f&adm=8&dis=2#b9]

55. Office of the Information and Privacy Commissioner of British Columbia: Order No. 261-1998. 1998, [http://www.oipc.bc.ca/orders/1998/Order261.html]

56. Office of the Information and Privacy Commissioner of Ontario: Order P-644. 1994, Archived at http://www.webcitation.org/5inrVJyQp, [http://www.ipc.on.ca/images/Findings/Attached_PDF/P-644.pdf]

57. Alexander L, Jabine T: Access to social security microdata files for research and statistical purposes. Social Security Bulletin. 1978, 41: 3-17.

58. Ministry of Health and Long Term Care (Ontario): Corporate Policy 3-1-21. 1984, [Available on request]

59. Duncan G, Jabine T, de Wolf S: Private Lives and Public Policies: Confidentiality and Accessibility of Government Statistics. 1993, Washington, DC: National Academies Press.

60. de Waal A, Willenborg L: A view on statistical disclosure control for microdata. Survey Methodol. 1996, 22: 95-103.

61. Office of the Privacy Commissioner of Quebec (CAI): Chenard v. Ministere de l'agriculture, des pecheries et de l'alimentation (141). CAI 141. 1997, [Available on request]

62. National Center for Education Statistics: NCES Statistical Standards. 2003, Washington, DC: US Department of Education.

63. National Committee on Vital and Health Statistics: Report to the Secretary of the US Department of Health and Human Services on Enhanced Protections for Uses of Health Data: A Stewardship Framework for "Secondary Uses" of Electronically Collected and Transmitted Health Data. V.101907(15). 2007.

64. Sweeney L: Data sharing under HIPAA: 12 years later. 2010, Workshop on the HIPAA Privacy Rule's De-Identification Standard, [http://www.hhshipaaprivacy.com/]

65. Lafky D: The Safe Harbor method of de-identification: an empirical test. 2010, Fourth National HIPAA Summit West, Archived at http://www.webcitation.org/5xA2HIOmj, [http://www.ehcca.com/presentations/HIPAAWest4/lafky_2.pdf]

66. Jabine T: Procedures for restricted data access. J Official Stat. 1993, 9: 537-589.

67. El Emam K, Brown A, AbdelMalik P, Neisa A, Walker M, Bottomley J, Roffey T: A method for managing re-identification risk from small geographic areas in Canada. BMC Med Inform Decis Mak. 2010, 10: 18. 10.1186/1472-6947-10-18.

68. El Emam K, Dankar F, Vaillancourt R, Roffey T, Lysyk M: Evaluating patient re-identification risk from hospital prescription records. Can J Hospital Pharmacy. 2009, 62: 307-319.

69. Murphy S, Chueh H: A security architecture for query tools used to access large biomedical databases. Proc AMIA Symp. 2002, 552-556. [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2244204/pdf/procamiasymp00001-0593.pdf]

70. Kargupta H, Datta S, Wang Q, Sivakumar K: Random data perturbation techniques and privacy preserving data mining. Knowledge Information Systems. 2005, 7: 387-414. 10.1007/s10115-004-0173-6.

71. Malin B, Cassa C, Kantarcioglu M: A survey of challenges and solutions for privacy in clinical genomics data mining. Privacy-Preserving Knowledge Discovery. Edited by: Bonchi F, Ferrari E. 2011, New York: Chapman & Hall/CRC Press.

72. El Emam K, Fineberg A: An overview of techniques for de-identifying personal health information. 2009, Access to Information and Privacy Division of Health Canada, [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1456490]

73. Tu K, Klein-Geltink J, Mitiku T, Mihai C, Martin J: De-identification of primary care electronic medical records free-text data in Ontario, Canada. BMC Med Inform Decis Mak. 2010, 10: 35. 10.1186/1472-6947-10-35.

74. El Emam K, Jonker E, Sams S, Neri E, Neisa A, Gao T, Chowdhury S: Pan-Canadian de-identification guidelines for personal health information. 2007, Privacy Commissioner of Canada, [http://www.ehealthinformation.ca/documents/OPCReportv11.pdf]

75. Lin Z, Hewett M, Altman R: Using binning to maintain confidentiality of medical data. Proc AMIA Symp. 2002, 454-458. [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2244360/pdf/procamiasymp00001-0495.pdf]

76. Malin B: Protecting genomic sequence anonymity with generalization lattices. Methods Inf Med. 2005, 44: 687-692.

77. Loukides G, Gkoulalas-Divanis A, Malin B: Anonymization of electronic medical records for validating genome-wide association studies. Proc Natl Acad Sci USA. 2010, 107: 7898-7903. 10.1073/pnas.0911686107.

78. Aggarwal G, Feder T, Kenthapadi K, Motwani R, Panigrahy R, Thomas D, Zhu A: Anonymizing tables. Proceedings of the 10th International Conference on Database Theory (ICDT05). 2005, Springer, 246-258.

79. Fraser R, Willison D: Tools for De-Identification of Personal Health Information. 2009, Canada Health Infoway, Archived at http://www.webcitation.org/5xA2KBoMm, [http://www2.infoway-inforoute.ca/Documents/Tools_for_De-identification_EN_FINAL.pdf]

80. Health System Use Technical Advisory Committee - Data De-Identification Working Group: 'Best Practice' Guidelines for Managing the Disclosure of De-Identified Health Information. 2011, Archived at http://www.webcitation.org/5x9w6635d, [http://www.ehealthinformation.ca/documents/Data%20De-identification%20Best%20Practice%20Guidelines.pdf]

81. Meystre S, Friedlin F, South B, Shen S, Samore M: Automatic de-identification of textual documents in the electronic health record: a review of recent research. BMC Med Res Methodol. 2010, 10: 70. 10.1186/1471-2288-10-70.

82. Aggarwal C: On k-anonymity and the curse of dimensionality. Proceedings of the 31st International Conference on Very Large Data Bases. 2005, VLDB Endowment, 901-909.

83. Nyholt D, Yu C, Visscher P: On Jim Watson's APOE status: genetic information is hard to hide. Eur J Hum Genet. 2008, 17: 147-149.

84. Malin B: Re-identification of familial database records. Proc AMIA Symp. 2006, 524-528. [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1839550/pdf/AMIA2006_0524.pdf]

85. Cassa C, Schmidt B, Kohane I, Mandl K: My sister's keeper? Genomic research and the identifiability of siblings. BMC Med Genomics. 2008, 1: 32. 10.1186/1755-8794-1-32.

86. Bieber F, Brenner C, Lazer D: Finding criminals through DNA of their relatives. Science. 2006, 312: 1315-1316. 10.1126/science.1122655.

87. Pabrai U: Getting Started with HIPAA. 2003, Boston: Premier Press.


Acknowledgements

The analyses performed on the California state inpatient database and the birth registry of Ontario were part of studies approved by the research ethics board of the Children's Hospital of Eastern Ontario Research Institute. Bradley Malin (Vanderbilt University) reviewed some parts of the draft manuscript, and Elizabeth Jonker (CHEO Research Institute) assisted with the formatting of the manuscript.

Author information

Correspondence to Khaled El Emam.

Additional information

Competing interests

The authors declare that they have no competing interests.

Electronic supplementary material

Additional file 1: Measuring the probability of re-identification. This file describes metrics and decision rules for measuring and interpreting the probability of re-identification for identity disclosure. (PDF 400 KB)

About this article

Cite this article

El Emam, K. Methods for the de-identification of electronic health records for genomic research. Genome Med 3, 25 (2011). https://doi.org/10.1186/gm239
