Database Anonymization. David Sánchez

data to improve scientific research and decision making. However, when published data refer to individual respondents, disclosure risk limitation techniques must be implemented to anonymize the data and guarantee by design the fundamental right to privacy of the subjects the data refer to. Disclosure risk limitation has a long record in the statistical and computer science research communities, which have developed a variety of privacy-preserving solutions for data releases. This Synthesis Lecture provides a comprehensive overview of the fundamentals of privacy in data releases, focusing on the computer science perspective. Specifically, we detail the privacy models, anonymization methods, and utility and risk metrics that have been proposed so far in the literature. In addition, as a more advanced topic, we identify and discuss in detail the connections between several privacy models (i.e., how the privacy guarantees they offer can be accumulated to achieve more robust protection, and when such guarantees are equivalent or complementary); we also explore the links between anonymization methods and privacy models (how anonymization methods can be used to enforce privacy models and thereby offer ex ante privacy guarantees). These latter topics are relevant to researchers and advanced practitioners, who will gain a deeper understanding of the available data anonymization solutions and the privacy guarantees they can offer.

       KEYWORDS

      data releases, privacy protection, anonymization, privacy models, statistical disclosure limitation, statistical disclosure control, microaggregation

      A tots aquells que estimem, tant si són amb nosaltres com si perviuen en el nostre record.

      To all our loved ones, whether they are with us or stay alive in our memories.

       Contents

       Preface

       Acknowledgments

       1 Introduction

       2 Privacy in Data Releases

       2.1 Types of Data Releases

       2.2 Microdata Sets

       2.3 Formalizing Privacy

       2.4 Disclosure Risk in Microdata Sets

       2.5 Microdata Anonymization

       2.6 Measuring Information Loss

       2.7 Trading Off Information Loss and Disclosure Risk

       2.8 Summary

       3 Anonymization Methods for Microdata

       3.1 Non-perturbative Masking Methods

       3.2 Perturbative Masking Methods

       3.3 Synthetic Data Generation

       3.4 Summary

       4 Quantifying Disclosure Risk: Record Linkage

       4.1 Threshold-based Record Linkage

       4.2 Rule-based Record Linkage

       4.3 Probabilistic Record Linkage

       4.4 Summary

       5 The k-Anonymity Privacy Model

       5.1 Insufficiency of Data De-identification

       5.2 The k-Anonymity Model

       5.3 Generalization and Suppression Based k-Anonymity

       5.4 Microaggregation-based k-Anonymity

       5.5 Probabilistic k-Anonymity

       5.6 Summary

       6 Beyond k-Anonymity: l-Diversity and t-Closeness

       6.1 l-Diversity

       6.2 t-Closeness

       6.3 Summary

       7 t-Closeness Through Microaggregation

       7.1 Standard Microaggregation and Merging

       7.2 t-Closeness Aware Microaggregation: k-anonymity-first

       7.3 t-Closeness Aware Microaggregation: t-closeness-first

       7.4 Summary

       8 Differential Privacy

       8.1 Definition

       8.2 Calibration to the Global Sensitivity

       8.3 Calibration to the Smooth Sensitivity

       8.4 The Exponential Mechanism

       8.5 Relation to k-anonymity-based Models

       8.6 Differentially Private Data Publishing

       8.7 Summary

       9 Differential Privacy by Multivariate Microaggregation

       9.1 Reducing Sensitivity Via Prior