Intro
Speed to Insight. Lawfully & Ethically.

Forewords

The following forewords – appearing in the First Edition (January 2018 – Pre-GDPR), Second Edition (January 2019 – Post-GDPR) and Third Edition (June 2020 – Two Years After GDPR) of this blueprint for harmonising data use and protection under the GDPR (this “Blueprint”) – highlight:

  • The importance of accountability-based information policy management.
  • The need for new technologies like Anonos’ first-of-its-kind patented BigPrivacy solution to support proportional use of data, responsive to the variety and complexity of different data uses.
  • The use of proven techniques and processes like Anonos BigPrivacy as the minimum bar for unlocking data value while respecting the rights of individuals.

Data Protection Megatrends

Martin Abrams

By Martin Abrams, Executive Director and Chief Strategist,
The Information Accountability Foundation (IAF) https://informationaccountability.org/

There are two data protection megatrends underway today. The first is the breaking wave of transformational data processing laws, regulations, and guidance evolving around the globe, epitomized by the GDPR. The second is the evolution of a data trust deficit into a full-fledged legitimacy conundrum. Yet people still expect all the value of a highly observational world. How do global organisations reconcile the growing importance of data analytics, artificial intelligence, and machine learning with the increasingly complex and multi-jurisdictional regulations on lawful data use? And how do they maintain trust that rests on both value and protection?

The Information Accountability Foundation believes accountability-based information policy management – being a trusted data steward – is a key element of the answer.

The GDPR specifically requires accountability. It requires organisations to have policies, and the processes to put those policies into effect. Those processes rest on new technologies that can demonstrably ensure the conditions set by policy are actually carried out. The GDPR introduces these new controls in the form of the technical and organisational measures necessary to support data protection by design and by default. Comprehensive data protection impact assessments that balance the interests of all stakeholders are part of those organisational controls.

Pseudonymisation, as newly defined under the GDPR, is another methodology that enables the fine-grained, risk-managed, use-case-specific controls necessary to support data protection by design and by default, particularly the fundamental data protection law principle of data minimisation. Data protection by design and by default embodies the goal of making technology controls that support appropriate uses.

At the core of data protection accountability and ethics is the will and ability to demonstrate that you can, in fact, keep your promises. Technologies that enforce data protection by design and by default show data subjects that, in addition to coming up with new ways to derive value from data, organisations are pursuing equally innovative technical approaches to protecting data privacy – an especially sensitive and topical issue given the epidemic of data security breaches around the globe.

Vibrant and growing areas of economic activity – the “trust economy,” life sciences research, personalized medicine/education, the Internet of Things, personalization of goods and services – are based on individuals trusting that their data is private, protected, and used only for appropriate purposes that bring them and society maximum value. This trust cannot be maintained using outdated approaches to data protection. We must embrace new approaches like data protection by design and by default to earn and maintain trust and more effectively serve businesses, researchers, healthcare providers, and anyone who relies on the integrity of data.

Traditional approaches to data processing often rely on static identifiers that make it possible to infer – or single out or link to – a data subject. When the same static identifier appears across multiple data sets, those data sets can be overlaid, so data that is not identifiable on its own can, in combination with other overlapping data, lead to reidentification of a data subject. Conversely, data protection by design and by default can leverage dynamically changing identifiers to probabilistically prevent inference of identifying information pertaining to a data subject across multiple data sets or data combinations – all in a manner capable of supporting mathematical analysis, audit, and enforcement.
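As a minimal sketch of that contrast – assuming a keyed-hashing approach, with hypothetical function, key, and data-set names that are not drawn from any particular product – the following shows why one static pseudonym links records across data sets while a context-specific, dynamically derived pseudonym does not:

```python
# Minimal sketch of static vs. dynamically derived pseudonyms (illustrative
# only; key handling, function names, and data-set labels are hypothetical).
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # held only by the data controller

def static_pseudonym(user_id: str) -> str:
    # The same person receives the same token in every data set,
    # so two data sets sharing this token can be overlaid and joined.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def dynamic_pseudonym(user_id: str, context: str) -> str:
    # The token also depends on the data set / purpose ("context"),
    # so the same person carries different, unlinkable tokens elsewhere.
    msg = f"{context}:{user_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

alice = "alice@example.com"

# Static: identical tokens across data sets -> trivially linkable.
print(static_pseudonym(alice) == static_pseudonym(alice))    # True

# Dynamic: tokens differ per data set -> overlaying them no longer
# re-identifies, yet the key holder can re-derive either token
# when lawfully required.
print(dynamic_pseudonym(alice, "purchases_2020")
      == dynamic_pseudonym(alice, "health_study"))           # False
```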

New technologies are being introduced to implement data protection by design and by default. Anonos’ first-of-its-kind patented BigPrivacy solution is one example that supports proportional use of data in a manner that is responsive to the variety and complexity of different potential uses of data. Specifically, BigPrivacy can reveal different levels and types of information to the same and/or different parties at different times, for different purposes, at different places – and with respect to each, only as necessary for each use of data. By “dialing up” or “dialing down” the linkability (or identifiability) of data so that only the minimum information necessary for each appropriate purpose is processed, BigPrivacy helps to support accountable, ethical, fair, and legal data use.
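The general idea of per-purpose “dialing” can be sketched as follows; the policy, field names, and transformation rules below are hypothetical illustrations of purpose-based data minimisation, not Anonos’ BigPrivacy API:

```python
# Conceptual sketch of per-purpose "dialing" of identifiability (hypothetical
# policy, field names, and rules; not Anonos' BigPrivacy implementation).
from typing import Any

RECORD = {"name": "Alice Ng", "postcode": "EC1A 1BB", "age": 34, "diagnosis": "J45"}

# Each authorised purpose sees only the level of detail it needs.
POLICY = {
    "treatment":      {"name": "reveal",   "postcode": "reveal",     "age": "reveal",     "diagnosis": "reveal"},
    "research_study": {"name": "suppress", "postcode": "generalise", "age": "generalise", "diagnosis": "reveal"},
    "marketing":      {"name": "suppress", "postcode": "suppress",   "age": "suppress",   "diagnosis": "suppress"},
}

def transform(value: Any, rule: str) -> Any:
    if rule == "reveal":
        return value
    if rule == "generalise":
        # Coarsen instead of exposing: 10-year age band, outward postcode only.
        if isinstance(value, int):
            low = value // 10 * 10
            return f"{low}-{low + 9}"
        return str(value).split()[0]
    return None  # suppress

def view_for(purpose: str) -> dict:
    rules = POLICY[purpose]
    return {field: transform(value, rules[field]) for field, value in RECORD.items()}

print(view_for("treatment"))       # full detail, e.g. for direct care
print(view_for("research_study"))  # coarsened, non-identifying view
print(view_for("marketing"))       # effectively nothing revealed
```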

Ethical Tools for Controlling Disclosure

Jules Polonetsky

By Jules Polonetsky, Chief Executive Officer,
The Future of Privacy Forum (FPF) https://fpf.org/

Writing in The New Yorker about the work of sociologist Beryl Bellman, Malcolm Gladwell said, “A secret isn’t invalidated by its disclosure, it’s defined by its disclosure. What makes a secret a secret is simply the operating instructions that accompany its movement from one person to the next.”

Today’s world is awash in secrets captured and disclosed by data-driven products and services. With all the personal information collected by wearables, smart homes, social media, smart cars, and innumerable other data-centric offerings, few companies are truly promising individuals’ privacy. Rather, they are committing to responsible use of the data and controlled disclosure. The massive volume, variety, and velocity of data created and captured by the ever-increasing number of data-driven offerings highlight the need for technical tools that enable those personal information commitments.

Traditionally, de-identification has been a primary method for enabling access to and use of data while protecting individuals’ privacy. De-identification has even sometimes been viewed as a “silver bullet” enabling organisations to reap the benefits of data processing while avoiding operational risks and legal requirements. However, scientists have repeatedly demonstrated that purportedly de-identified data sets can be vulnerable to reidentification attacks, thereby casting doubt on the extent to which de-identification is a credible method for using and deriving value from data while protecting privacy. Compounding the uncertainty is the fact that reidentification risks only increase as computing technologies become ever faster and data-centric products and services generate ever more data for linkage and analysis.

Thus, weak or unproven promises of de-identification are no longer acceptable to regulators around the world.

Proven techniques and processes like Anonos BigPrivacy are the minimum bar called for to support unlocking the value of data while respecting the rights of individuals. While no “silver bullet,” dynamically applied de-identification, if implemented correctly, can provide both the technical operating instructions for effective legal compliance and an operating system for respecting the information shared by individuals.