Technology
Speed to Insight. Lawfully & Ethically.

Anonos decentralised data protection technology enables lawful repurposing of data while preserving 100% of the source data value.

Anonos allows organisations to maximise data utility and expand their opportunities to ethically process, share, combine and enrich data in compliance with data privacy and data protection regulations.

This section first covers the logical and technical foundations of Anonos BigPrivacy, beginning with an approach called functional separation.

We will then address some of the shortcomings of the lawful basis of consent when used by itself under the GDPR, and why technological solutions are needed to handle non-consent bases for processing data lawfully.

Next, we look at GDPR-defined Pseudonymisation as a new approach to dealing with the protection of personal data. We examine the implementation of Pseudonymisation as an outcome in the BigPrivacy solution and compare it to traditional anonymisation approaches.

Finally, we explain how Variant Twins leverage Anonos' patented Controlled Linkable Data20 and k-anonymity risk testing, and conclude with a practical guide to using Anonos BigPrivacy to create Variant Twins and to using the Lawful Insights API™.

What is Functional Separation?

Meeting The Challenges of Big Data – A Call For Transparency, User Control, Data Protection By Design And Accountability

A report by the European Data Protection Supervisor (“EDPS”) – “Meeting The Challenges of Big Data – A Call For Transparency, User Control, Data Protection By Design And Accountability” highlighted functional separation as a potential solution for helping to resolve conflicts between innovative data use and data protection.21

The principle of functional separation involves using technical and organisational safeguards to separate information value from identity, enabling the discovery of trends and correlations independently of applying the insights gained to the data subjects concerned. The EDPS noted at the time of publication of this report that:

“There is little evidence of experience with effective implementation of functional separation outside some specialist organisations such as national statistical offices and research institutions. In order to take full advantage of secondary uses of data, it is essential that other organisations develop their expertise and offer comparable guarantees against misuse of data.”22

Anonos’ eight years of legal and technical research developed the expertise necessary to “offer comparable guarantees against misuse of data,” as suggested by the EDPS, by leveraging functional separation principles to deliver “Speed To Insight, Lawfully & Ethically.”

Under the GDPR, the concept of functional separation is embodied in the definitional requirements necessary to achieve and maintain Pseudonymisation as newly defined under Article 4(5). This requires that the information value of data must be separated from the identity of a data subject such that additional securely stored information is necessary to relink information value to identity, and then only for authorised processing under controlled conditions.

The principle of functional separation also exists under other data protection laws using different terms – e.g. heightened “De-Identification” under the California Consumer Privacy Act (CCPA) and the proposed Indian Data Privacy Law, and “Anonymisation” under the Brazilian Data Protection Law.

The CCPA introduces the principle of functional separation through its definition of “Personal Information”, which is subject to various protections under the Act. Personal Information includes “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

The CCPA’s extensive list of data comprising protected Personal Information includes “static” and even “probabilistic” tokens (replacement identifiers) used to replace personal information if “more probable than not” that the information could be used to identify a consumer or device.

While restrictions under the CCPA do not apply to “De-identified Data,” traditional approaches to de-identification do not satisfy the heightened requirements for De-identification under the CCPA. These heightened requirements are not satisfied using “static” and “probabilistic” tokens (replacement identifiers) because such tokens fail to adequately separate information value from identity to prevent unauthorised reidentification of consumers.

Pseudonymisation as a Way Forward

One technical control that delivers functional separation is Pseudonymisation, as newly defined under the GDPR. The ENISA publication Recommendations on Shaping Technology According to GDPR Provisions: An Overview on Data Pseudonymisation24 highlights the following benefits from GDPR-compliant pseudonymisation:

  • Pseudonymisation serves as a vehicle to “relax” certain data controller obligations, including:
    • Lawful repurposing (further processing) in compliance with purpose limitation principles;
    • Archiving of data for statistical processing, public interest, scientific or historical research;
    • Reduced notification obligations in the event of a data breach.
  • Pseudonymisation supports a more favourable (broader) interpretation of data minimisation.
  • Pseudonymisation goes beyond protecting “real-world personal identities” by protecting indirect identifiers.
  • Pseudonymisation provides for unlinkability between data and personal identity, furthering the fundamental data protection principles of necessity and data minimisation.
  • Pseudonymisation decouples privacy and accuracy, enabling Data Protection by Design and by Default while at the same time allowing data about individuals to remain more accurate.

While Pseudonymisation has many benefits, using it effectively requires significant expertise. The same ENISA report recognises that effective Pseudonymisation is highly context-dependent and “requires a high level of competence” to prevent attacks while maintaining data utility.

Introduction to Anonos BigPrivacy

BigPrivacy’s unique approach:

  • Provides a powerful tool for organisations to implement GDPR-mandated Data Protection by Design and by Default.
  • Supports a risk-based approach to data protection. It delivers the flexibility and control to match the purpose, context, required utility, and desired scalability of intended processing, while addressing the necessary protection of personal data. This is accomplished by empowering privacy engineers to apply finely tuned combinations of anonymisation techniques and GDPR-compliant Pseudonymisation (and CCPA-compliant heightened de-identification), together with patented risk-based controls.
  • Ensures transparency and auditability of privacy-engineering techniques and offers the visibility of the security and data protection levels used to achieve desired accountability.
  • Implements ENISA recommendations and best practices for Pseudonymisation.
  • Introduces multiple new innovations that advance the state-of-the-art to address new challenges presented by Big Data.

At the core of Anonos' capabilities are Variant Twins®.

Variant Twins are our patented approach to controlled selective disclosure. With Variant Twins, you provide only the level of identifiability needed for each authorised process. Because all Variant Twins are derived from the original source data, rather than created by permanently altering it, 100% of the value of the source data is retained.

To understand how Variant Twins work, let’s look at how they (i) leverage and (ii) compare to other privacy enhancing techniques.

Traditional Anonymisation

First, let's consider traditional anonymisation techniques. Anonymisation attempts to remove data from the jurisdiction of data privacy laws. However, if the original data were to be made truly anonymous against all potential risk of reidentification, the anonymised data would lose most of its utility and value.

Figure 1:

Shortcomings of Anonymisation

Traditional anonymisation solutions attempt to preserve some level of utility by managing the increased risk of reidentification by restricting processing to enclaves or silos. We refer to this as “Centralised Processing.”

Using this approach means that the data is not available for high value uses such as sharing, combining and enriching because when information is used outside of the centralised processing environment, the risk of unauthorised reidentification via the Mosaic Effect becomes too high. The Mosaic Effect occurs when a person is indirectly identifiable via linkage attacks because the “anonymised” source data can be combined with other pieces of information, enabling the individual to be distinguished from others. More details on this particular shortcoming of anonymisation are available at www.MosaicEffect.com.

Anonymisation techniques, by definition, cannot enable authorised relinking; they also degrade the accuracy of data and expose parties in the data supply chain to potential liability.

Figure 2:

Choose Data Protection for Your Data Strategy

Techniques that protect data for low-risk centralised processing do not scale well, because they become ineffective in decentralised, high-value environments like advanced analytics, data sharing, combining and enriching.

Figure 2 above highlights the shortcoming of traditional anonymisation approaches. What works in a centralised environment (depicted by the small boat in a bathtub) simply does not support the “out in the open ocean”, high value, decentralised processing necessary for global digitisation.

Traditional anonymisation approaches break down – they fail to protect data – when used for decentralised processing. This is because of the real risk of unauthorised reidentification, which often results in the surveillance of individuals for both commercial and illegal ends.

Figure 3:

GDPR Pseudonymisation Improves Upon Anonymisation

Next let’s take a look at what GDPR Pseudonymisation requires and how it improves upon anonymisation.

What is Pseudonymisation Under GDPR?

The first thing to note is that GDPR-compliant Pseudonymisation is an outcome, not a technique. This means that old, pre-GDPR approaches (which are too often still incorrectly referred to as “pseudonymisation”) will rarely, if ever, meet the GDPR definition of what Pseudonymisation actually requires.

Prior to the GDPR, pseudonymisation was widely understood to mean replacing direct identifiers with tokens. It was a privacy-enhancing technique. However, Article 4(5) of the GDPR introduces a new legal definition of Pseudonymisation, where it is defined as an outcome.25

In order to satisfy the new requirements for GDPR-compliant Pseudonymisation, you must separate information value from individual identity so that the only way to re-identify an individual is by accessing data that is held separately by the data controller.

The second thing to note is that pre-GDPR, pseudonymisation was thought of as a technique applied to individual fields within a data set. The new GDPR definition, in combination with the GDPR definition for Personal Data, results in Pseudonymisation being an outcome for the data set as a whole (the entire collection of direct identifiers, indirect identifiers and other attributes).

A third observation can be made as a consequence of the massive proliferation of data publicly available for free, privately available for sale, and on the dark web as a result of ongoing daily data breaches globally. It can be best summarised in this quote by Professor Paul Ohm26:

“These results suggest that maybe everything is PII to one who has access to the right outside information.”27 (emphasis added)

Taken together, the implication is clear: in order to achieve GDPR-compliant Pseudonymisation you have to protect not only direct identifiers, but also indirect identifiers. You must also consider the degree of protection to be applied to all other attributes in a data set while still preserving its utility for the intended use of the data. Anonos technology does this.

Additional information is available at www.Pseudonymisation.com.

Figure 4:

GDPR Pseudonymisation is an Outcome, Not a Technique

So, what does that mean?

It means that, in nearly all cases, organisations need new technology to implement Pseudonymisation because it is no longer a technique, but an outcome. It requires that the only way you can get back and forth over the wall shown at the top left of Figure 4 above (between “information value” and “identity”) is via access to additional information that is kept separately by the data controller.

If you can re-link data without access to this separately held additional information, it is not GDPR-compliant Pseudonymisation. It also means that the data was not successfully anonymised.

The extent and specificity of the technical requirements necessary for achieving GDPR-compliant Pseudonymisation are significantly underappreciated. ENISA has outlined over 50 requirements for implementing GDPR Pseudonymisation. Tokenisation is only one of them, yet it is essentially the only one that vendors other than Anonos implement.

Additional information is available at www.ENISAguidelines.com.28

Figure 5:

GDPR Pseudonymisation Counters Mosaic Effect

Figure 5 above, which provides a summary of information available at www.MosaicEffect.com, shows an example of what people mean when referring to pseudonymisation in its pre-GDPR form as a technique. Here a username is replaced with a token in the form of a User ID, but the same token is used repeatedly for each occurrence of the same User ID. This is called static (or persistent) tokenisation.

So, what do you need in order to satisfy the GDPR outcome requirement for Pseudonymisation? You must have dynamism in the way that you allocate and change tokens.

Figure 5 shows that in each place that the static token 7abc1a23 was previously used, when applying dynamism, it is replaced with a different pseudonym each time. This means that the only way to get to the identity of the individual represented by the User ID 7abc1a23 is by accessing separately kept “additional information.”
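
To make the distinction concrete, the following minimal Python sketch (with illustrative names only, not Anonos APIs) contrasts a static token, which is reused for every occurrence of a value, with dynamic pseudonyms, which differ on every occurrence and are re-linkable only through separately held additional information:

    import secrets

    events = ["7abc1a23", "7abc1a23", "7abc1a23"]  # the same User ID in three records

    # Static (persistent) tokenisation: one token per value, reused everywhere.
    static_map = {}
    def static_token(value):
        if value not in static_map:
            static_map[value] = secrets.token_hex(4)
        return static_map[value]

    # Dynamic pseudonymisation: a fresh pseudonym per occurrence; the links back
    # to identity live only in separately held "additional information".
    additional_information = {}  # must be stored separately by the data controller
    def dynamic_pseudonym(value):
        p = secrets.token_hex(4)
        additional_information[p] = value  # the only route back to identity
        return p

    print([static_token(v) for v in events])      # the same token three times
    print([dynamic_pseudonym(v) for v in events]) # three different pseudonyms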

The Advantages of Controlled Linkable Data

Figure 6:

Benefits of Anonos Patented Innovations

Having looked at anonymisation and GDPR-compliant Pseudonymisation, let's now look at Anonos’ patented innovations, which go beyond ENISA requirements to enable organisations to “take full advantage of secondary uses of data” by leveraging functional separation to deliver “comparable guarantees against misuse of data,” as suggested by the EDPS.29

Anonos enables more than GDPR-compliant Pseudonymisation requires. Like GDPR-compliant Pseudonymisation, Anonos enables the reversal of pseudonymous tokens and authorised reidentification, both of which are powerful benefits.

However, Anonos goes further by enabling a data controller (for authorised purposes and under controlled conditions) to relink to any or all values from the source data. This is done using Anonos patented Controlled Linkable Data30, which represents a significant advance over GDPR compliant Pseudonymisation.

As noted above, pseudonymisation as practised in the pre-GDPR era was one-dimensional: static tokens applied to direct identifiers, where a specific identifier is assigned the same token consistently both within and between databases. ENISA refers to this as a “pseudonymisation policy” and characterises these static tokens as fully deterministic, meaning that a given input always yields the same token. While useful as a localised security technique, this approach provides limited protection against unauthorised reidentification because it is highly vulnerable to linkage attacks and inference attacks.

An additional pseudonymisation policy described by ENISA in their guidance at the other end of the spectrum is fully randomised pseudonymisation: a specific identifier receives a different pseudonym every time it occurs. This maximises protection but significantly reduces data utility.

Anonos Controlled Linkable Data provides improvements over newly defined Pseudonymisation under the GDPR that advance the state-of-the-art in at least three important ways:

  • Granular control over relinking to source data by using record-level identifiers, unlike mere reversal of pseudonymous tokens associated with traditional techniques. This makes it possible to retrieve additional information from the source data beyond the data included in an original Variant Twin.
  • Powerful resistance to reidentification by adversaries attempting linkage and inference attacks, while preserving much higher levels of analytical utility than previously obtainable. This is done by enabling efficient application of deterministic pseudonymisation to individual indirect identifiers, or combinations of them, and not just to direct identifiers.
  • Support for not only ENISA-defined fully randomised and fully deterministic pseudonymisation policies, but also for three additional intermediate pseudonymisation policies. Drawing on the ENISA nomenclature, these can be characterised as field, table, and document deterministic pseudonymisation respectively (illustrated in the sketch following this list).

    - Field Deterministic Pseudonymisation: consistency is maintained only within individual columns of a table. Each occurrence of the same data value in a column is replaced by the same pseudonym, while occurrences of that value in other columns (e.g. country of origin and country of residence within a single table) and in other tables/databases receive different pseudonyms. This is the default for BigPrivacy.
    - Table Deterministic Pseudonymisation: consistency is maintained within a table. Multiple (or all) fields within a single table or data set have pseudonym values that are deterministic within that table/data set, but different pseudonyms are used in each succeeding table/data set.
    - Document Deterministic Pseudonymisation: all occurrences of a data value within all fields in all tables in one database are assigned the same pseudonym. New pseudonyms are used for occurrences of that data value in each succeeding database.
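
As a rough illustration of how these policies differ, the Python sketch below scopes a keyed hash by a context string; the scoping scheme and names are assumptions for exposition, not the BigPrivacy implementation. Fully randomised pseudonymisation (a fresh random value per occurrence) is omitted because it needs no consistency scope:

    import hmac, hashlib

    SECRET_KEY = b"held-separately-by-the-controller"  # illustrative only

    def pseudonym(value, scope):
        # The scope string decides where pseudonyms stay consistent:
        #   "field:customers.country_of_origin" -> field deterministic
        #   "table:customers"                   -> table deterministic
        #   "document:crm_db"                   -> document deterministic
        #   "global"                            -> fully deterministic
        msg = f"{scope}|{value}".encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:8]

    # The same value receives different pseudonyms as the consistency scope changes:
    print(pseudonym("France", "field:customers.country_of_origin"))
    print(pseudonym("France", "field:customers.country_of_residence"))  # differs
    print(pseudonym("France", "table:customers"))  # shared by all fields in the table
    print(pseudonym("France", "global"))           # identical across all data sets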

BigPrivacy supports this range of pseudonymisation policies by leveraging three different ENISA-recommended cryptographic techniques as pseudonymisation functions, each of which is characterised by ENISA as providing strong data protection:

  • Cryptographic Pseudo-Random Number Generation (CRNG): BigPrivacy leverages the computer operating system entropy pool to create Replacement Dynamic De-Identifiers (R-DDIDs®) for individual field values for fully randomised pseudonymisation.
  • Hashed Message Authentication Code (HMAC): HMAC is used to create non-reversible (but still re-linkable) Association Dynamic De-Identifiers (A-DDIDs) for four of the deterministic pseudonymisation policies mentioned above (field, table, document, and fully deterministic).
  • Symmetric AES Encryption: BigPrivacy uses symmetric AES encryption to fully randomise R-DDIDs used as record-level pseudonyms and for two types of A-DDIDs: reversible deterministic and reversible fully randomised. These are useful in circumstances where creating a master index (mapping table) is not desired and/or where pseudonym reversal without relinking is useful (e.g., interpreting results from Machine Learning and AI models created using Variant Twins).

For the first two techniques, a recovery function is provided via a securely and separately stored mapping table. This enables reversal of Pseudonymisation when authorised. The mapping table is the “additional information” kept separately, subject to technical and organisational measures to ensure that the personal data are not attributable to an identified or identifiable natural person, under the GDPR definition of Pseudonymisation. For the third technique, a mapping table is not required due to the inherent reversibility of symmetric encryption, with the keys necessary for decryption being the “information held separately.”
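
The sketch below illustrates these three techniques under stated assumptions: Python's secrets module stands in for CRNG, the standard hmac module for HMAC, and Fernet (an AES-based authenticated cipher from the cryptography package) for symmetric AES encryption. Key handling and names are illustrative only:

    import hmac, hashlib, secrets
    from cryptography.fernet import Fernet  # pip install cryptography

    hmac_key = secrets.token_bytes(32)       # held separately by the controller
    cipher = Fernet(Fernet.generate_key())   # decryption key is the "information held separately"
    mapping_table = {}  # separately stored "additional information" for techniques 1 and 2

    def r_ddid(record):
        """1. CRNG: fully randomised record-level pseudonym, recoverable via the mapping table."""
        p = secrets.token_hex(8)  # drawn from the OS entropy pool
        mapping_table[p] = record
        return p

    def a_ddid(value, scope="field:age"):
        """2. HMAC: deterministic, non-reversible but re-linkable association pseudonym."""
        return hmac.new(hmac_key, f"{scope}|{value}".encode(), hashlib.sha256).hexdigest()[:12]

    def reversible_a_ddid(value):
        """3. Symmetric encryption: reversible with the key alone; no mapping table needed."""
        return cipher.encrypt(value.encode())

    token = reversible_a_ddid("35-39")
    assert cipher.decrypt(token).decode() == "35-39"  # reversal via the separately held key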

K-Anonymity and BigPrivacy Variant Twins

As shown in Figure 7 below, the final process step in creating Variant Twins involves leveraging k-anonymity to enable Data Use Risk Management. This provides protection against reidentification attacks using singling out.

Figure 7:

Protect Data In Use, for Speed to Insight, Lawfully

The following overview of k-anonymity is derived from a description provided by the U.S. Department of Health & Human Services (HHS).31

When using the k-anonymity technique, “k” refers to the number of people to which each disclosed record might correspond. In practice, this correspondence is assessed using the features that could be reasonably applied by a recipient to identify an individual data subject.

Table 2 below illustrates an application of generalisation and suppression methods to achieve a k-anonymity value of “2” (2-anonymity) with respect to the Age, Gender, and ZIP Code columns in the fictitious protected health information included in Table 1 below. All rows correspond to fictitious patient records, each sharing its combination of generalised and suppressed values for Age, Gender, and ZIP Code with at least one other record. Notice that Gender has been suppressed completely.

Protected Health Information (PHI) and K-Anonymity Level of '2'
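
A minimal Python sketch of how such a 2-anonymity check might be performed is shown below; the generalised values and column names are fabricated for illustration in the spirit of the HHS example:

    from collections import Counter

    records = [
        {"age": "30-39", "gender": "*", "zip": "130**", "diagnosis": "A"},
        {"age": "30-39", "gender": "*", "zip": "130**", "diagnosis": "B"},
        {"age": "40-49", "gender": "*", "zip": "130**", "diagnosis": "C"},  # class of 1
    ]

    QUASI_IDENTIFIERS = ("age", "gender", "zip")
    K = 2

    def equivalence_class(record):
        return tuple(record[q] for q in QUASI_IDENTIFIERS)

    class_sizes = Counter(equivalence_class(r) for r in records)

    # Release only rows whose generalised quasi-identifiers match at least K records.
    released = [r for r in records if class_sizes[equivalence_class(r)] >= K]
    # The third record is suppressed: its combination corresponds to only one person.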

The combination of:

  • Anonymisation techniques;
  • Pseudonymisation/heightened de-identification techniques; and
  • Anonos patented Controlled Linkable Data risk-based controls;

used for a particular field in a dataset is what we call a “Privacy Action™.”

The combination of different Privacy Actions used for a data set, together with the selected level of k-anonymity, comprises what we call a “Privacy Transformer™.” When a source data set is run through a Privacy Transformer, the result is a Variant Twin. A Variant Twin is a version of the source data transformed by the selected Privacy Actions and filtered for reidentification risk to suppress records that do not meet the required k-anonymity threshold.
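
A hedged sketch of this pipeline, with hypothetical field names and Privacy Actions, might look as follows: per-field transformations are applied first, then rows failing the k-anonymity threshold are suppressed:

    from collections import Counter

    def generalise_age(v):
        low = (v // 10) * 10
        return f"{low}-{low + 9}"  # bin exact ages into decade ranges

    def suppress(v):
        return "*"

    privacy_actions = {"age": generalise_age, "gender": suppress}  # one Privacy Action per field
    QUASI_IDENTIFIERS = ("age", "gender")
    K = 2

    def privacy_transformer(source_rows):
        transformed = [
            {field: privacy_actions.get(field, lambda v: v)(value)
             for field, value in row.items()}
            for row in source_rows
        ]
        sizes = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in transformed)
        # The Variant Twin keeps only rows meeting the k-anonymity threshold.
        return [r for r in transformed
                if sizes[tuple(r[q] for q in QUASI_IDENTIFIERS)] >= K]

    variant_twin = privacy_transformer([
        {"age": 34, "gender": "F", "income": 52000},
        {"age": 36, "gender": "M", "income": 61000},
        {"age": 57, "gender": "F", "income": 48000},  # suppressed: k = 1
    ])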

This combination of Privacy Actions and reidentification risk management provides tailored protection against:

  • Unauthorised combining of data with other data sources; which can result in
  • Unauthorised Re-identification of data subjects; while
  • Preserving full data utility that enables compliant secondary uses of data for analytics, AI, and marketing.

The flexibility of this approach enables a privacy engineer to create Variant Twins for different contexts, uses, and risks. This flexibility opens a range of levels of risk-based data protection, from “local protection” for use within a locally controlled enclave or siloed environment to “global protection”, enabling lawful and ethical decentralised data sharing, combining and enrichment.

In summary, Variant Twins:

  • Deliver resistance to reidentification comparable to truly anonymous data, without forcing a data controller to defend the difficult status of “Anonymous” data under the GDPR, by delivering GDPR-compliant Pseudonymised data instead.32
  • Enable data controllers to enforce control over the re-linkability of data.
  • Preserve 100% of the utility of source data.
  • Protect data in use.
  • Activate express statutory benefits.
  • Enable processing under the lawful basis of Legitimate Interests.

Anonos enables organisations to accelerate speed to insight, lawfully and ethically for innovative uses of data.

Enabling Digital Insights with Variant Twins

Figure 8 below shows a simplified example of data elements that Variant Twins can include.

Figure 8:

Digital Twins

The left-hand side shows a Digital Twin of “John J Jeffries” – a digital representation of a specific person.33 This includes direct identifiers like name and location, as well as indirect identifiers like date of birth, zip code, income and loan details. In this example, this Digital Twin is the original source data.

By selecting different Privacy Actions, different versions of Variant Twins can be created to support different use cases and purposes. For example, you might choose to reveal a town or city instead of an exact address or a binned age, income or loan range instead of identifying amounts. This decision would be based on the risks associated with the desired processing of the data.

Variant Twin A has had only limited protections applied, primarily generalisation, along with a format preserving pseudonym for the ID number. Variant Twin B on the other hand has been much more aggressively transformed, with almost all fields pseudonymised. Note that age, title, location and rating are likely represented by deterministic pseudonyms and so this Variant Twin still retains significant analytic value.

Centralised approaches to data protection treat privacy and data utility as irreconcilable objectives. Anonos decentralised data protection enables both goals simultaneously by enforcing use-case-specific risk-based protections embedded in Variant Twins.

Variant Twins enable privacy engineers to achieve the first goal of data protection while also achieving the second goal of maximising data utility. This allows organisations to have it both ways, i.e. enabling them to Have Their Cake and Eat It Too.

We will now walk through how to create Anonos Variant Twins to enable Speed To Insight, Lawfully & Ethically.

Figure 9:

Transform Clear Text into Variant Twin

Figure 9 above illustrates how Anonos Technology works to transform clear text data into a Variant Twin.

Upon ingestion, an R-DDID (a random pseudonym) is created for each record and prepended to it. The R-DDID and the source data are then written to a master index to allow for later use in relinking when authorised. The R-DDID itself is only a pointer to the original record and contains no information value, so it poses no risk when kept with the Variant Twin data: it is re-linkable only by an authorised individual with access to the separately held “additional information.”
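
A minimal sketch of this ingestion step, with illustrative names rather than the BigPrivacy API, might look like this:

    import secrets

    master_index = {}  # the separately held "additional information"

    def ingest(record):
        r_ddid = secrets.token_hex(8)        # a pointer only; carries no information value
        master_index[r_ddid] = dict(record)  # full source record, kept for authorised relinking
        return {"r_ddid": r_ddid, **record}  # the record with its R-DDID prepended

    row = ingest({"name": "John J Jeffries", "age": 37, "income": 52000})
    # Privacy Actions are then applied to `row` to produce the Variant Twin;
    # the R-DDID travels with it, re-linkable only via the master index.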

Figure 10:

Privacy Actions - Privacy Transformer - Variant Twin

The values indicated at the bottom of Figure 10 above in columns A, D, E, F, G, and H indicate data columns that are a part of the Variant Twin.

The indirect identifiers included in this Variant Twin are D and E – “gender” and “age” whereas F, G, and H – representing “income,” “total debt,” and “loan payment score” – are attributes. The direct identifiers (sometimes referred to using the legacy term PII) in columns B and C – “acct_id” and “name” – are omitted from the Variant Twin entirely.

By using field deterministic pseudonyms to represent indirect identifiers such as age ranges and gender, data analysts can process data without knowledge of the actual values, thus allowing the analysis to be more privacy-respectful and less identifying but without sacrificing utility.

This also reduces the risk of conscious or subconscious bias since data analysts cannot see the values underlying the pseudonyms and thus cannot make assumptions about the data subjects.

Figure 11:

Controlled Relinking

Having ingested raw data and created a Variant Twin, we will now walk through controlled relinking (represented by “C” and “D” in Figure 11 above).

Figure 12:

Variant Twin Relinking from Source Data

In Figure 12 above, we show how a data controller is able to relink from a Variant Twin back to the source data when authorised.

At the top of the figure you can see the Variant Twin, and at the bottom you can see the relinked data from the original raw data set. The row replacement Pseudonyms (R-DDIDs) at the top enable you to relink to any or all of the original data values whether or not those values are included in the Variant Twin. This is because each R-DDID serves as a pointer to the entirety of the associated original record via the master index. This patented capability enables multiple layers of abstraction (and privacy) while enabling a data controller to traverse between the layers to gain access to any original source data value for authorised processing. In this way, 100% of the utility of the source data is preserved.
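
The following sketch illustrates the relinking step under the same illustrative assumptions as the ingestion sketch above; authorisation checks are reduced to a single flag for brevity:

    master_index = {  # separately held; one full source record per R-DDID
        "9f2c41aa00b3d7e1": {"name": "John J Jeffries", "age": 37, "income": 52000},
    }

    def relink(variant_twin_row, authorised=False):
        if not authorised:
            raise PermissionError("relinking requires authorised access to the master index")
        # Any or all original values are recoverable, even ones absent from the Variant Twin.
        return master_index[variant_twin_row["r_ddid"]]

    source = relink({"r_ddid": "9f2c41aa00b3d7e1", "age_addid": "b41x"}, authorised=True)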

Figure 13:

External Sharing

Next, let's look at how a group called a “Microsegment” (or “mSeg”) can be created using Variant Twins to enable privacy-respectful data sharing (“E” in Figure 13 above).

Figure 14:

Record-Level Variant Twin - mSeg Variant Twin

A useful way to think of mSegs is as look-alike audiences that are small enough to represent the distinct characteristics, attributes, preferences, activities, behaviours and even location of a real group of data subjects (all of which may be necessary to achieve business objectives from processing data), but large enough that they don’t enable singling out, linking to, or inferences about the identity of individual data subjects.

Each record in a Variant Twin represents a single individual. By comparison, each record in the mSeg represents a small group of individuals that share the same characteristics.

In the example in Figure 14 above, we are using two gender and seven age-range field deterministic pseudonyms (A-DDIDs) to create microsegments (mSegs). Aggregating the original Variant Twin records by each combination of the two gender pseudonyms and the seven age-range pseudonyms results in fourteen mSegs.

This approach can be extended to as many segmentation variables as the data set sample size will support to satisfy the particular mSeg requirements for a specific use case.
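
Under illustrative assumptions (fabricated pseudonym values, Python dictionaries in place of real data sets), the Figure 14 aggregation might be sketched as follows:

    from collections import defaultdict

    variant_twin = [  # record-level rows keyed by gender and age-range A-DDIDs
        {"gender_addid": "g1", "age_addid": "a3", "income": 52000},
        {"gender_addid": "g1", "age_addid": "a3", "income": 61000},
        {"gender_addid": "g2", "age_addid": "a5", "income": 48000},
    ]

    msegs = defaultdict(list)
    for row in variant_twin:
        msegs[(row["gender_addid"], row["age_addid"])].append(row)

    # Each mSeg is released only as aggregates, e.g. average income per segment:
    summary = {seg: sum(r["income"] for r in rows) / len(rows)
               for seg, rows in msegs.items()}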

Figure 15:

Internal Enrichment

Figure 16 below illustrates how mSegs can be used to enable privacy-respectful data sharing between and among data stewards for an improved customised experience for customers while protecting their privacy (represented by “F” in Figure 15 above).

Figure 16:

mSeg Enabled Data Sharing for Enrichment

mSeg-defining fields in a source data set are transformed to deterministic pseudonyms (A-DDIDs) that correspond to identical A-DDIDs in an mSeg Variant Twin that has been prepared for sharing so that it can be used for enrichment. Pseudonym-to-pseudonym matching is then used to enrich the source data set with attribute data in the mSeg Variant Twin.
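
A hedged sketch of this pseudonym-to-pseudonym matching, with fabricated A-DDID values and attribute names, follows:

    mseg_variant_twin = {  # shared by the other data steward, keyed by A-DDID pair
        ("g1", "a3"): {"avg_spend": 120.0, "churn_risk": "low"},
        ("g2", "a5"): {"avg_spend": 95.0, "churn_risk": "high"},
    }

    source_rows = [  # the receiving steward's data, with matching A-DDIDs in place
        {"gender_addid": "g1", "age_addid": "a3", "customer_ref": "r-001"},
    ]

    enriched = [
        {**row, **mseg_variant_twin.get((row["gender_addid"], row["age_addid"]), {})}
        for row in source_rows
    ]
    # enriched[0] now carries avg_spend and churn_risk without exposing any identity.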

Lawful Insights API

Lawful Insights API is a special purpose application of Anonos technology designed for use in protecting data at the edge of a network – outbound by a sending party, or inbound by a receiving party. Utilising the same processes, techniques and technology as BigPrivacy, it leverages API endpoints to reduce friction and to streamline and accelerate the process of safely and securely sending and receiving data for sharing, combining and enrichment, delivering Speed To Insight, Lawfully and Ethically.

When Lawful Insights API is used by a sending party in an “outbound” mode, they first create a Variant Twin for sharing as described above.

The receiving party is then authorised, using their instance of BigPrivacy, to access an API endpoint at the sender’s instance that enables them to retrieve that specific Variant Twin. The Variant Twin is transmitted to them over standard HTTPS with TLS encryption and is imported into their instance of BigPrivacy as what is known as a Shared Data Set.

When Lawful Insights API is used in “inbound” mode, a receiving party first exposes an API endpoint in their instance of Lawful Insights API to a sending party, allowing the sender to transmit over HTTPS a schema that describes the data to be sent. The receiving party then configures a Privacy Transformer that will protect the data to their requirements.

In this case, no master index is created, ensuring that the receiving party can show that they are not in a position to reidentify data subjects.

The receiving party then notifies the sending party to use a second API endpoint to transmit the data over HTTPS. Upon receipt, the data is immediately transformed in memory into Variant Twin form and stored as desired.
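
A hedged sketch of this inbound flow, using Flask as an illustrative HTTP framework, is shown below; the endpoint paths, payload shapes, and the stand-in transform are hypothetical assumptions, not the actual Lawful Insights API contract:

    from flask import Flask, request, jsonify  # pip install flask

    app = Flask(__name__)
    protected_store = []  # only the protected (Variant Twin) form is ever persisted

    def privacy_transformer(row):
        # Stand-in transform: drop direct identifiers. A real deployment would apply
        # the Privacy Actions the receiving party configured for the sender's schema.
        return {k: v for k, v in row.items() if k not in ("name", "email")}

    @app.route("/schema", methods=["POST"])  # hypothetical first endpoint
    def receive_schema():
        schema = request.get_json()
        # ...the receiving party configures a Privacy Transformer for this schema...
        return jsonify({"fields": list(schema.get("fields", []))})

    @app.route("/data", methods=["POST"])  # hypothetical second endpoint
    def receive_data():
        rows = request.get_json()
        # Transformed in memory on receipt; no master index is created, so the
        # receiver never holds the means to reidentify data subjects.
        protected_store.extend(privacy_transformer(r) for r in rows)
        return jsonify({"received": len(rows)})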

See the discussion of “Data Safe Haven #5: Expanded Data Use, Sharing & Combining” in the COMPLIANCE section below for more information about the significant benefits, including reductions in obligations and liabilities, for parties receiving data when using Anonos Lawful Insights API.