Privacy Enforcement

Intent

Apply patient consent and data use restrictions through FHIR security labels and consent resources, enabling granular privacy controls across healthcare data exchanges.

Forces

  • Granular Sharing & Legal Obligations: Healthcare data sharing must respect complex, context-dependent consent and privacy rules.
  • Auditability & Trust: Healthcare systems must maintain comprehensive audit trails for regulatory compliance.

Structure

The Privacy Enforcement pattern applies consent rules and security labels to filter and restrict FHIR data based on patient preferences and regulatory requirements.

Privacy Enforcement Architecture

Key Components

ConsentEngine

Evaluates patient consent resources against access requests

SecurityLabelService

Manages and applies security labels to resources

PolicyDecisionPoint

Makes access control decisions based on consent and policy

DataFilter

Filters response data based on privacy decisions

ObligationHandler

Applies and propagates data use obligations
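The contract between these components can be sketched as Python protocols. The method names and signatures below are illustrative assumptions, not part of the pattern itself:

```python
from typing import Any, Dict, List, Protocol, runtime_checkable

@runtime_checkable
class ConsentEngine(Protocol):
    # Evaluates patient consents against an access request
    def evaluate(self, consents: List[Dict[str, Any]],
                 request: Dict[str, Any]) -> str: ...

@runtime_checkable
class SecurityLabelService(Protocol):
    # Returns the security label codes attached to a resource
    def labels_for(self, resource: Dict[str, Any]) -> List[str]: ...

@runtime_checkable
class PolicyDecisionPoint(Protocol):
    # Combines the consent decision with label-based policy
    def decide(self, consent_decision: str, labels: List[str]) -> str: ...

@runtime_checkable
class DataFilter(Protocol):
    # Removes or redacts resources according to the decision
    def filter(self, resources: List[Dict[str, Any]],
               decision: str) -> List[Dict[str, Any]]: ...

@runtime_checkable
class ObligationHandler(Protocol):
    # Produces the data-use obligations to attach to the response
    def obligations_for(self, decision: str) -> List[str]: ...
```

Expressing the components as protocols keeps each one independently replaceable, which matters when, for example, the PolicyDecisionPoint is an external XACML-style service.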

Behavior

The following sequence shows how privacy rules are applied to a FHIR request:

Privacy Enforcement Sequence

Enforcement Steps

  1. Extract Context
  2. Load Consent
  3. Evaluate Rules
  4. Apply Labels
  5. Filter Data
  6. Attach Obligations
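The six steps above can be sketched end-to-end as a single function. This assumes an opt-in default, a pluggable consent loader, and a simplified sensitive-code set; the field names are illustrative:

```python
from typing import Any, Callable, Dict, List

SENSITIVE = {"ETH", "PSY", "HIV", "STD", "GDIS", "SCA"}

def enforce_privacy(request: Dict[str, Any],
                    resources: List[Dict[str, Any]],
                    load_consents: Callable[[str], List[Dict[str, Any]]]
                    ) -> Dict[str, Any]:
    # 1. Extract context from the request
    context = {"patient_id": request["patient_id"],
               "purpose": request.get("purpose", "TREAT")}

    # 2. Load active consents for the patient
    consents = load_consents(context["patient_id"])

    # 3. Evaluate rules: opt-in, so require a permit provision
    #    whose purpose matches the requester's purpose of use
    def permits(consent: Dict[str, Any]) -> bool:
        prov = consent.get("provision", {})
        purposes = [p.get("code") for p in prov.get("purpose", [])]
        return prov.get("type") == "permit" and context["purpose"] in purposes

    permitted = any(permits(c) for c in consents)

    # 4./5. Apply security labels and filter out sensitive resources
    released = [] if not permitted else [
        r for r in resources
        if not any(s.get("code") in SENSITIVE
                   for s in r.get("meta", {}).get("security", []))
    ]

    # 6. Attach data-use obligations to the response
    return {"resources": released,
            "obligations": ["no-redisclosure"] if permitted else []}
```

A production implementation would also honour provision periods, actors, and nested exceptions, as the evaluator later in this section does.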

Implementation Considerations

Example FHIR Consent resource demonstrating provision-based access rules, security labels, and nested exceptions for granular privacy control.

Consent Resource Structure
{
  "resourceType": "Consent",
  "id": "consent-example-1",
  "status": "active",
  "scope": {
    "coding": [
      {
        "system": "http://terminology.hl7.org/CodeSystem/consentscope",
        "code": "patient-privacy",
        "display": "Privacy Consent"
      }
    ]
  },
  "category": [
    {
      "coding": [
        {
          "system": "http://loinc.org",
          "code": "59284-0",
          "display": "Consent Document"
        }
      ]
    }
  ],
  "patient": {
    "reference": "Patient/12345",
    "display": "Jane Doe"
  },
  "dateTime": "2024-01-15T10:30:00Z",
  "performer": [
    {
      "reference": "Patient/12345"
    }
  ],
  "organization": [
    {
      "reference": "Organization/acme-healthcare",
      "display": "ACME Healthcare"
    }
  ],
  "policyRule": {
    "coding": [
      {
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
        "code": "OPTIN",
        "display": "Opt-in"
      }
    ]
  },
  "provision": {
    "type": "permit",
    "period": {
      "start": "2024-01-15",
      "end": "2025-01-15"
    },
    "actor": [
      {
        "role": {
          "coding": [
            {
              "system": "http://terminology.hl7.org/CodeSystem/v3-ParticipationType",
              "code": "PRCP",
              "display": "primary information recipient"
            }
          ]
        },
        "reference": {
          "reference": "Organization/research-institute",
          "display": "Research Institute"
        }
      }
    ],
    "action": [
      {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/consentaction",
            "code": "access",
            "display": "Access"
          }
        ]
      },
      {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/consentaction",
            "code": "use",
            "display": "Use"
          }
        ]
      }
    ],
    "securityLabel": [
      {
        "system": "http://terminology.hl7.org/CodeSystem/v3-Confidentiality",
        "code": "R",
        "display": "Restricted"
      }
    ],
    "purpose": [
      {
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
        "code": "HRESCH",
        "display": "healthcare research"
      }
    ],
    "class": [
      {
        "system": "http://hl7.org/fhir/resource-types",
        "code": "Observation",
        "display": "Observation"
      },
      {
        "system": "http://hl7.org/fhir/resource-types",
        "code": "Condition",
        "display": "Condition"
      }
    ],
    "provision": [
      {
        "type": "deny",
        "class": [
          {
            "system": "http://hl7.org/fhir/resource-types",
            "code": "MedicationRequest",
            "display": "MedicationRequest"
          }
        ],
        "securityLabel": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
            "code": "PSY",
            "display": "psychiatry disorder information sensitivity"
          }
        ]
      },
      {
        "type": "deny",
        "class": [
          {
            "system": "http://hl7.org/fhir/resource-types",
            "code": "DiagnosticReport",
            "display": "DiagnosticReport"
          }
        ],
        "securityLabel": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
            "code": "HIV",
            "display": "HIV/AIDS information sensitivity"
          }
        ]
      }
    ]
  },
  "verification": [
    {
      "verified": true,
      "verifiedWith": {
        "reference": "Patient/12345"
      },
      "verificationDate": "2024-01-15"
    }
  ]
}
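The nested deny provisions in the resource above can be enumerated with a short traversal. This sketch collects only resource class and security label; a real evaluator must also honour period, actor, and action constraints:

```python
from typing import Any, Dict, List, Tuple

def collect_deny_exceptions(consent: Dict[str, Any]) -> List[Tuple[str, str]]:
    """Return (resourceType, securityLabel) pairs denied by nested provisions."""
    exceptions = []
    root = consent.get("provision", {})
    for nested in root.get("provision", []):
        if nested.get("type") != "deny":
            continue
        # A missing class or label list means "any", modelled here as None
        classes = [c.get("code") for c in nested.get("class", [])] or [None]
        labels = [l.get("code") for l in nested.get("securityLabel", [])] or [None]
        for cls in classes:
            for lab in labels:
                exceptions.append((cls, lab))
    return exceptions
```

Applied to the example consent, this yields the two carve-outs: MedicationRequest resources labelled PSY and DiagnosticReport resources labelled HIV.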

Security Label Evaluation

Evaluates security labels on FHIR resources against consent provisions and confidentiality hierarchies to make access control decisions.

Security Label Evaluation
from typing import Dict, List, Optional
from dataclasses import dataclass
from enum import Enum

class ConsentDecision(Enum):
    PERMIT = "permit"
    DENY = "deny"
    NO_DECISION = "no-decision"

class SecurityLabelCode(Enum):
    # Confidentiality codes
    UNRESTRICTED = "U"
    LOW = "L"
    MODERATE = "M"
    NORMAL = "N"
    RESTRICTED = "R"
    VERY_RESTRICTED = "V"

    # Sensitivity codes
    ETH = "ETH"       # Substance abuse
    PSY = "PSY"       # Psychiatry
    HIV = "HIV"       # HIV/AIDS
    STD = "STD"       # Sexually transmitted disease
    GDIS = "GDIS"     # Genetic disease
    SCA = "SCA"       # Sickle cell anemia

@dataclass
class SecurityLabel:
    """Represents a security label on a FHIR resource"""
    system: str
    code: str
    display: Optional[str] = None

    @classmethod
    def from_fhir(cls, coding: Dict) -> 'SecurityLabel':
        return cls(
            system=coding.get('system', ''),
            code=coding.get('code', ''),
            display=coding.get('display')
        )

@dataclass 
class AccessContext:
    """Context for access decision"""
    user_id: str
    user_roles: List[str]
    purpose_of_use: str
    requesting_organization: str
    patient_id: str

class SecurityLabelEvaluator:
    """
    Evaluates security labels against consent provisions
    to make access control decisions.
    """

    # Confidentiality hierarchy (higher = more restrictive)
    CONFIDENTIALITY_HIERARCHY = ['U', 'L', 'M', 'N', 'R', 'V']

    def __init__(self, consent_repository):
        self.consent_repo = consent_repository

    async def evaluate_access(self,
                             resource: Dict,
                             context: AccessContext) -> ConsentDecision:
        """
        Evaluate whether access to a resource should be permitted
        based on security labels and consent.
        """
        # Extract security labels from resource
        resource_labels = self._extract_labels(resource)

        # Get active consents for the patient
        consents = await self.consent_repo.find_active_consents(context.patient_id)

        if not consents:
            # No consent found - apply default policy
            return self._apply_default_policy(resource_labels)

        # Evaluate each consent
        for consent in consents:
            decision = self._evaluate_consent(consent, resource, resource_labels, context)
            if decision != ConsentDecision.NO_DECISION:
                return decision

        return ConsentDecision.NO_DECISION

    def _extract_labels(self, resource: Dict) -> List[SecurityLabel]:
        """Extract security labels from resource metadata."""
        labels = []

        meta = resource.get('meta', {})
        for security in meta.get('security', []):
            labels.append(SecurityLabel.from_fhir(security))

        return labels

    def _evaluate_consent(self,
                         consent: Dict,
                         resource: Dict,
                         labels: List[SecurityLabel],
                         context: AccessContext) -> ConsentDecision:
        """
        Evaluate a single consent against the resource and context.
        """
        provision = consent.get('provision', {})

        # Check if provision applies to this context
        if not self._provision_applies(provision, resource, context):
            return ConsentDecision.NO_DECISION

        # Get base decision
        provision_type = provision.get('type', 'permit')
        base_decision = (ConsentDecision.PERMIT 
                        if provision_type == 'permit' 
                        else ConsentDecision.DENY)

        # Check nested provisions (exceptions)
        for nested_provision in provision.get('provision', []):
            if self._nested_provision_applies(nested_provision, resource, labels, context):
                nested_type = nested_provision.get('type', 'deny')
                return (ConsentDecision.DENY 
                       if nested_type == 'deny' 
                       else ConsentDecision.PERMIT)

        return base_decision

    def _provision_applies(self,
                          provision: Dict,
                          resource: Dict,
                          context: AccessContext) -> bool:
        """Check if provision applies to current context."""
        # Check actor (who is requesting)
        actors = provision.get('actor', [])
        if actors:
            actor_refs = [a.get('reference', {}).get('reference', '') for a in actors]
            if not any(self._matches_actor(ref, context) for ref in actor_refs):
                return False

        # Check purpose
        purposes = provision.get('purpose', [])
        if purposes:
            purpose_codes = [p.get('code') for p in purposes]
            if context.purpose_of_use not in purpose_codes:
                return False

        # Check resource class
        classes = provision.get('class', [])
        if classes:
            class_codes = [c.get('code') for c in classes]
            if resource.get('resourceType') not in class_codes:
                return False

        return True

    def _nested_provision_applies(self,
                                  provision: Dict,
                                  resource: Dict,
                                  labels: List[SecurityLabel],
                                  context: AccessContext) -> bool:
        """Check if nested provision (exception) applies."""
        # Check resource class
        classes = provision.get('class', [])
        if classes:
            class_codes = [c.get('code') for c in classes]
            if resource.get('resourceType') not in class_codes:
                return False

        # Check security labels
        provision_labels = provision.get('securityLabel', [])
        if provision_labels:
            provision_label_codes = {l.get('code') for l in provision_labels}
            resource_label_codes = {l.code for l in labels}

            # Check if any provision label matches resource labels
            if not provision_label_codes.intersection(resource_label_codes):
                return False

        return True

    def _matches_actor(self, actor_ref: str, context: AccessContext) -> bool:
        """Check if actor reference matches context."""
        # Compare exact references; a substring check would false-match,
        # e.g. "Organization/hospital-main-2" against "hospital-main"
        if actor_ref == f"Organization/{context.requesting_organization}":
            return True
        return actor_ref == f"Practitioner/{context.user_id}"

    def _apply_default_policy(self, labels: List[SecurityLabel]) -> ConsentDecision:
        """Apply default policy when no consent exists."""
        # Default: permit access to non-sensitive data
        sensitive_codes = {'ETH', 'PSY', 'HIV', 'STD', 'GDIS', 'SCA'}

        for label in labels:
            if label.code in sensitive_codes:
                return ConsentDecision.DENY

        return ConsentDecision.PERMIT

    def compare_confidentiality(self, label1: str, label2: str) -> int:
        """
        Compare two confidentiality codes.
        Returns: -1 if label1 < label2, 0 if equal, 1 if label1 > label2
        """
        try:
            idx1 = self.CONFIDENTIALITY_HIERARCHY.index(label1)
            idx2 = self.CONFIDENTIALITY_HIERARCHY.index(label2)

            if idx1 < idx2:
                return -1
            elif idx1 > idx2:
                return 1
            else:
                return 0
        except ValueError:
            return 0

    def get_minimum_clearance(self, labels: List[SecurityLabel]) -> str:
        """
        Determine minimum clearance level needed for resource.
        Returns the highest confidentiality code from labels.
        """
        max_level = 'U'  # Start with unrestricted

        for label in labels:
            if label.system.endswith('/v3-Confidentiality'):
                if self.compare_confidentiality(label.code, max_level) > 0:
                    max_level = label.code

        return max_level

    def user_has_clearance(self, 
                          user_clearance: str,
                          required_clearance: str) -> bool:
        """Check if user clearance meets required level."""
        return self.compare_confidentiality(user_clearance, required_clearance) >= 0


# Example usage
async def example_evaluation():
    evaluator = SecurityLabelEvaluator(consent_repository=None)

    # Sample resource with security labels
    resource = {
        "resourceType": "Observation",
        "id": "lab-result-123",
        "meta": {
            "security": [
                {
                    "system": "http://terminology.hl7.org/CodeSystem/v3-Confidentiality",
                    "code": "R",
                    "display": "Restricted"
                },
                {
                    "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
                    "code": "HIV",
                    "display": "HIV/AIDS information sensitivity"
                }
            ]
        }
    }

    # Access context
    context = AccessContext(
        user_id="dr-smith",
        user_roles=["Practitioner"],
        purpose_of_use="TREAT",  # Treatment
        requesting_organization="hospital-main",
        patient_id="12345"
    )

    # Would evaluate access
    # decision = await evaluator.evaluate_access(resource, context)
    # print(f"Access decision: {decision.value}")

Data Filtering

Filtering approaches for different scenarios:

  • Resource-level: Exclude entire resources based on consent
  • Element-level: Redact specific elements within resources
  • Bundle-level: Filter resources from search results
  • Reference masking: Hide references to restricted resources
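The bundle-level and element-level approaches can be sketched as follows. The REDACTED tag uses the v3-ObservationValue code system; the helper names and the choice to tag redacted copies are assumptions of this sketch:

```python
from typing import Any, Callable, Dict, Set

def filter_bundle(bundle: Dict[str, Any],
                  is_permitted: Callable[[Dict[str, Any]], bool]
                  ) -> Dict[str, Any]:
    """Bundle-level filtering: drop entries whose resource is denied."""
    kept = [e for e in bundle.get("entry", [])
            if is_permitted(e.get("resource", {}))]
    # Return a shallow copy; the caller's bundle is left untouched
    return dict(bundle, entry=kept, total=len(kept))

def redact_elements(resource: Dict[str, Any],
                    elements: Set[str]) -> Dict[str, Any]:
    """Element-level redaction: strip named elements, tag the copy REDACTED."""
    redacted = {k: v for k, v in resource.items() if k not in elements}
    meta = dict(redacted.get("meta", {}))
    meta["security"] = list(meta.get("security", [])) + [{
        "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
        "code": "REDACTED",
    }]
    redacted["meta"] = meta
    return redacted
```

Tagging redacted copies lets downstream consumers distinguish "element absent" from "element withheld", which supports the obligation propagation described above.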

Related Patterns

  • Security Strategy: Security Strategy provides identity context needed for consent evaluation
  • Broker: Broker applies Privacy Enforcement rules during request routing
  • Audit & Provenance Chain: Privacy decisions are logged via Audit & Provenance Chain
  • De-Identification Adapter: De-Identification Adapter may be triggered based on privacy rules for secondary use

Benefits

  • Granular Control: Support for complex, context-dependent consent rules
  • Standards-Based: Built on FHIR Consent and Security Labels
  • Transparent: Clear audit trail of privacy decisions
  • Flexible: Supports various consent models and regulatory frameworks
  • Portable: Obligations travel with data for downstream enforcement

Trade-offs

  • Performance: Consent evaluation adds processing overhead
  • Complexity: Complex consent rules can be difficult to manage
  • Completeness: Requires comprehensive consent capture workflows
  • Maintenance: Consent rules must be kept current with regulations

Default Policies

Implement sensible default policies for scenarios where explicit consent is not available. Consider opt-out vs opt-in defaults based on jurisdiction requirements.
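The opt-in vs opt-out distinction can be captured in a small default-policy function. The jurisdiction flag and the sensitive-code set mirror the evaluator earlier in this section; both are simplifications:

```python
from typing import List

SENSITIVE = {"ETH", "PSY", "HIV", "STD", "GDIS", "SCA"}

def default_decision(labels: List[str], opt_in_jurisdiction: bool) -> str:
    """Decision when no explicit consent exists for the patient."""
    if opt_in_jurisdiction:
        # Opt-in: absence of consent means absence of permission
        return "deny"
    # Opt-out: permit by default, but never sensitive categories
    return "deny" if any(code in SENSITIVE for code in labels) else "permit"
```

Keeping the default policy in one place makes the jurisdictional choice explicit and auditable rather than scattered across filters.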