The 2023-2024 edition of the LLM Privacy Project focused on differential privacy, disclosure risk metrics, and practical applications in AI-driven environments.
The project was divided into three key phases:
Phase 1: We analyzed inconsistencies in how differential privacy and its parameters are defined. Reviewing 25 papers, we grouped the definitions into four themes: noise injection, bounding information, hiding individuals, and miscellaneous. We also identified fundamental challenges in defining differential privacy and implementing it in practice. (The standard formal definition is restated after the phase descriptions for reference.)
Phase 2: We conducted five experiments exploring the impact of differential privacy on both privacy protection and data utility. The experiments covered the relationship between pre-processing and post-processing techniques, integration with k-anonymity, data utility, statistical comparisons, and the feasibility of comparing differential privacy with k-anonymity. (A minimal noise-injection sketch illustrating the privacy-utility trade-off also follows the phase descriptions.)
Phase 3: This phase examined how differential privacy aligns with existing Canadian privacy law, including PIPEDA and Bill C-27. We provided guidance on how differential privacy can be used within anonymization frameworks and recommended clearer definitions and best practices for policymakers.
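For reference, the formal definition that the reviewed papers build on (stated here as standard background, not as the project's own formulation): a randomized mechanism M satisfies ε-differential privacy if, for all datasets D and D' differing in a single record and all measurable output sets S,

    \Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S].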
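As a hedged illustration of the noise-injection theme from Phase 1 and the privacy-utility trade-off studied in Phase 2, the sketch below releases a count query via the Laplace mechanism. The function name, dataset, and epsilon values are illustrative assumptions, not code or data from the project's experiments.

    import numpy as np

    def laplace_count(records, predicate, epsilon):
        # A count query has sensitivity 1: adding or removing one record
        # changes the count by at most 1, so Laplace noise with scale
        # 1/epsilon yields epsilon-differential privacy.
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Illustrative data: smaller epsilon means stronger privacy but a noisier answer.
    ages = [23, 35, 41, 29, 52, 60, 18, 47]
    for eps in (0.1, 1.0, 10.0):
        print(eps, laplace_count(ages, lambda a: a >= 40, eps))

Smaller values of ε add more noise to the released count, which is precisely the privacy-utility tension the Phase 2 experiments quantified.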
You can download the final report for the 2023-2024 edition here:
Principal Investigator: Rafal Kulik, PhD, Professor of Mathematics and Statistics at the University of Ottawa.
Co-investigator: Teresa Scassa, PhD, Canada Research Chair in Information Law and Policy.
Privacy Analytics provided advisory support and coordinated access to relevant data and computing infrastructure.