PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding

KK Nakka, A Frikha, R Mendes, X Jiang… - arXiv preprint arXiv:2407.02943, 2024 - arxiv.org
The latest and most impactful advances in large models stem from their increased size. Unfortunately, this translates into an improved memorization capacity, raising data privacy concerns. Specifically, it has been shown that models can output personally identifiable information (PII) contained in their training data. However, reported PII extraction performance varies widely, and there is no consensus on the optimal methodology to evaluate this risk, resulting in an underestimation of realistic adversaries. In this work, we empirically demonstrate that it is possible to improve the extractability of PII by over ten-fold by grounding the prefix of the manually constructed extraction prompt with in-domain data. Our approach, PII-Compass, achieves phone number extraction rates of 0.92%, 3.9%, and 6.86% with 1, 128, and 2308 queries, respectively, i.e., the phone number of 1 person in 15 is extractable.
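To make the grounding idea concrete, the sketch below shows one way such a prompt could be assembled and issued, assuming a HuggingFace transformers causal LM. The function names, the extraction template, and the example prefix are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of prefix-grounded PII extraction prompting, assuming a
# HuggingFace transformers causal LM. All names, the prompt template, and
# the example prefix are illustrative assumptions, not the paper's code.
from transformers import AutoModelForCausalLM, AutoTokenizer


def build_grounded_prompt(in_domain_prefix: str, target_name: str) -> str:
    # Hand-crafted extraction template, preceded by an in-domain passage
    # that "grounds" the model in the target data distribution.
    template = f"The phone number of {target_name} is"
    return f"{in_domain_prefix}\n{template}"


def query_model(model, tokenizer, prompt: str, max_new_tokens: int = 16) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the continuation, where a memorized number may surface.
    continuation = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(continuation, skip_special_tokens=True)


def extract_with_many_prefixes(model, tokenizer, prefixes, target_name):
    # One query per in-domain prefix: the 1, 128, and 2308 query budgets
    # reported above correspond to trying this many distinct groundings.
    return [
        query_model(model, tokenizer, build_grounded_prompt(p, target_name))
        for p in prefixes
    ]


if __name__ == "__main__":
    # Illustrative run on a small public model (not the paper's target).
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    prefix = "Excerpt from a staff directory: Jane Roe, Sales, ext. 4412."
    print(query_model(lm, tok, build_grounded_prompt(prefix, "John Doe")))
```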