
Fast-MIA: Efficient and Scalable Membership Inference for LLMs


Shotaro Ishihara

May 12, 2026

Transcript

  1. Fast-MIA: Efficient and Scalable Membership Inference for LLMs
     ACL 2026 System Demonstrations (Hiromu Takahashi and Shotaro Ishihara)
     https://github.com/Nikkei/fast-mia

     uv run --with vllm python main.py --config config/sample.yaml

     Background: Membership inference attacks (MIA) ask whether a given text
     was included in an LLM's pre-training data, typically by calculating the
     log-likelihood of the text under the model. Various methods have been
     proposed.

     Problems:
     1. Growing computational demands for individual MIA methods.
     2. Redundant computation across methods when benchmarking.

     Contributions:
     1. High-throughput batch inference using the vLLM backend (about 5 times
        faster for individual methods).
     2. A cross-method caching architecture: a shared cache lets inference
        results be reused across methods, reducing total processing time.

     Implemented methods: LOSS, PPL/zlib, Min-K% Prob, DC-PDD, Lowercase,
     PAC, ReCaLL, Con-ReCall, SaMIA, ……

     Evaluation: AUC confirms reproducibility, and inference time
     (# inferences) confirms speed (Fast-MIA vs. a Transformers-based
     implementation); the cache is working.

     Future directions (contributions welcome):
     • More implemented methods, including dataset inference.
     • Model diversity, such as encoder-only and diffusion language models.
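The cross-method reuse idea can be sketched in a few lines: query the model once per text for per-token log-probabilities, then derive several MIA scores from that single cached result. This is a minimal illustration only, not the actual Fast-MIA API; the function name, the 20% choice for Min-K%, and the zlib normalization variant (loss divided by compressed byte length) are assumptions.

```python
import math
import zlib

def mia_scores(token_logprobs, text, k_percent=0.2):
    """Derive several MIA scores from one cached list of per-token
    log-probabilities, so the model is queried only once per text.
    (Hypothetical sketch, not the Fast-MIA API.)"""
    n = len(token_logprobs)
    loss = -sum(token_logprobs) / n        # LOSS attack: avg. negative log-likelihood
    ppl = math.exp(loss)                   # perplexity, from the same cached values
    # zlib attack (one common variant): normalize the loss by the
    # text's zlib-compressed byte length.
    zlib_score = loss / len(zlib.compress(text.encode("utf-8")))
    # Min-K% Prob: average log-probability of the k% least likely tokens.
    k = max(1, int(k_percent * n))
    min_k = sum(sorted(token_logprobs)[:k]) / k
    return {"loss": loss, "ppl": ppl, "zlib": zlib_score, "min_k%": min_k}

# Toy usage with made-up per-token log-probabilities for a short text.
scores = mia_scores([-0.5, -2.0, -0.1, -4.0, -0.3], "hello world")
```

Because every score above reads from the same `token_logprobs`, adding another method to a benchmark costs no extra model inference — which is the point of the shared cache.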