This book chapter is published open access.
The key to the success of differential privacy, now the gold standard for privacy-preserving data analysis, is the ability to quantify and reason about cumulative privacy loss over many differentially private interactions. When upper bounds on cumulative privacy loss are loose, any deployment of the algorithms must be correspondingly conservative, and under high levels of composition much potential utility is lost. We survey two general approaches to recovering utility: privacy amplification methods, which are algorithmic, and definitional methods, which admit a wider class of algorithms and lead to tighter analyses of existing algorithms.
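To make the utility gap concrete, the following is a minimal sketch (not from the chapter) comparing two standard bounds on the cumulative privacy loss of k repeated ε-DP queries: basic composition, where the ε's add linearly, and the advanced composition theorem of Dwork, Rothblum, and Vadhan, which scales roughly with √k at the cost of a small additive δ'. The parameter values are illustrative assumptions.

```python
import math

def basic_composition(eps: float, k: int) -> float:
    # Basic composition: k runs of an eps-DP mechanism are (k * eps)-DP.
    return k * eps

def advanced_composition(eps: float, k: int, delta_prime: float) -> float:
    # Advanced composition theorem: k runs of an eps-DP mechanism are
    # (eps', delta')-DP with
    #   eps' = eps * sqrt(2k ln(1/delta')) + k * eps * (e^eps - 1).
    return eps * math.sqrt(2 * k * math.log(1 / delta_prime)) \
        + k * eps * (math.exp(eps) - 1)

# Illustrative setting: 100 queries, each 0.1-DP, failure probability 1e-6.
eps, k, delta_prime = 0.1, 100, 1e-6
print(basic_composition(eps, k))                 # linear in k: 10.0
print(advanced_composition(eps, k, delta_prime)) # roughly sqrt(k): ~6.3
```

Even this classical improvement is substantial for large k; the amplification and definitional methods the chapter surveys tighten the accounting further.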