We show that distribution learning under local differential privacy is deeply connected to combinatorial objects known as balanced incomplete block designs; in fact, there is a one-to-one correspondence between optimal protocols and balanced incomplete block designs.
Learning a single quantile (such as the median) non-adaptively is as hard as learning the entire CDF; this motivates our introduction of adaptive algorithms for quantile estimation under local and shuffle differential privacy.
Algorithms are often not analysed in the low-privacy regime, despite its relevance to the shuffle model and its widespread use in practice. Our improved analysis of existing algorithms for locally private histograms shows that they achieve optimal error in the low-privacy regime, and that they can be used to derive optimal high-privacy algorithms in the shuffle model of differential privacy.