Privacy Preservation, Michael Rossbory, SCCH
In our work package we research novel approaches that allow statistical and computational analysis of sensitive or private data while preserving the privacy of the data owner. In the first period of the project, we developed a novel approach that achieves a high level of privacy by adding noise to the data in an optimal way, addressing the privacy-accuracy trade-off.
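The optimal mechanism itself is not detailed here; as a minimal illustration of the underlying idea, the sketch below uses the classic Laplace mechanism for a differentially private mean. The noise scale, set by the privacy budget epsilon and the query sensitivity, makes the privacy-accuracy trade-off explicit. All names and parameter values are illustrative assumptions, not the project's actual mechanism.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling of a zero-mean Laplace(scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, lower, upper):
    """Differentially private mean of bounded values.

    The sensitivity of the mean of n values in [lower, upper] is
    (upper - lower) / n, so Laplace noise with scale sensitivity/epsilon
    gives epsilon-differential privacy. A larger epsilon means less
    noise and better accuracy but weaker privacy: the trade-off.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_sample(sensitivity / epsilon)

# Hypothetical example query over a small sensitive dataset.
ages = [23, 35, 41, 29, 52, 38, 46, 31]
print(private_mean(ages, epsilon=1.0, lower=0, upper=100))
```

Repeating the query with different epsilon values shows the trade-off directly: the noisy estimate concentrates around the true mean (36.875 here) as epsilon grows.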
In the second period we focused on novel approaches to distributed privacy-preserving transfer and multi-task learning. Using our optimal noise-adding mechanism to keep the perturbation of the data as small as possible, we developed a framework that ensures a high level of privacy without degrading learning performance, handles high-dimensional data and heterogeneous domains, and allows the target domain model to be learned without requiring access to the private training data of the source domain.
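As a hypothetical sketch of this kind of pipeline, not the project's actual framework: a source party trains a model, releases only noise-perturbed parameters, and the target party adapts them on its own small dataset, so the raw source data never leaves its owner. The ridge-regression setup and all function names below are illustrative assumptions.

```python
import numpy as np

def train_linear(X, y, reg=1e-2):
    # Ridge-regression closed form: (X^T X + reg*I)^-1 X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

def release_private_model(w, noise_scale, rng):
    # Only the perturbed parameters leave the source domain;
    # the raw source training data is never shared.
    return w + rng.laplace(scale=noise_scale, size=w.shape)

def adapt_to_target(w_source, X_t, y_t, reg=1.0):
    # Biased ridge regression: shrink toward the (noisy) source model
    # instead of toward zero -- a simple hypothesis-transfer step.
    d = X_t.shape[1]
    A = X_t.T @ X_t + reg * np.eye(d)
    b = X_t.T @ y_t + reg * w_source
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
# Synthetic source task and a smaller, slightly shifted target task.
w_true = np.array([1.0, -2.0, 0.5])
X_s = rng.normal(size=(500, 3)); y_s = X_s @ w_true + 0.1 * rng.normal(size=500)
X_t = rng.normal(size=(20, 3));  y_t = X_t @ (w_true + 0.1) + 0.1 * rng.normal(size=20)

w_noisy = release_private_model(train_linear(X_s, y_s), noise_scale=0.05, rng=rng)
w_target = adapt_to_target(w_noisy, X_t, y_t)
```

The design choice mirrored here is that privacy noise is applied once to the released model, so the target learner benefits from the source task while the source data itself stays private.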
In addition to achieving privacy by adding noise to the data, we also addressed the problem of practical, secure, privacy-preserving distributed machine learning (including deep learning) using fully homomorphic encryption.
Since private data is distributed, and the training data may directly or indirectly contain information about it, we propose an architecture and a methodology that mitigate the impracticality of fully homomorphic encryption, which arises from its large computational overhead, by using very fast gate-by-gate bootstrapping and by introducing a learning scheme that requires homomorphic computation of only efficient-to-evaluate functions.
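Gate-by-gate bootstrapping is beyond a short sketch, but the core idea of computing on data one cannot read can be illustrated with a toy Paillier cryptosystem, which is only additively homomorphic and uses deliberately tiny, insecure parameters. This is a stand-in assumption for illustration, not the fully homomorphic scheme described above, which supports arbitrary functions.

```python
import math
import random

# Toy Paillier cryptosystem: an evaluator can combine ciphertexts so
# that decryption yields the SUM of the plaintexts, without ever seeing
# them. Parameters here are tiny and insecure, purely for illustration.
p, q = 293, 433            # demo primes; real deployments need >= 1024-bit primes
n = p * q
n2 = n * n
g = n + 1                  # standard generator choice
lam = math.lcm(p - 1, q - 1)
# mu = modular inverse of L(g^lam mod n^2), where L(x) = (x - 1) // n.
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(17), encrypt(25)
print(decrypt((c1 * c2) % n2))   # -> 42
```

A fully homomorphic scheme additionally supports multiplication (and hence arbitrary circuits), at a much higher cost per operation; that overhead is exactly what the fast bootstrapping and the restriction to efficient-to-evaluate functions are meant to keep practical.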