Summary: Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data stays local and a shared model is trained from locally computed updates. In contrast with most work on distributed learning, in this scenario (i) data is split vertically, i.e., by features, (ii) only one data provider knows the target variable, and (iii) entities are not linked across the data providers.
We describe a three-party end-to-end solution in two phases: privacy-preserving entity resolution, followed by federated logistic regression over messages encrypted with an additively homomorphic scheme. The protocol is secure against an honest-but-curious adversary. The system allows learning without exposing data in the clear or revealing which entities the data providers have in common. Our implementation is as accurate as a naive non-private solution that unifies the data in one place, and scales to problems with millions of entities and hundreds of features.
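The additively homomorphic property the second phase relies on can be illustrated with a toy Paillier instance. This is a minimal sketch with insecure parameter sizes, not the paper's implementation: it only shows why parties can aggregate encrypted model updates, since multiplying ciphertexts adds the underlying plaintexts and exponentiating a ciphertext scales its plaintext.

```python
from math import gcd
from random import randrange

# Toy Paillier cryptosystem (tiny primes, illustration only; real
# deployments use primes of 1024+ bits).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                           # valid because g = n + 1

def encrypt(m):
    # Ciphertext c = g^m * r^n mod n^2 for random r coprime to n.
    r = randrange(1, n)
    while gcd(r, n) != 1:
        r = randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # Plaintext m = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n.
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
print(decrypt((encrypt(20) * encrypt(22)) % n2))  # 42

# Scalar multiplication: exponentiating a ciphertext scales the plaintext,
# which is what lets a party apply known weights to encrypted updates.
print(decrypt(pow(encrypt(3), 5, n2)))            # 15
```

These two operations suffice for the linear algebra of gradient updates in logistic regression, which is why an additively homomorphic scheme (rather than fully homomorphic encryption) is enough for this setting.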