Concept
Privacy-preserving machine learning models for privacy-sensitive industries.
Longer Description
We believe privacy-preserving ML will become increasingly important as the field expands into the last corners of industry that are extremely privacy sensitive, such as healthcare and security. As we collect more and more data from wearables, EHR reports, and hospital devices, using ML in settings where HIPAA compliance must also be maintained is key.
Other Thoughts on GTM/Founder Profiles, etc.
- We’ve looked at many companies in this space but struggled with how a standalone federated learning offering integrates into a fuller, longer-term stack of ML tools; realistically, we think FL combined with other privacy-preserving techniques, such as differential privacy and homomorphic encryption, will be the best solution (a rough sketch of how FL and DP compose is shown after this list).
- We think forward-deployed engineers, in the style of Palantir, represent an interesting GTM for sectors that are often slightly tech-averse and could use help discovering where FL would be best employed.
- The traditional statistical assumptions behind many ML models (i.e., that the data is independent and identically distributed) don’t always hold in federated learning, so how we account for this, and which use cases are the best fit, is something we’re still thinking about. (This piece by DataFleets, a privacy-preserving data engine, gives a great illustrative example of non-IID data if you’re not familiar.) Edgify, for example, has proposed federated curvature, which adds a penalty term to each client’s loss function, compelling all local models to converge toward a shared optimum (sketched second, below).
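
To make the FL-plus-DP combination concrete, here is a minimal sketch of one federated averaging round in which the server only ever sees clipped, noised client updates. It is illustrative only: the linear model, the function names, and the clipping/noise parameters are our assumptions, and a real deployment would calibrate the noise to a formal (ε, δ) privacy budget.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    # Bound the update's L2 norm, then add Gaussian noise -- the core
    # mechanism behind DP-FedAvg-style training. Parameters are illustrative.
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_std * clip_norm, update.shape)

def local_sgd(weights, X, y, lr=0.01, steps=20):
    # Toy local training: gradient steps on a least-squares objective.
    w = weights.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_round(global_w, client_datasets):
    # Each client trains on data that never leaves its premises; the server
    # aggregates only the privatized deltas (FedAvg over noised updates).
    deltas = [clip_and_noise(local_sgd(global_w, X, y) - global_w)
              for X, y in client_datasets]
    return global_w + np.mean(deltas, axis=0)

# Usage: four hypothetical "hospitals," each holding its own private dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
```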
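And to illustrate the federated-curvature idea on non-IID data: each client’s loss gains a quadratic penalty, weighted by its peers’ diagonal Fisher information estimates, that pulls local parameters toward values the other clients found important. This is a sketch in the spirit of the approach, not Edgify’s implementation; the linear model, the names, and λ are our assumptions.

```python
import numpy as np

def fedcurv_loss(w, X, y, peer_weights, peer_fishers, lam=0.1):
    # Ordinary local objective: mean squared error on this client's data.
    task_loss = np.mean((X @ w - y) ** 2)
    # Curvature penalty: sum over peers j of (w - w_j)^T diag(F_j) (w - w_j),
    # where F_j is peer j's diagonal Fisher estimate, exchanged alongside
    # model weights each round. Large entries in F_j mark parameters peer j
    # considers important, so drifting away from them costs more.
    penalty = sum(float(F_j @ (w - w_j) ** 2)
                  for w_j, F_j in zip(peer_weights, peer_fishers))
    return task_loss + lam * penalty
```

The appeal is that the Fisher diagonal is cheap to compute and small to transmit, so the penalty curbs client drift on non-IID data without any raw data leaving the device.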
Comparable Companies