Addressing Privacy Challenges Associated with Artificial Intelligence Development

What are the key privacy challenges associated with artificial intelligence (AI) development and why should investors care?

While investment is pouring into AI, adoption and expansion are already being constrained by concerns over how data is gathered and used to train AI models. On the user side, enterprises are self-restricting their use of AI over intellectual property and privacy concerns, which reduces the value they extract from AI initiatives. Meanwhile, developers have repeatedly been blocked from entering entire markets.

This is the rub: AI carries risks around IP protection, privacy and security, and data usage, and any one of them could be a showstopper for AI providers, users, and their investors. The investor community must care because the success or failure of its investments will hinge on how well enterprises rise to this challenge.

Investors are eager to see AI companies build trust. What specific actions can companies take to improve AI safety, transparency, and responsible data practices?

AI providers’ true differentiation lies in their ability to harness data effectively – including proprietary and sensitive data – which will require working with third parties such as their customers or data providers. Some companies will cut deals with AI providers to make their data available for training models, as Dotdash Meredith (a magazine publisher) did with OpenAI. But companies with more sensitive data likely won’t, or can’t, take this route.

Those companies will have to turn to what I call “Secure Collaborative AI,” which is built on privacy enhancing technologies (PETs). Just as ecommerce only took off once everyone was satisfied that their credit card transactions were protected online, so it will be with AI: PETs will unlock its true value by protecting both the data and the models when organizations collaborate, allowing each to be used to its full extent.
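
To make that concrete, here is a minimal sketch of one PET building block: additive secret sharing, which underpins secure multi-party computation (MPC). Each organization splits its private value into random shares so that only the combined aggregate is ever revealed. The parties, values, and field modulus below are hypothetical, and production systems use vetted MPC frameworks rather than hand-rolled code like this.

```python
import random

# Toy additive secret sharing over a prime field – a building block of
# secure multi-party computation (MPC). Each party splits its private
# value into random shares; combining everyone's shares reveals only the
# sum, never any individual input.

PRIME = 2**61 - 1  # illustrative field modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_inputs: list[int]) -> int:
    """Simulate n parties jointly computing the sum of their inputs
    without any party seeing another party's raw value."""
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]
    # Party i holds the i-th share of each input and adds them locally.
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partial_sums) % PRIME  # only the aggregate is revealed

# Three hypothetical organizations pool a statistic without exposing
# their individual figures to one another.
inputs = [1200, 3400, 560]
print(secure_sum(inputs))  # 5160 == sum(inputs)
```

The point of the sketch is the collaboration pattern: every party contributes data to a shared computation, yet no party’s raw input ever leaves its hands.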

You don’t even have to take my word for it – this is already showing up in policy. President Biden’s Executive Order on AI trust and safety directs government agencies to use these technologies to protect data while deploying AI. Better yet, industry is acting too: Apple’s announcement of Private Cloud Compute, which uses a specific type of PET to protect user data, is a case in point.

Practically, this type of technology lets companies monetize AI models while protecting their IP, improve models by accessing better data that might not be publicly available, and help their customers derive better insights by unlocking sensitive data that can’t be used today for privacy and security reasons – including personalizing AI models to each customer.
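
As an illustration of that last point, below is a minimal sketch of another common PET, the Laplace mechanism from differential privacy, which lets an organization release an aggregate insight from customer data it cannot share raw. The dataset, bounds, and epsilon are hypothetical, and real deployments use hardened DP libraries rather than hand-rolled noise.

```python
import math
import random

# Toy Laplace mechanism – a simple, well-known PET for releasing
# aggregate insights from sensitive data with a formal privacy guarantee.
# All names and numbers below are hypothetical.

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_mean(values: list[float], epsilon: float, lo: float, hi: float) -> float:
    """Epsilon-differentially-private mean of values clipped to [lo, hi]."""
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # Changing any one record moves the mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical per-customer spend figures that could not be shared raw.
customer_spend = [120.0, 85.5, 300.0, 42.0, 199.9]
print(dp_mean(customer_spend, epsilon=1.0, lo=0.0, hi=500.0))
```

The noise is calibrated so that no single customer’s record materially changes the published result, which is exactly the property that makes otherwise-locked data usable.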

At the end of the day, AI models are only as good as the data they’re trained on, and the main blocker to data access is privacy and security. Investors must be mindful of this when evaluating the space, and the teams building AI applications must proactively solve the data access and analysis problem with technology.

What’s a news headline you are keeping an eye on?

There is a wealth of negative headlines to choose from nowadays, so I’ll take the opportunity to highlight a positive one: the House of Representatives passing the Privacy Enhancing Technologies Research Act. The bill follows up on President Biden’s Executive Order on the “safe, secure, and trustworthy development and use” of AI, and directs the National Science Foundation to pursue research into mitigating individuals’ privacy risks in data and AI. It also provides for research, training, standards, and coordination across government agencies to develop PETs. In other words, this is one of the ways AI will be made safer, and that’s exciting!


