Cybersecurity failure may be among the top global challenges over the next decade, according to the World Economic Forum’s Global Risks Report 2021. Increased use of AI opens new possibilities to businesses, but it also brings new forms of vulnerabilities, threats, and risks.
If your company is considering or undertaking an AI (or machine learning) project, here are five risks you should watch out for in 2022.
- Data Privacy

Machine learning projects typically require vast amounts of data to support the learning process, but it isn't always clear how AI systems collect, use, and process personal data, which can raise major privacy concerns. To avoid the risks of data breaches or non-compliance, establish measures to ensure adherence to current data privacy laws and regulations, such as the California Consumer Privacy Act (CCPA), and keep an eye on the regulations that will follow.
Define your project’s purpose, the application’s intended use, and the associated data processing—from development to deployment. Use realistic and desensitized data for the development and testing phase to address privacy issues. Take appropriate measures to protect any sensitive residual data from privacy attacks, especially during development. Above all, obtain stakeholder buy-in across your organization, especially teams and functions that deal with sensitive data.
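One way to desensitize development and test data is pseudonymization: replacing direct identifiers with salted hashes so consistent tokens survive for testing while raw personal data never leaves production. Here is a minimal sketch; the `pseudonymize` helper, the field names, and the salt handling are illustrative assumptions, not a production-ready design.

```python
import hashlib

# Illustrative salt; in practice, store it outside source control and
# rotate it per environment.
SALT = "rotate-me-per-environment"

def pseudonymize(record, sensitive_fields=("name", "email")):
    """Return a copy of the record with sensitive fields replaced by salted hashes."""
    safe = dict(record)
    for field in sensitive_fields:
        if field in safe:
            digest = hashlib.sha256((SALT + str(safe[field])).encode()).hexdigest()
            safe[field] = digest[:12]  # short, consistent token for test data
    return safe

customer = {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
safe_customer = pseudonymize(customer)
```

Because the hash is deterministic for a given salt, joins and duplicate checks still work on the desensitized data, but the original values cannot be read back out of it.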
- Data Contamination
Data quality is key to the effectiveness of an AI project. Large and complex data sets, drawing on many disparate input sources, are more vulnerable to data manipulation and poisoning attacks that corrupt machine learning models. Because AI applications evolve continuously, these attacks can be hard to detect.
It’s necessary to implement security measures such as perimeter security, compartmentalization, authorization, and access management from the start, regardless of the data environment. When it comes to technologies not commonly used in your company’s standard development frameworks, be sure to evaluate and validate security levels before rolling them out.
To guard against attacks, take measures to maximize the reliability of your learning set. Extend the learning time to include a wider training set, introduce various control steps throughout the learning phase, and define safe learning practices for developing the application.
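One simple control step during the learning phase is to screen training data for values that deviate sharply from the rest before they reach the model. The sketch below uses a median-absolute-deviation filter on a single numeric feature; the `filter_outliers` helper, the threshold, and the toy data are illustrative assumptions, not a complete poisoning defense.

```python
import statistics

def filter_outliers(values, threshold=3.5):
    """Drop points far from the median, measured in median absolute deviations."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [v for v in values if abs(v - med) / mad <= threshold]

# Clean readings plus two extreme values an attacker might inject.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 250.0, -180.0]
clean = filter_outliers(readings)  # the injected extremes are filtered out
```

Robust statistics like the median are a deliberate choice here: unlike the mean and standard deviation, they are not themselves dragged around by the poisoned points they are meant to catch.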
By securing your big data platform and adopting healthy practices early on, you can create a solid foundation for scaling up your machine learning initiative.
- System Manipulation
Even after the learning phase, trained AI models remain vulnerable to attacks, such as the introduction of deceptive data, where the attacker's goal is to shift the application's behavior to their advantage or retrain the model to produce faulty outputs. Classic secure development practices, such as the Open Web Application Security Project (OWASP) guidelines, therefore provide only limited defense against attacks on AI. Comprehensive machine-learning-specific security measures should cover all areas of your application: from input to processing to output.
Secure the data acquisition chain with end-to-end protection and set up mechanisms to detect malicious inputs. To make processing and predictions more reliable, you can apply several methods and techniques, including adversarial training, defensive distillation, adding random noise to your training data, and using a learning ensemble. Safeguard your outputs by using gradient masking, protecting the output chain against access attempts, moderating the output data, and checking for suspicious outputs.
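Two of the techniques above can be sketched in a few lines, under simplified assumptions: jittering training inputs with random noise so the model does not latch onto exact values an attacker can reproduce, and averaging an ensemble so no single manipulated model dictates the output. The helper names and the toy "models" below are illustrative, not a real training pipeline.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def add_noise(sample, scale=0.05):
    """Jitter each feature slightly; the scale is an illustrative choice."""
    return [x + random.gauss(0, scale) for x in sample]

def ensemble_predict(models, sample):
    """Average the outputs of several independently trained models."""
    return sum(m(sample) for m in models) / len(models)

# Three toy callables standing in for independently trained predictors.
models = [lambda s: sum(s), lambda s: sum(s) * 1.1, lambda s: sum(s) * 0.9]

noisy = add_noise([1.0, 2.0, 3.0])
score = ensemble_predict(models, [1.0, 2.0, 3.0])
```

The ensemble only helps if its members are trained independently (different data splits, seeds, or architectures); copies of one model would simply repeat the same manipulated answer.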
- Transparency and Interpretability
A lack of transparency around an AI application can undermine your risk management efforts and even lead to legal consequences. Not all machine learning models offer the same level of interpretability, and not all handle data of the same sensitivity.
To help you track and understand your system's learning progress, consider these suggestions:
- Determine the level of interpretability required for your machine learning algorithm to ensure it meets regulatory standards.
- Develop security guidelines and define a resilience strategy according to the type of AI project, taking into account the complexities and particularities of machine learning.
- Make a backup copy of the model's previous states, and keep an audit trail of the parameters that influence the algorithm's decisions in case of an investigation.
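The audit-trail suggestion above can be sketched as an append-only log where each entry records the parameters behind a decision and is chained to the previous entry by a hash, so tampering with past records is detectable. The field names and the `record_decision` helper are illustrative assumptions, not a specific logging product.

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # append-only; in practice, persist to write-once storage

def record_decision(model_version, params, decision):
    """Append a hash-chained audit entry for one model decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "params": params,
        "decision": decision,
        # Chain to the previous entry so rewriting history breaks the hashes.
        "prev_hash": audit_log[-1]["hash"] if audit_log else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    audit_log.append(entry)
    return entry

record_decision("v1.3", {"threshold": 0.7, "features": 42}, "approve")
record_decision("v1.3", {"threshold": 0.7, "features": 42}, "deny")
```

An investigator can verify the chain by recomputing each entry's hash and checking it against the next entry's `prev_hash`.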
- Third-Party Risks
The process of building an AI application might involve an external third party, like partners or vendors. While shared training can produce a more powerful model, there are risks and concerns that should be addressed during the contracting phase, such as:
- Who owns the data and trained model? At the end of the contract, who will take over?
- What might happen if a direct competitor decides to invest in AI—and buys out a supplier who’s been working as your partner for the last few months?
- Is the supplier applying good practice in compartmentalizing its infrastructures and applications?
- Could the sharing aspects of model training lead to the disclosure of confidential data or customers’ personal information?
New Vulnerabilities Continue to Emerge
We’re still in the early days of understanding AI attack and defense mechanisms. As AI technologies continue to mature, it’s important to be prepared for the associated risks so you can maximize the value of your AI initiatives.
Let our cybersecurity experts support you in securing your AI projects, helping you stay compliant and protected against machine-learning-specific threats.