Baptistin Buchet

Artificial intelligence stands as a double-edged sword, promising to be the ultimate efficiency tool while simultaneously presenting new cybersecurity threats and enhancing old ones.

With superior attack quality, scaled-up attack volume, and accelerated adaptations to defenses, the risk of unauthorized access to sensitive information, financial loss, and reputational damage has never been higher. Emerging malicious AI models like WormGPT and FraudGPT are only a few of many examples of dedicated programs facilitating cybercrime.

It is critical for businesses considering GenAI adoption to understand the challenges it poses, and to formulate cohesive approaches to manage risks effectively. This blog examines 3 major GenAI security risks and 3 strategic practices to mitigate them.


Cybercriminals are boosting their efficiency with AI

Targeted disinformation

One of the most significant risks is the potential for GenAI to produce harmful or misleading content. “Deepfakes” mimic observed behaviours – from content styling to images and voice audio – to create realistic phishing lures by impersonating VIP targets. Advanced models like DarkBERT even feature integration with Google Lens, opening up mainstream images and texts for exploitation.

Improved attack capabilities

GenAI has also been used to enhance and amplify existing threats, raising the cadence and adaptation rate of conventional attacks. Malicious uses include:

Generating and modifying malware code to identify and take advantage of exploits in a company’s cybersecurity architecture.

Building increasingly sophisticated, targeted phishing emails and entire campaigns.


GenAI models are also prime targets for cyber attackers

GenAI's reliance on large volumes of data exposes models to attacks against their learning data sets.

3 attack vectors stand out:

Data poisoning sabotages AI programs by contaminating learning data with unbalanced or mislabeled inputs. Contamination skews the model's biases and learned assumptions, preventing effective results.

Model extraction attacks reverse-engineer defenses by querying the AI directly, using the program's replies to extract sensitive information and deduce how the model was trained.

“Evasion” attacks feed a model deliberately crafted inputs that introduce errors into its operational processes, compelling particular results and provoking outcomes beyond the program's intended scope.
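Data poisoning, the first of these vectors, can be illustrated with a minimal sketch. The toy data, labels, and nearest-centroid classifier below are hypothetical illustrations, not any real detection model: the same data set is used to train a classifier twice, once clean and once after an attacker flips every "malicious" training label, and the poisoned model loses the ability to recognize attacks at all.

```python
# Illustrative sketch (hypothetical toy data): how data poisoning sabotages a
# model. A nearest-centroid classifier is trained on the same 1-D feature data
# twice -- once clean, once after an attacker relabels every "malicious"
# training sample as "benign" -- then evaluated on held-out points.

def train_centroids(samples):
    """Compute the per-class mean (centroid) of 1-D feature values."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def accuracy(centroids, test_set):
    hits = sum(1 for x, label in test_set if predict(centroids, x) == label)
    return hits / len(test_set)

# Two well-separated classes: "benign" near 1.0, "malicious" near 9.0.
clean = [(1.0, "benign"), (1.2, "benign"), (0.8, "benign"), (1.1, "benign"),
         (9.0, "malicious"), (8.8, "malicious"), (9.2, "malicious"), (9.1, "malicious")]
test = [(1.05, "benign"), (0.9, "benign"), (8.9, "malicious"), (9.3, "malicious")]

# Poisoning: the attacker relabels every malicious sample as benign, erasing
# the signal the model needs in order to learn the malicious class.
poisoned = [(x, "benign") for x, _ in clean]

print(accuracy(train_centroids(clean), test))     # 1.0 -- clean model is accurate
print(accuracy(train_centroids(poisoned), test))  # 0.5 -- poisoned model misses every attack
```

The contaminated model still answers confidently; the damage only shows up in its skewed results, which is what makes poisoning attractive to attackers and dataset filtering essential for defenders.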


Companies must navigate new data loss, privacy, and compliance risks

Corporate use of GenAI can create data loss, privacy, and compliance risks that can lead to harmful leaks, regulatory violations, and litigation:

Leaks of confidential data and Personally Identifiable Information (PII) remain serious privacy and compliance risks, as many commercial GenAI platforms lack secure data protection controls.

Models trained on improperly sourced learning data may produce outputs based on stolen intellectual property, incurring costly legal action and reputational damage.

Careless program use can accidentally expose sensitive organizational assets like proprietary source code, regulated data, passwords, and keys.
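One common mitigation for the last risk is a pre-submission filter that scans outbound prompts for obvious secrets before they reach an external GenAI service. The sketch below is a simplified illustration with hypothetical patterns, not a complete rule set; a real deployment would rely on a dedicated DLP tool.

```python
# Illustrative sketch (hypothetical patterns): scan text bound for an external
# GenAI service and block obvious secrets -- passwords, private keys, API
# tokens -- before they leave the organization. The regexes are simplified
# examples only; production filtering belongs in a dedicated DLP tool.

import re

SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),        # PEM private keys
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),  # inline passwords
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                      # AWS access key ID shape
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt matches any known secret pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

print(contains_secret("Summarize this meeting transcript for me."))          # False
print(contains_secret("Debug this config: password = hunter2"))              # True
print(contains_secret("Why does -----BEGIN RSA PRIVATE KEY----- fail?"))     # True
```

A filter like this is deliberately conservative: blocking a prompt for review is cheap, while an exposed credential or source file cannot be recalled once submitted to a third-party platform.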


Adapt and manage: 3 best practices to counter GenAI cyber attacks



Define costs and benefits early with accurate business use cases

CISOs across different industries and countries share several opinions on GenAI technologies:

Most believe ChatGPT is no worse than many other websites but requires specific communications approaches to stay safe and effective.

A strong minority claims the model is a “game changer” and fully advocates its use with effective controls.

Only some perceive the technology as dangerous and believe it should be forbidden.

Although optimizing operational defenses is the most obvious way to impose controls, defining the exact role of GenAI within the organization with an accurate business case can eliminate threats early by minimizing potential attack surfaces.

Business cases for GenAI adoption should determine the degree to which the technology is relevant and whether the benefits outweigh the risks. Thorough risk analyses of use cases should include:

A granular scope of work to identify risks and appropriate security measures ahead of time, with hard limits to minimize threat surfaces.

A clear action plan and roadmap to implement and secure GenAI use.

Rigorous access protocols to identify, classify, and regulate necessary users.



Optimize existing operational defenses

Many conventional defensive measures remain effective against AI-related attacks. GenAI attack vectors are not new: CISOs already face daily phishing attacks and malicious code payloads – threats cybersecurity teams know how to counter with monitoring and remediation.

Specific tools and processes to consider include:

Detection probes deployed from Security Information and Event Management (SIEM) platforms to enable rapid threat identification, analysis, and response.

Standardized crisis contingencies and emergency shutdown procedures.

Threat containment with “Red Buttons” to quickly isolate compromised networks and assets.
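To make the first item concrete, here is a minimal sketch of the kind of correlation rule a SIEM detection probe runs. The log format, field names, and alerting threshold are all assumptions for illustration; real platforms express such rules in their own query languages.

```python
# Illustrative sketch (hypothetical log format and threshold): a SIEM-style
# detection probe that scans recent events, correlates failed logins per
# source, and flags sources that exceed a threshold for analysis and response.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 3  # assumed alerting threshold; tune per environment

def detect_bruteforce(events, threshold=FAILED_LOGIN_THRESHOLD):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter(e["src"] for e in events if e["action"] == "login_failed")
    return sorted(ip for ip, n in failures.items() if n >= threshold)

events = [
    {"src": "10.0.0.5", "action": "login_failed"},
    {"src": "10.0.0.5", "action": "login_failed"},
    {"src": "10.0.0.5", "action": "login_failed"},
    {"src": "10.0.0.9", "action": "login_ok"},
    {"src": "10.0.0.9", "action": "login_failed"},
]

print(detect_bruteforce(events))  # ['10.0.0.5'] -- flagged for containment
```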

Defensive implementations to fortify AI processes in particular include:

Filtering datasets for contamination and actively monitoring outputs.

Layering GenAI processes with adversarial learning models and defensive distillation.

Auditing implemented security capabilities regularly with internal “AI red teams”.
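The first of these implementations, filtering data sets for contamination, can be sketched with a simple robust-statistics test. The feature values and the three-deviation cutoff below are hypothetical; production pipelines would apply richer, domain-specific checks.

```python
# Illustrative sketch (hypothetical data and cutoff): a pre-training filter
# that quarantines contaminated records. Samples lying more than 3 median
# absolute deviations (MAD) from the data set's median are held back for
# review instead of being trained on.

from statistics import median

def filter_contamination(values, max_deviations=3.0):
    """Split values into (kept, quarantined) using a robust MAD test."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    scale = mad if mad > 0 else 1.0  # guard against a zero spread
    kept, quarantined = [], []
    for v in values:
        (kept if abs(v - med) / scale <= max_deviations else quarantined).append(v)
    return kept, quarantined

# A mostly clean feature column with two poisoned outliers injected.
training_column = [1.0, 1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 50.0, -40.0]

kept, quarantined = filter_contamination(training_column)
print(kept)         # the clean cluster around 1.0
print(quarantined)  # [50.0, -40.0] -- flagged before training
```

Median-based statistics are used here rather than the mean precisely because poisoned outliers can drag a mean-based threshold toward themselves and slip through.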



Educate employees on the risks introduced and enhanced by GenAI

Human error remains the leading cause of data breaches in organizations – a risk that can only worsen with the integration of GenAI in operational processes. Training and transparent communication of appropriate behaviours and expectations can minimize human error and accelerate threat responses.

False-flag phishing campaigns can also effectively demonstrate to employees and other internal stakeholders how GenAI models can enhance malicious attacks.


GenAI’s true security challenge lies in identifying an enterprise’s precise needs and designing defenses to suit. Expert advisory is recommended to clearly define GenAI’s role in your business and implement the security measures required to secure it.

Contact a Wavestone expert for specialist guidance on identifying enterprise GenAI security requirements and securing GenAI solutions.


Baptistin Buchet
Head of Cybersecurity & Digital Trust

Baptistin Buchet leads the cybersecurity activities at the Wavestone US office in New York City. He graduated from EPITA, the premier graduate school of computer science and advanced technologies in France, majoring in systems, networks, and security. Certified in both CISSP and CISM, Baptistin has, over the years, developed extensive expertise in risk management, security architecture, crisis management, and emerging technologies. He is also a frequent media and conference speaker and gives university lectures at New York University and at several European schools.
