How AI is Changing Cybersecurity Risk – for Better and for Worse

Artificial intelligence (AI) introduces a range of new attack surfaces and fresh governance obligations. Business and IT leaders face three simultaneous challenges: protecting AI as an asset, using AI to defend against cyberattacks, and defending against AI-enabled attackers.

The stakes are high. The World Economic Forum’s Global Cybersecurity Outlook 2025 identifies emerging technologies (especially generative AI), supply-chain interdependence, and increasing cybercrime sophistication as the top stressors for security programs. ENISA, the European Union Agency for Cybersecurity, has published its latest Threat Landscape, which states that information manipulation—now increasingly AI-enabled—is a growing concern alongside ransomware and supply-chain attacks.

Here are some of the most common AI-driven threats today—and pragmatic countermeasures:

AI-enabled social engineering and deepfakes

Generative AI models let attackers personalize attacks at scale, creating chat-style phishing schemes that adapt in real time based on user replies. Deepfake voice and video are a top concern for many companies: AI-generated impersonations of company leadership add urgency and credibility to payment fraud and requests for system access.

To mitigate these threats:

  • Require verification for high-risk requests (vendor bank changes, CEO “urgent” approvals).
  • Expand cybersecurity training simulations beyond email to include voice and video scenarios.
  • Implement security processes so a convincing voice or video alone cannot authorize critical actions (a simple policy check is sketched after this list).
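
To make the last point concrete, one approach is to encode the approval policy in whatever workflow tooling you already use, so that no single channel, however convincing, can release funds or grant access on its own. The sketch below is a minimal illustration in Python; the request types, threshold, and verification steps are assumptions to be replaced with your own controls.

    # approval_policy.py - illustrative only; the request types, threshold, and
    # verification steps are assumptions, not a prescribed control set
    from dataclasses import dataclass, field

    HIGH_RISK_TYPES = {"vendor_bank_change", "wire_transfer", "system_access_grant"}

    @dataclass
    class Request:
        request_type: str
        amount: float = 0.0
        verifications: set = field(default_factory=set)

    def can_execute(req: Request) -> bool:
        """A convincing voice or video message alone never satisfies this policy."""
        if req.request_type not in HIGH_RISK_TYPES and req.amount < 10_000:
            return True  # routine request, normal workflow applies
        # High-risk: require out-of-band verification AND an independent approver
        required = {"callback_to_known_number", "second_approver"}
        return required.issubset(req.verifications)

    # An "urgent" CEO video call requesting a vendor bank change is still blocked
    req = Request("vendor_bank_change", verifications={"ceo_video_call"})
    assert can_execute(req) is False

The point is not the code itself but the design choice: verification lives in the process, not in anyone's judgment of how authentic a voice or face looks.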

Shadow AI and data exposure

Many breaches don’t start with exploits; they begin with well-meaning employees pasting sensitive data into unmanaged AI tools. Shadow AI (unsanctioned use of AI models and services) creates uncontrolled data copies, uncertain retention, and cross-border exposure. For example, data entered into many AI tools can be reused to train the underlying large language models (LLMs) and potentially surface in future outputs.

That means confidential business data shared with AI tools might be stored on third-party servers (possibly in another country), used to train public AI models, and shown to other users. If your team copies source code, customer PII, internal reports, or product roadmaps, they may be unknowingly handing over your company’s most valuable assets.

Be aware that many free AI tools train on your data by default. Many also store chat records, including full conversations, audio transcripts, source code, and uploaded files.

AI tools hosted in other countries may be subject to data privacy laws very different from those you are accustomed to. The Chinese AI tool DeepSeek, for instance, poses a particular risk because Chinese companies are required to give the government access to any stored user data, including foreign business content.

To mitigate these threats:

  • Inventory AI usage within the organization (“Who’s using which tool? What data are they feeding these tools?”).
  • Assess whether those tools are hosted in high-risk geographies.
  • Use enterprise AI deployments with tenant isolation and retention controls.
  • Block high-risk AI applications and log prompts and outputs for audit (a simple screen-and-log step is sketched after this list).
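
If employee AI traffic already passes through an internal gateway or proxy, even a basic screen-and-log step supports the last two bullets. The following Python sketch is illustrative only; the allow-list, sensitive-data patterns, and log destination are assumptions, and a real deployment would pair this with a proper data loss prevention (DLP) tool.

    # ai_gateway.py - illustrative screen-and-log step; the allow-list and
    # regex patterns are assumptions, not a complete DLP solution
    import re
    import logging

    logging.basicConfig(filename="ai_prompt_audit.log", level=logging.INFO)

    ALLOWED_TOOLS = {"approved-enterprise-llm"}  # sanctioned deployments only
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def screen_prompt(user: str, tool: str, prompt: str) -> bool:
        """Return True if the prompt may be forwarded; log every decision for audit."""
        if tool not in ALLOWED_TOOLS:
            logging.warning("BLOCKED unsanctioned tool user=%s tool=%s", user, tool)
            return False
        hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
        if hits:
            logging.warning("BLOCKED sensitive data user=%s tool=%s types=%s", user, tool, hits)
            return False
        logging.info("ALLOWED user=%s tool=%s chars=%d", user, tool, len(prompt))
        return True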

AI agents and identity: A new governance gap

AI agents are software systems that can make decisions and act with little or no human intervention. As organizations experiment with autonomous, agentic AI to orchestrate tasks, those agents often receive tokens, API keys, and data access, becoming identities that must be provisioned and governed just like users.

While defenders may benefit from autonomous agents for threat hunting or response, attackers can deploy adversarial agents that continuously probe, pivot, and exploit without human intervention. Agentic AI is driving the development of new defense models centered on AI detection and response (AI-DR).

To mitigate these threats:

  • Treat agents as first-class identities using lifecycle management, least-privilege scopes, and activity logging.
  • Limit agent privileges and capabilities, and require approvals for high-impact actions (a minimal authorization check is sketched after this list).
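
Treating an agent as a first-class identity can start with something as plain as checking every action against an explicitly granted scope and routing anything high-impact to a human. The scope names and actions in this Python sketch are assumptions; in practice they would map onto your existing identity and access management (IAM) model.

    # agent_guard.py - illustrative least-privilege check; the scopes and
    # action names are assumptions for the example
    AGENT_SCOPES = {
        "threat-hunting-agent": {"read:logs", "read:alerts"},  # explicitly granted, nothing more
    }
    HIGH_IMPACT_ACTIONS = {"disable:account", "delete:data", "change:firewall"}

    def authorize(agent_id: str, action: str, human_approved: bool = False) -> bool:
        """Deny by default; log and require human approval for high-impact actions."""
        scopes = AGENT_SCOPES.get(agent_id, set())
        if action in HIGH_IMPACT_ACTIONS:
            print(f"AUDIT: {agent_id} requested {action}, human_approved={human_approved}")
            return human_approved and action in scopes
        return action in scopes

    # The hunting agent can read logs but cannot disable an account on its own
    assert authorize("threat-hunting-agent", "read:logs")
    assert not authorize("threat-hunting-agent", "disable:account")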

AI in OT/ICS and physical operations

As factories, utilities, and logistics integrate AI into control and monitoring systems, the impact of cyberattacks now extends into the physical world. Cyber criminals are targeting operational technology (OT) and industrial control systems (ICS) by attacking infrastructure (for example, IoT devices such as sensors, access controls, and HVAC monitors) to disrupt or damage critical business or government operations and services.

To mitigate these threats:

  • Monitor industrial system activity for anomalies (see the baseline check sketched after this list).
  • Enforce strong network segmentation to minimize lateral movement and prevent attackers from accessing other systems.
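
Anomaly monitoring does not have to begin with specialized tooling; even a baseline-and-threshold check over sensor or traffic telemetry will surface obvious deviations while you evaluate dedicated OT monitoring platforms. The telemetry values and threshold in this Python sketch are invented for illustration.

    # ot_anomaly.py - illustrative baseline check; readings and threshold are invented
    from statistics import mean, stdev

    def is_anomalous(new_value: float, baseline: list[float], threshold: float = 3.0) -> bool:
        """Flag a reading more than `threshold` standard deviations from the baseline mean."""
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return new_value != mu
        return abs(new_value - mu) / sigma > threshold

    # e.g. temperature telemetry from an HVAC sensor
    baseline = [21.0, 21.2, 20.9, 21.1, 21.0, 20.8, 21.3]
    print(is_anomalous(21.1, baseline))  # False - within the normal range
    print(is_anomalous(35.0, baseline))  # True  - flag for investigation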

Get a handle on your AI security

AI is a valuable tool that can lighten the load for your employees and can identify and mitigate cyber intrusions more efficiently than most manual techniques. However, malicious actors are also using AI to make their jobs easier, from probing your network for weaknesses to running deepfake phishing schemes, while unmanaged shadow AI quietly widens your exposure.

Contact ArcherPoint by Cherry Bekaert to learn how we can help you implement your AI strategy effectively.
