Lloyd's Register
The American Club
Panama Consulate
London Shipping Law Center
Home ShipmanagementClassification Societies DNV Cyber Threat Insights, March 2026

DNV Cyber Threat Insights, March 2026

by admin
8 views

Here is a selection of recent threat insights curated by our specialists. In this edition, we have a particular focus on threats associated with AI. We hope you find this valuable:

You can view these threat insights in full below and can download them as easily shareable PowerPoint slides.

Trends

AI sparks new era of constantly-changing malicious code

The first case has been reported of hackers using AI tools to generate dynamic malware campaigns on the fly.

Before AI, malware was implemented using fixed, hard-coded scripts. Google’s Threat Intelligence Group reports that hackers have used its Large Language Model (LLM) Gemini to generate just‑in‑time (JIT) malware such as PROMPTFLUX and PROMPTSEAL to generate malicious scripts that are constantly changing.

Traditional security systems detect patterns and other signatures in code, but do not recognize malware with the ability to constantly change and rewrite its own instructions. Defending against malware has just got a lot harder and will require new approaches to detecting threats.

Discuss:

  • Are we investing fast enough in the defensive aspects of AI to keep pace with cyber attackers using AI offensively?
  • Are our defenses prepared for attackers who can generate new malware variants in minutes, not weeks?

Recommendations:

  • Deploy behaviour‑based threat detection, not signature‑based tools, to catch rapidly mutating malware that AI can generate on demand.
  • Continuously validate and harden your software supply chain, ensuring attackers cannot insert or update malicious components that evade traditional scanning.

Are we keeping pace with AI?

Prompt injection attacks successfully penetrate most LLMs

AI systems deployed by companies, such as chatbots and internal Large Language Model (LLM) powered assistants, are becoming attractive entry points for cyber attackers. Techniques like prompt injection give attackers a way to manipulate these systems so that they disclose sensitive company information or provide access to internal resources.

With a prompt injection attack, an attacker can hide instructions inside seemingly normal content. If the chatbot then processes that information, the attacker may be able to make the system reveal confidential documents or expose internal data, essentially using the AI as a backdoor into the business. Prompt‑injection attacks succeed against 56% of tested LLMs, highlighting how frequently these systems can be subverted, according to technology news site ZDNET.

Discuss:

  • Are we exposing our organization to new risks by giving AI access to sensitive systems and data?
  • Do our employees know how to responsibly use LLMs – and which ones?
  • Would we recognize if an attacker manipulated one of our LLM‑enabled systems?
  • What’s the security posture of our non-human identities?

Recommendations:

  • Treat your LLM-connected services such as chatbots and connected APIs as attack surfaces:
  • Enforce authentication, authorization, and rate limits
  • Limit sensitive data exposure
  • Avoid overreliance on prompt filters
  • Continuously test and monitor.

Can our chatbots be subverted? 

Banning AI could be biggest threat of all Companies banning generative AI in their workplace due to security concerns could be creating an even greater risk.

Use of Generative AI in the workplace is skyrocketing and employees without an enterprise version may use unofficial AI tools, or ‘shadow AI’, to increase their productivity.

Shadow AI exposes organizations to significant risks, including data leakage when employees unintentionally feed sensitive information into unmanaged AI tools.

It can also result in external AI models absorbing a company’s intellectual property into their training data, increasing the likelihood of inadvertent disclosure of sensitive information.

Discuss:
– Are we creating a more hostile organization by prohibiting generative AI services?

– Are our colleagues using shadow AI to increase efficiency?

– Do colleagues understand the risks of shadow AI for data leakage, 3rd party model training, and risks from poor data or decision quality?

Recommendations:
– Provide an enterprise license for a major LLM and enforce SSO and encryption
– Read our article on the threat from shadow AI including recommendations.
– Incorporate identifying shadow AI into your company’s AI governance.

Could banning AI make matters worse?
Hacktivists escalate attacks on critical infrastructure

– Hacktivists are cyber attackers motivated by political or social objectives who may be inspired, sponsored, or given sanctuary by nation-states. Increasingly, they are escalating from DDoS-only campaigns to attacks on critical infrastructure.

– In the last quarter, alleged victims have included remotely accessible systems across the energy, government, and financial sectors in Europe.

– In December 2025, the FBI warned that pro-Russian hacktivists were targeting critical infrastructure in the US and allied nations, marking an escalation from surface-level disruption to potential physical impact.
– The European Commission has announced a new cybersecurity package aimed at strengthening the EU’s defences across essential services.

Discuss:
– Are we prepared for hacktivists who are highly motivated, increasingly sophisticated and focused on disruption rather than financial gain?
– Do we fully understand the exposure of internet‑facing OT/ICS systems?
– What are our redundancy and fail‑safe mechanisms if an industrial process is manipulated?

Recommendations:
– Adopt mature asset management processes, including mapping data flows and access points

– Follow best practices for securing OT environments.

– Start with DNV Cyber’s Building a Robust OT Security Programme series

Download DNV Cyber’s Nordic Cyber Resilience research exploring the state of national cyber resilience in Norway and Sweden.

How secure is our critical infrastructure?

Vulnerabilities

Anthropic’s Claude used to automate multi-stage attack

AI chatbot ‘Claude’, made by US firm Anthropic, has been attacked by a China-linked threat actor.  The group manipulated Claude Code to conduct espionage against 30 major organizations including chemical manufacturing, large technology firms, financial institutions and government agencies.  

It marks a turning point where AI is conducting the majority of the attack itself, with minimal human intervention. As a result, the threat actor was able to identify databases, generate exploit code, conduct reconnaissance, harvest credentials, and create backdoors.

The attackers bypassed the guardrails by separating their attack into seemingly harmless tasks, allowing Claude to automate thousands of actions per second and operate with a level of speed and scale beyond human capability.

Discuss:

  • How vulnerable are our large language models (LLMs) to being broken into and exploited by cyber attackers?
  • Are we prepared for threat actors capable of launching thousands of automated actions per second?

Recommendations:

  • Ensure strong visibility of logs and implement threat hunting to spot evasive anomalies
  • Assess if suppliers and technology partners are prepared for AI‑enhanced threat actors.

Are we prepared for AI-enhanced attacks?

VPN credentials exploited as a top entry point

High profile ransomware groups such as Akira, Qilin and INC Ransom are leveraging leaked VPN credentials in their operations.

Akira exploited a gap in VPNs used by smaller companies, to gain access to the systems of larger companies that later acquired them. Even though companies patched the flaw, the attacks kept recurring as weak links were discovered, such as:

  • Old accounts that had been forgotten, such as administrator accounts from years ago.
  • Passwords leaked in past breaches.
  • Accounts not protected with Multi-Factor Authentication (MFA), such as local or fallback emergency accounts.

These unsecured “exception” accounts proved to be detrimental to the victim companies.​

Discuss:

  • Do we have VPN accounts that can access our systems without MFA – even in special or emergency situations?
  • Could we have acquired such accounts through M&A?
  • If someone stole a VPN password today, how quickly would we spot it and shut it down?

Recommendations:

Are we at risk from not-so-private networks?

Insider leak at CrowdStrike underscores the risk from the cybersecurity supply chain

Cybersecurity firm CrowdStrike confirmed that a “suspicious insider” shared screenshots on Telegram of internal dashboards with a cybercrime collective known as Scattered Lapsus$ Hunters. The cybercriminal group allegedly paid the insider $25,000.

The company states its systems and customers were never technically compromised, but as a provider of endpoint protection it highlights the broader vulnerability created when attackers target security vendors themselves.

CrowdStrike has handed over the case to U.S. law enforcement.

Discuss:

  • How well do we detect anomalous employee behaviour, including unauthorized screen captures, data exfiltration attempts, or suspicious access patterns?
  • What controls do we have to govern access via contractors, partners, or integrated services that could be abused as an entry point?

Recommendations:

  • Strengthen insider‑threat monitoring by deploying behavioural‑analytics tools that flag unusual activity patterns such as large data transfers, access at odd hours, or attempts to reach systems outside an employee’s normal role
  • Employ digital identity and access management controls to limit the impact of a compromised employee.

Do we have sufficient measures against insider threats?

Critical vulnerability affecting React Server Components and Next.js

A critical vulnerability dubbed React2Shell (CVE‑2025‑55182) in React Server Components and Next.js enables unauthenticated remote code execution on affected servers.

The vulnerability allows unauthenticated remote code execution by exploiting a flaw in how React decodes payloads sent to React Server Function endpoints. It is already under active exploitation by China-based threat groups such as Jackpot Panda.

React and Next.js power a significant share of cloud‑hosted sites, so unpatched instances will continue to be targeted by threat actors causing high‑impact compromise scenarios such as data theft.

Discuss:

  • Where are we running React Server Components or Next.js?
  • How could an attacker pivot if one of our frontend servers is taken over?
  • Are we validating framework and package integrity?

Recommendations:

  • Patch React and Next.js applications immediately
  • Map dependencies – do other packages/bundles have the vulnerable components?
  • Restrict public exposure of React Server Component (RSC) endpoints and place them behind authentication where feasible
  • Hunt for compromise indicators in logs and post-exploitation activity, such as reverse shells or cryptominers.

Do we have critical vulnerabilities in the cloud?

We hope that these threat insights inform your cyber priorities, encourage discussion, and make your organization more resilient.

Best regards DNV Cyber Threat Intelligence

View previous editions of threat insights:

DNV Cyber gathers and analyses intelligence from both open sources and our own resources. View our threat insights and security advisories. Our threat intelligence service provides deeper insights and curates intelligence specifically for your business.

You may also like

Leave a Comment