Feature

MASS: balancing innovation with safety and security

Keri Allan explores the development of maritime autonomous surface ships, and the safety protocols, regulatory challenges and potential impact on maritime security surrounding them.

Maritime autonomous surface ships (MASS) introduce safety and security challenges that differ significantly from conventional maritime operations. One of the main hurdles is ensuring systems can detect, interpret and respond to changing conditions as a human crew would, because once operations move to autonomy, the in-situ human judgment that draws on instinct and context disappears.

“The challenge is that much of a mariner’s decision-making is implicit and situational – things they do automatically without even realising,” says Professor Kevin Jones, deputy vice-chancellor of research and innovation at the University of Plymouth.

“In an autonomous environment, all of that needs to be made explicit through code or training data. Even basic rules like collision regulations can be misinterpreted if taken literally, whereas a human understands the intent behind them. Getting that contextual understanding right is one of the biggest challenges.”

Cybersecurity is of course a key concern, as is connectivity – if the link between a vessel and a remote operations centre (ROC) is lost, then systems must still be able to operate safely.
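In engineering terms, the fail-safe behaviour described here is often implemented as a communications watchdog. The minimal sketch below illustrates the pattern; the timeout value, mode names and class are assumptions for illustration, not drawn from any vendor's system.

```python
import time

# Illustrative link-loss watchdog: if no heartbeat arrives from the
# remote operations centre (ROC) within the timeout, the vessel
# falls back to a pre-defined safe state on its own.
LINK_TIMEOUT_S = 10.0  # assumed value for illustration

class LinkWatchdog:
    def __init__(self, timeout_s=LINK_TIMEOUT_S):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.mode = "REMOTE_CONTROL"

    def heartbeat(self):
        # Called whenever a valid message arrives from the ROC.
        self.last_heartbeat = time.monotonic()
        self.mode = "REMOTE_CONTROL"

    def tick(self):
        # Called periodically by the vessel's own control loop.
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            # Link lost: degrade to an autonomous safe state
            # (e.g. hold position, loiter, or head to a safe point).
            self.mode = "SAFE_STATE"
        return self.mode
```

The key design point is that the safe state is decided on board, so the vessel never depends on the very link whose loss it is reacting to.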

“Communications, software and control systems must be resilient to attack and failure,” says Tony Boylen, principal specialist in assurance of autonomy at Lloyd’s Register. “Addressing this requires far more systematic testing and fault-tolerant design than for a conventionally crewed ship.”

Lessons learnt

Real-world trials of MASS, such as the Yara Birkeland and Mayflower, have given valuable insights into potential challenges and solutions.

One key lesson from the Mayflower was that failure scenarios are far harder to manage than normal operations. In one incident, the vessel was disabled by the failure of a relatively minor component – something not detected because no sensor had been installed to monitor it.

In a conventional crewed vessel, a human might notice something as subtle as a strange smell or vibration and act accordingly. In an autonomous system, unless a sensor has been specifically placed to detect that issue, the failure can go unnoticed until it’s too late.

This highlights a major challenge: designing systems that can identify, diagnose and potentially repair faults without human intervention. The solution lies in building resilience and redundancy into critical systems – drawing on lessons from the space sector, where inaccessibility demands that systems are robust, self-monitoring and capable of withstanding unexpected failure without immediate human help, Jones notes.
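One self-monitoring pattern borrowed from the space sector is redundancy with voting: triplicate a critical reading, trust the median, and flag any channel that drifts away from it. The sketch below is illustrative only; the tolerance value and function are assumptions, not from any ship's actual design.

```python
from statistics import median

# Assumed acceptable disagreement between channels, in the
# units of whatever quantity is being measured.
TOLERANCE = 0.5

def vote(readings):
    """Median-vote across redundant sensor channels.

    Returns (trusted_value, suspect_channel_indices): the median is
    taken as the trusted reading, and any channel disagreeing with it
    by more than TOLERANCE is flagged for diagnosis.
    """
    trusted = median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - trusted) > TOLERANCE]
    return trusted, suspects
```

In this scheme a single failed sensor is both tolerated (the median ignores the outlier) and detected (its index is reported), which is exactly the combination of resilience and self-diagnosis the paragraph above calls for.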

Evolving risk assessment

In terms of vessel risk assessment, the fundamental principles don’t change with autonomy, but the factors influencing risk do. While crewed vessels rely on human judgment and experience, autonomous systems must replicate this through sensors, training data and code. Regulators and classification societies will play a key role in adapting language and frameworks to suit this shift.

Risk assessments for MASS must cover more complex scenarios than those for conventional ships and consider not just the vessel but the entire ecosystem, including connectivity, port infrastructure and ROCs.

“Traditional assessments focus on mechanical failures and human error. Autonomous systems introduce new categories of risk including software errors, sensor limitations, algorithmic decision-making errors and cyber threats. These apply to both the vessel and the relatively new concept of ROCs,” says Boylen. 
 
“To manage these challenges, newer methods of system-based analyses must be introduced. Hazard identification and detailed scenario mapping, looking at both traditional risks and autonomous-specific hazards such as sensor failures and algorithmic errors, are just the start. From here, quantitative and qualitative risk profiles can be developed, grounded in the vessel’s operational design domain (ODD). These help simulate realistic worst-case situations and test whether safety margins hold up under stress.”
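An ODD can be thought of as a set of environmental envelopes within which the vessel is certified to operate. As a hedged sketch of how a scenario might be screened against it, the parameter names and limits below are invented for illustration and are not taken from any classification-society framework.

```python
# Hypothetical operational design domain (ODD), expressed as
# (lower, upper) bounds on each environmental parameter.
ODD_LIMITS = {
    "wave_height_m": (0.0, 2.5),
    "wind_speed_kn": (0.0, 25.0),
    "visibility_nm": (0.5, float("inf")),
}

def odd_violations(conditions):
    """Return the parameters of a scenario that fall outside the ODD.

    A missing parameter also counts as a violation, since the vessel
    cannot show the condition is within its certified envelope.
    """
    violations = []
    for name, (lo, hi) in ODD_LIMITS.items():
        value = conditions.get(name)
        if value is None or not (lo <= value <= hi):
            violations.append(name)
    return violations
```

Scenario mapping then amounts to sweeping candidate conditions, including worst cases, through checks like this and examining how the control system behaves near and beyond each boundary.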

Developers also understand that for MASS to become widely accepted, robust cybersecurity is critical. In the case of Kongsberg Maritime, which designed the REACH REMOTE 1, operation occurs within an enclosed network, with end-to-end encryption for all communications between the ROC and vessel, and strict access rights.
 
“You need firewalls in place to make sure that nobody can remotely access the vessels, but it's also important that these firewalls don’t compromise latency. We’ve done a lot of latency testing at the location of the ROCs to test current limits,” notes Ville Vihervaara, remote & autonomous VP at Kongsberg Maritime.
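The kind of latency testing Vihervaara describes boils down to measuring round-trip time through the full communications stack, security layers included, and checking it against a control budget. The sketch below demonstrates the measurement pattern against a local loopback echo server standing in for the vessel end; it is an illustration, not Kongsberg's test setup.

```python
import socket
import threading
import time

def echo_once(server_sock):
    # Vessel-side stand-in: accept one connection and echo it back.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(64))

def measure_rtt_ms(host, port):
    # Time a single request/response round trip, in milliseconds.
    with socket.create_connection((host, port)) as client:
        start = time.perf_counter()
        client.sendall(b"ping")
        client.recv(64)
        return (time.perf_counter() - start) * 1000.0

# Local demonstration on the loopback interface.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()
rtt = measure_rtt_ms("127.0.0.1", port)
srv.close()
```

In a real trial the probe would run from the ROC site to the vessel, repeatedly and over time, so that the added cost of firewalls and encryption shows up in the measured distribution rather than being estimated.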

Building public trust

Public trust in MASS depends on demonstrable safety, transparency and communication of the benefits. That means going beyond technical validation – operators must also show how systems are tested, certified and governed.

As Boylen notes, trust can be strengthened by mirroring lessons from the autonomous vehicle sector, where confidence grew through robust oversight and accountability. For MASS, this includes sharing performance data, disclosing incidents and explaining lessons learned – especially when things go wrong.

Transparency around cybersecurity protections, remote-control protocols and fail-safe mechanisms will be essential. So too will be stakeholder engagement: mariners, unions, port authorities and the wider public must all be part of the conversation.

“The maritime sector must establish a verifiable evidence trail that proves systems work safely in diverse real-world conditions,” Boylen emphasises.

Regulatory frameworks are evolving

Trials also brought to light the need for improved regulations.  
 
“Regulation is a mess. What you’re allowed to do, where and under what circumstances is incredibly inconsistent – even within a single harbour. Even in experimental settings, no one really knew what was permitted,” says Jones.
 
“If a vessel is sailing in Australia, remotely controlled from Norway, Finland or the UK and something happens to this vessel, we need to be very clear on who’s responsible, and to know the flag state’s position,” adds Vihervaara.  
 
“There are already some bidirectional agreements in place, such as the one between Norway and Denmark, which sets a clear regulatory framework for autonomous vessels. But we still need more clarity from the IMO. Until then, we’ll see adoption mainly in coastal and inland operations, where national regulations can be applied more easily.”

Progress has been made on the IMO’s goal-based MASS Code, however, with most chapters of the draft non-mandatory code now complete and only the Human Element section still under development.
 
“By September 2025 (MASS ISWG 4) the draft code will be further refined and by MSC 111 next May the non-mandatory code is due for formal adoption,” notes Boylen. “From December 2026, an ‘experience building phase’ will launch, where the focus will shift to creating a mandatory code, with adoption by July 2030 and entry into force in January 2032.”

As autonomous technology advances, the biggest breakthroughs may not be in hardware or AI, but in regulation, communication and trust. Creating a credible evidence trail, built on thorough testing and transparent reporting, will be key to public and stakeholder confidence.

Or, as Jones puts it: “Safe autonomous operation depends not just on the technology – but on trust in secure, well-defined systems.”