People Process Plant

Reflections of an SCE Verifier...

Written by Sunny Pillay | Oct 22, 2025

It has been roughly 10 and 15 years since Major Hazard Facility (MHF) regulations were implemented in New Zealand and Australia respectively. While the legislation and terminology differ between countries and states (there is, for example, no Safety Critical Element (SCE) terminology in the onshore Australian MHF regulations), the intent around how controls for major incidents (MI) should be managed remains largely the same. And from my observations, a gap still exists in critical control (hereafter referred to as SCE) management and safety management system (SMS) implementation.

This is also evident from the recurring questions, issues, and inconsistencies across all aspects of SCE management that continue to arise during regulator inspections, industry forums, safety assessments, and safety case revalidations.

With this in mind, I thought it timely to reflect on and capture some of the observations and lessons learned from the SCE verification and SMS auditing activities conducted by Safety Solutions with various companies across Australia and New Zealand.

The sections that follow share common challenges, lessons learned, and practical observations from my work across multiple sites. Based on my experience as a verifier, the points below should be addressed as a minimum as part of your SCE management process, and before engaging your independent SCE verifiers.

SCE Allocation

In terms of definitions, the examples in the figure below aim to address the first common inconsistency: the distinction between MI controls, SCEs, and SMS elements.

[Figure: examples illustrating the distinction between MI controls, SCEs, and SMS elements]

The “chicken and egg” debate does not apply when it comes to allocating SCEs. To meet the first part of the regulatory definition, that an SCE's purpose is to prevent or mitigate an MI, SCEs should be allocated after the safety assessment process, where MI hazards and scenarios have been evaluated.

When SCEs are identified before a proper safety assessment, or through an assessment lacking a clear line of sight between the control, the cause, and the MI scenario, the result is predictable: too many non-MI-relevant SCEs, or genuine SCEs missed altogether. Both outcomes are often called out by regulators, resulting in directives to revalidate not only the SCE list but the underlying safety assessment process.

A robust safety assessment process enables you to allocate MI controls and SCEs in the proper context, and just as importantly, to demonstrate where something was rejected as not serving an MI purpose. This addresses one of the most common regulator questions: “Why wasn’t this considered an MI control or SCE?”, often followed by a directive to reassess it under a SFAIRP (so far as is reasonably practicable) demonstration.

SCE Performance Standards and SCE Integration

Performance Standards define the functional requirements, assurance tasks, and criteria for MI controls and SCEs. Surprisingly, these vary widely across companies, even for similar control types. The better examples I’ve seen share one key feature: objective acceptance criteria that clearly define what constitutes a pass or fail from assurance activities.
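To make “objective acceptance criteria” concrete, here is a minimal sketch (my own illustration with made-up values, not any company’s actual performance standard format) of a criterion that returns an unambiguous pass or fail:

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    """One objective pass/fail test attached to an assurance task."""
    parameter: str        # what the assurance task measures
    target: float         # design value from the performance standard
    tolerance_pct: float  # allowable deviation before the result is a fail

    def passes(self, measured: float) -> bool:
        """Objective result: within tolerance = pass, otherwise fail."""
        deviation_pct = abs(measured - self.target) / self.target * 100.0
        return deviation_pct <= self.tolerance_pct

# Hypothetical example: a relief valve lift-pressure test with a
# +/-3% tolerance (illustrative numbers only).
psv_lift = AcceptanceCriterion(parameter="lift pressure (barg)",
                               target=20.0, tolerance_pct=3.0)

print(psv_lift.passes(20.4))  # True  - within tolerance
print(psv_lift.passes(21.5))  # False - an objective fail requiring follow-up
```

The point is not the code itself but the absence of ambiguity: a technician, an engineer, and a verifier should all reach the same conclusion from the same measurement.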

SCEs must also be properly integrated into existing systems, such as asset and maintenance management, document control, incident management, management of change, and audit processes. This ensures ongoing oversight throughout operations.

There’s an ongoing debate around the need to physically tag or label SCEs in the field. While often dismissed as impractical for sites with thousands of critical assets, several sites have chosen to label retrospectively after incidents involving unrecognised critical equipment. My recommendation: assess practicality first (type, number, and accessibility of assets) before concluding “it’s too hard” or “there are too many.”

SCE Operations and Maintenance

From my experience verifying SCEs, I’ve seen that the operation and maintenance phase is where many gaps emerge, even when SCEs have been correctly allocated and integrated.

Some recurring issues I’ve observed include:

  • Operational risk assessments: When SCEs fail or do not operate as designed, the associated operational risk assessments are often weak. I’ve seen teams focus heavily on likelihood judgments, arguing that “the risk is lower since it’s only valid for a week or a month.” In my view, once a control is designated as critical, the actual likelihood of the MI after its failure becomes secondary. The assessment should focus on whether it’s still safe to continue operations using the remaining controls, what interim risk treatment measures are required, and, importantly, who in the organisation is accountable for monitoring the situation (a minimal record sketch follows this list).
  • Incident investigation: Any SCE failure should be investigated to understand the cause and potential for escalation, and to meet legislated notification requirements. A structured investigation prevents repeat failures and strengthens overall control reliability.
  • Documentation and traceability: A common issue is a lack of clear, accessible documentation. SCEs cannot be verified if there is no evidence that they were designed, maintained, or managed appropriately. I’ve seen cases where companies had to retrospectively design or even replace an SCE because suitability could not be demonstrated due to a lack of documentation. From a verifier’s perspective, missing documentation is a red flag; it often signals deeper systemic issues in SMS implementation.
  • Third-party maintenance oversight: Another recurring theme is the reliance on third-party contractors without adequate review of their results. For example, relief valve test certificates or instrument calibration results often include critical recommendations buried in fine print at the bottom. I’ve had to dig through multiple reports where these recommendations were overlooked simply because no site engineer had verified and actioned them. Ensure that third-party outputs are reviewed and integrated into the site’s action management system.
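As flagged in the first bullet above, here is a minimal sketch of what an operational risk assessment record for a failed SCE might capture (the fields and values are my own illustrative assumptions, not a prescribed template). Note that likelihood is absent; the record focuses on remaining controls, interim measures, accountability, and a review date:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InterimRiskAssessment:
    """Hypothetical record for operating with a failed SCE (illustrative only)."""
    failed_sce: str
    remaining_controls: list[str]  # what still stands between the cause and the MI
    safe_to_operate: bool          # the core question, not likelihood
    interim_measures: list[str]    # temporary risk treatment while degraded
    accountable_owner: str         # who monitors the degraded state
    review_by: date                # degraded operation must not drift indefinitely

ora = InterimRiskAssessment(
    failed_sce="ESD-201 shutdown valve",
    remaining_controls=["manual isolation valve", "operator surveillance rounds"],
    safe_to_operate=True,
    interim_measures=["hourly operator rounds", "restrict hot work in the area"],
    accountable_owner="Production Superintendent",
    review_by=date(2025, 10, 29),
)
```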


SCE Verification

The process for verifying the initial and ongoing suitability of SCEs is well-documented in industry guidance, including papers from the Energy Institute, WorkSafe NZ, and SafeWork Australia. I’ve also explored this in more detail in a previous blog post on SCE verification HERE.

Despite this guidance, approaches remain inconsistent across companies. Some common issues I’ve observed include:

  • Assuming initial suitability is a one-time task: Some organisations rely entirely on Management of Change (MoC) processes to capture verification of design or operational changes. While this sounds reasonable, MoC maturity varies widely. Energy Institute guidance and several major companies also recognise that the extent of verification activities differs between the “Design” and “Operational” phases of a facility. In my experience, it’s best to align “Operational” phase verification with the typical five-year safety case revalidation, using the initial verification as an opportunity to confirm key areas of design suitability as well as emerging trends that may negatively affect the SCE.
  • Narrow verification scope: I frequently see SCEs being labelled “verified” after only a review of design or maintenance records. True verification encompasses much more: ensuring the SCE was allocated via a robust safety assessment, is integrated into operational and maintenance systems, has an appropriate performance standard, has its installation verified in the field (where practical), and is managed in accordance with the safety management system.

In many cases, effective verification requires a team of competent persons, not just a single Independent Competent Person (ICP), to cover the full technical range of SCE management.
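To illustrate how much wider that scope is than a records review, here is a hypothetical checklist sketch (my own illustration, not a prescribed industry format) that only reports “verified” when every dimension is evidenced:

```python
# Hypothetical verification scope - an illustration only, not a prescribed format.
VERIFICATION_SCOPE = [
    "Allocated via a robust safety assessment with line of sight to an MI scenario",
    "Covered by a performance standard with objective acceptance criteria",
    "Integrated into operational and maintenance management systems",
    "Installation confirmed in the field (where practical)",
    "Managed in accordance with the safety management system",
]

def is_verified(evidence: dict[str, bool]) -> bool:
    """Only report 'verified' when every scope item has supporting evidence."""
    gaps = [item for item in VERIFICATION_SCOPE if not evidence.get(item, False)]
    for gap in gaps:
        print(f"GAP: {gap}")
    return not gaps

# A maintenance-records-only review leaves most of the scope unevidenced:
records_only = {VERIFICATION_SCOPE[2]: True}
print(is_verified(records_only))  # False - the scope was too narrow
```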

Governance

Governance of SCEs is typically achieved through Key Performance Indicators (KPIs) that track both the performance of the SCEs and the supporting elements of the SMS. The right KPIs provide insight into whether each control is meeting the intent of its Performance Standard.

In practice, I’ve seen sites try to monitor too many KPIs, which can overwhelm teams and reduce the value of the information collected. From my experience, being selective and targeted is far more effective, so start with a small set of meaningful KPIs.


A practical starting point for SCE-related KPIs includes:

  • Failures during operation or maintenance
  • Overdue maintenance
  • Number of SCEs bypassed, currently and in the past month
  • Demands placed on SCEs while in service
These indicators are not only simple to monitor, but they also highlight real operational risks and help prioritise attention where it matters most.
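As a simple illustration of how these four indicators could be pulled from routine records, here is a sketch in Python (the record fields and data are my own assumptions, not a standard schema):

```python
from datetime import date

# Hypothetical SCE status records - field names and values are illustrative.
sce_records = [
    {"tag": "PSV-101", "failed": False, "maintenance_due": date(2025, 11, 1),
     "bypassed": False, "demands": 0},
    {"tag": "ESD-201", "failed": True, "maintenance_due": date(2025, 9, 15),
     "bypassed": True, "demands": 2},
    {"tag": "FG-301", "failed": False, "maintenance_due": date(2025, 8, 30),
     "bypassed": False, "demands": 1},
]

today = date(2025, 10, 22)

kpis = {
    "failures during operation or maintenance": sum(r["failed"] for r in sce_records),
    "overdue maintenance": sum(r["maintenance_due"] < today for r in sce_records),
    "SCEs currently bypassed": sum(r["bypassed"] for r in sce_records),
    "demands placed on SCEs in service": sum(r["demands"] for r in sce_records),
}

for name, value in kpis.items():
    print(f"{name}: {value}")
```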


Conclusion

Managing critical controls effectively is where the effort invested in assessing risks and major incidents translates into improved safety. While this article focuses on SCEs for MI prevention, with appropriate scaling the principles can apply to other critical controls and risk categories outside of People Safety.

Implementing a strong control framework requires cross-department coordination and effort, which can be resource-intensive. Unfortunately, this implementation can often overwhelm operators, especially when the task and deadlines are compliance-driven. My recommendation is to start with a focused, achievable scope, demonstrate success, and then expand the number of controls and governance measures in a phased approach. This iterative approach helps embed the right culture and keeps controls reliable throughout a facility’s lifecycle.


For more information about SCE verification, have a look at: CONTROL VERIFICATION

Ensure your safety systems work when they’re needed most. CONTACT SAFETY SOLUTIONS regarding Control Management guidance and Verification services.