Which of the following methodologies is concerned with noticing weak signals?

Enhancing horizon scanning by utilizing pre-developed scenarios: Analysis of current practice and specification of a process improvement to aid the identification of important ‘weak signals’

Emily Rowe, George Wright, James Derbyshire

https://doi.org/10.1016/j.techfore.2017.08.001

Under a Creative Commons license

Open access

Highlights

We synthesize the extant literature to establish the relationship between Scenario Planning and Horizon Scanning.

We provide a step-by-step method for developing scenarios for subsequent Horizon Scanning.

The Backwards Logic Method for developing scenarios is best integrated with subsequent Horizon Scanning.

Abstract

This paper documents the Intuitive Logics scenario planning process and its relationship with horizon scanning activity in order to evaluate the separate and joint usefulness of these methods for anticipating the future. The specific objectives of this paper are to: (i) identify and differentiate scenario planning and horizon scanning methodologies, (ii) discuss and evaluate their analytic underpinnings, and (iii) critically appraise their separate and combined value and effectiveness in relation to enhancing organizational preparedness for the future. Our analysis culminates with specifications to (iv) enhance the identification of ‘weak signals’ in Horizon Scanning by utilizing a systematically broadened range of both negatively-valenced and positively-valenced scenario storylines.


Keywords

Horizon scanning

Scenario planning

Weak signals


Cited by (0)

Emily Rowe is a Business & Management PhD candidate in the Operational Research & Management Science Group at Warwick Business School, The University of Warwick, UK.

George Wright is a Professor at Strathclyde Business School, UK, and Associate Fellow at Warwick University, UK. He has published numerous papers on forecasting, decision-making, scenario planning and the use of expert judgement. He is co-author of the book ‘Scenario Thinking: Practical Approaches to the Future’, and an editor of the International Journal of Forecasting.

James Derbyshire is a Senior Research Fellow in the Centre for Enterprise and Economic Development Research, Middlesex University, UK. He has considerable experience in carrying out forecasting and planning research for governments and business. He worked for four years as a Senior Economist at the forecasting firm Cambridge Econometrics, and has also worked for RAND Corporation, the European Commission and Capgemini Consulting.

A framework to identify and respond to weak signals of disastrous process incidents based on FRAM and machine learning techniques

Mengxi Yu, Hans Pasman, Madhav Erraguntla, Noor Quddus, Costas Kravaris


https://doi.org/10.1016/j.psep.2021.11.030

Abstract

Most incidents in complex systems such as process plants are not chance events; weak signals emerge for a long time before incidents occur. It is necessary to identify and respond to weak signals as early as possible to prevent incidents. However, in the era of Industry 4.0, recognizing weak signals in the abundance of data is challenging. Since the terminology “weak signal” has not been precisely defined, the study first proposed a formal definition of “weak signal” and discussed its characteristics. Additionally, a framework was developed to address the challenges of observing, evaluating, and responding to weak signals in complex systems. The framework first utilized the Functional Resonance Analysis Method (FRAM) to determine the information to be collected for observing weak signals. Then, the relevance of weak signals and the corresponding responses were evaluated by utilizing Balanced Random Forest (BRF), Weighted Random Forest (WRF) and Decision Tree (DT) classification models. The case study of a hypothetical batch process showed great potential for applying the framework in real operations. Based on the potential weak signals indicated by FRAM, probabilities of temperature deviations in the process were predicted with high accuracy by the optimal BRF model. Underlying weak signals and the corresponding responses to reduce these probabilities were identified from the DT.

Introduction

Process safety management programs have been improved and implemented more and more effectively over the years; however, incidents still occur globally in process industries such as chemical plants and refineries (Halim and Mannan, 2018, Kannan et al., 2016). It is important to ask how we can further prevent incidents and understand incident mechanisms more clearly. Although randomness may play a role, incidents are often not chance events that suddenly occur out of nowhere. Incidents follow incubation periods in which chains of discrepant events develop and accumulate without notice (Turner and Pidgeon, 1997). If early warnings or weak signals are recognized and managed in time, the incidents can be prevented (Øien et al., 2011). The disastrous vapor cloud explosion at BP Texas City in 2005 was a tragic example of ignoring weak signals (Hopkins, 2008; Le Coze, 2008; Nicolaidou et al., 2021). The incident investigation showed that multiple weak signals existed in the plant before the incident occurred (Hopkins, 2008). The sight glass for verifying the tower level had not been functional for years, and the malfunction of the level transmitter was identified before the start-up. With these failures, the overfill of the raffinate splitter distillation tower could not easily be noticed. Additionally, other weak signals such as overtime shifts and insufficient supervision and staffing made matters worse, and the hydrocarbon overfill was not recognized until the pressure rose. All of these weak signals had existed at the plant before the incident but were not resolved. Recently, Pasman (2020) gave a series of examples in which signals were recognized on the work floor but leadership/management did not act upon them. In order to prevent incidents, weak signals need to be recognized proactively and resolved as early as possible (Drupsteen and Wybo, 2015).

The concept of weak signals was first proposed by Ansoff and McDonnell for strategic planning and management, as “imprecise early indications about impending, impactful events” (Ansoff and McDonnell, 1990). Building on this definition, researchers emphasized that weak signals have crucial meaning for the future and indicate a threat or opportunity for a business, but are not mature at the time they appear (Coffman, 1997, Holopainen and Toivonen, 2012). The concept has been applied in multiple domains for anticipating the future, such as technology foresight (Tabatabaei, 2011), defense (Koivisto et al., 2016), and natural disasters (Shelly et al., 2007). Since the definition can differ by context, this study summarizes several definitions and characteristics of weak signals in the domain of safety.

Vaughan (1997) defined weak signals as ambiguous information not showing clear threats to safety. The definition was later expanded and specified, referring to a weak signal as a technical anomaly that “has no clear and direct connection to a potential danger, or that only occurs once and does not seem likely to occur again” (Vaughan, 2002, Vaughan, 2004). Similarly, Weick et al. (1999) defined weak signals as technical anomalies observed during operations. However, instead of treating weak signals only as technical issues, Guillaume (2011) pointed out that weak signals could be re-occurring technical failures or deficiencies at upper management levels, though they remain ambiguous for anticipating future events. Additionally, Hollnagel (2004) proposed a definition of weak signals that reveals why weak signals are ambiguous. A complex system involves multiple functions, ranging from technological and human to organizational functions. Each function has its own performance variability. A weak signal can be the performance variability of one function while, relatively, the rest are noise. A weak signal is ambiguous because it does not cause detectable effects until it combines with noise, and its contribution to a hazard is only amplified by that combination (Hollnagel, 2004).

“Weak signal” is not a common terminology in the domain of safety. Several terminologies capture a similar spirit, such as precursors, early warning signals/signs, and leading indicators, and they add to the confusion in understanding weak signals. For example, precursors have various definitions (Carroll, 2004, Körvers, 2004, Kunreuther et al., 2004, Saleh et al., 2013), and the one defined by Körvers (2004) is the same as the technical weak signal defined by Guillaume (2011). “Early warning signal” is treated as a terminology interchangeable with “weak signal”, an early warning of a precursor (Luyk, 2011). Additionally, a leading indicator is another form of early warning used to evaluate overall safety or risk in a system, but it is not interchangeable with a weak signal (Øien et al., 2011). Utilizing leading indicators directly as signals for incident prediction is not feasible since the correlation between most leading indicators and event realization is unknown, vague, and unpredictable (Körvers, 2004, Luyk, 2011, Øien et al., 2011).

Even though weak signals are defined differently, the characteristics of weak signals addressed in the literature are consistent. First, weak signals emerge for a long time before incidents and indicate potential risks of impactful events. They can be too early to be precise, but they provide opportunities to respond (Guillaume, 2011, Luyk, 2011). Second, it is difficult to recognize weak signals since they cannot be interpreted in isolation, and the impact of a weak signal is not noticeable until it combines with other signals (Brizon and Wybo, 2009, Hollnagel, 2004, Luyk, 2011). Besides, due to the existence of noise, weak signals can only be interpreted after appropriate filtering and processing (Brizon and Wybo, 2009, Guillaume, 2011). Without tools to help people interpret weak signals, the interpretation process becomes subject to individuals’ experience, knowledge, and willingness. When too much information is recorded in a system, weak signals are rarely picked up and connected (Guillaume, 2011, Vaughan, 1997, Vaughan, 2002). A few studies (Brizon and Wybo, 2009, Guillaume, 2011, Körvers, 2004, Luyk, 2011) aimed to help industries improve their ability to identify weak signals, but they mainly focused on improving organizational management qualitatively, instead of developing techniques to identify weak signals and predict incidents proactively.

On the other hand, in the process control of continuous processes, much work has been done on what is called Fault Detection & Diagnosis (FDD) to provide early warnings of process deviations. A few recent FDD studies are by (Cheng et al., 2021, Fazai et al., 2019, Nhat et al., 2020), and many more are summarized in several review papers (Alauddin et al., 2018, Arunthavanathan et al., 2021, Nor et al., 2019, Venkatasubramanian et al., 2003a, Venkatasubramanian et al., 2003b, Venkatasubramanian et al., 2003c). FDD aims to automatically recognize deviations of monitored process variables by filtering noise disturbances, providing early warnings through the safety instrumentation system for abnormal situation management. Such early warnings are necessary to alert operators to deviations that have already occurred during operation; however, the time available for interventions can be limited, resulting in uncontrollable consequences. From the perspective of preventive safety, this work goes beyond the scope of FDD by identifying weak signals relating to, for example, organizational performance, equipment maintenance, and human factors. Identifying such weak signals is critical for monitoring the safety performance of organizations based on risks resulting from their interactions, and for guiding organizations to take appropriate corrective actions to eliminate or mitigate degradations in safety management.
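To make the deviation-alarm idea behind FDD concrete, here is a minimal toy sketch (not any of the methods cited above; the function name, smoothing constant, and threshold are illustrative assumptions): it estimates the noise level over a warm-up window and flags readings that drift from a smoothed baseline by more than a multiple of that noise.

```python
import statistics

def ewma_alarms(readings, alpha=0.2, k=3.0, warmup=20):
    """Flag indices where a reading drifts from an exponentially weighted
    moving-average baseline by more than k times the noise level estimated
    over the warmup period. A toy stand-in for FDD-style noise filtering."""
    sigma = statistics.pstdev(readings[:warmup]) or 1e-9  # noise estimate
    baseline = readings[0]
    alarms = []
    for i, x in enumerate(readings):
        if i >= warmup and abs(x - baseline) > k * sigma:
            alarms.append(i)
        baseline = alpha * x + (1 - alpha) * baseline  # smooth out noise
    return alarms
```

With a steady signal carrying small alternating noise, no alarm is raised; a step change well outside the estimated noise band is flagged from its first sample onward.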

Since weak signals have not been precisely defined in the domain of safety, a formal definition needs to be proposed. Among the existing definitions mentioned above, the definition proposed by Hollnagel (2004) provides the most clarity by emphasizing the multiple sources of weak signals and indicating that a weak signal alone cannot cause impactful events. However, weak signal and noise were defined as relative terms depending on the focus, and the difference between them needs clarification. Additionally, Hollnagel (2004) states that a detectable hazard is caused by the interaction between a weak signal and its noise. According to the Cambridge dictionary, a signal is something that “gives a message or a warning”. It is more sensible to treat anything representing a sign of a hazard as a signal, rather than as noise. In view of the definition by Hollnagel (2004) and the understanding of weak signals in other literature, this study proposes new definitions of weak signals and noise:

Weak signals are performance variabilities of technological, organizational, or human functions whose interactions combine clues or signs that give rise to early prediction of a future unexpected event/incident.

Noise consists of performance variabilities of the functions that have no or negligible impact on a future event and do not provide information about it.

Compared to the definition by Hollnagel (2004), the proposed definition differentiates signals from noise depending on whether they contribute to predicting future events. Additionally, the definition of weak signals addresses three aspects. First, weak signals are not restricted to the technological level; they can be performance variabilities of human or organizational functions. Second, weak signals exist as combinations: an individual weak signal does not cause noticeable impacts until it interacts with others; otherwise, it should be treated as a strong signal. Lastly, weak signals are early predictors. Even though “early” is a characteristic well recognized in the existing literature, the scope of weak signals in terms of how early is early was rarely clarified. Only Luyk (2011) explicitly stated that weak signals need to be early enough to predict incident precursors, which are precisely defined as “a chain of adverse events following an initiating off-nominal event and that can lead to an accident” (Saleh et al., 2013). Following the concepts of weak signals and precursors, this study further specifies the characteristic “early” of weak signals. Since precursors are adverse events leading to an incident, in order to be early enough to indicate the precursors, weak signals are restricted to conditions that could warn of adverse events. For example, a relief valve that fails is a precursor event, but unqualified maintenance of the relief valve is a condition that can be a weak signal of the failure.

The life cycle from identifying to acting on weak signals consists of three stages (Brizon and Wybo, 2009, Holopainen and Toivonen, 2012).

1. First, weak signals need to be observed in the system. This stage requires an organization to collect data on weak signals so that they can be monitored for further evaluation.

2. The second stage is evaluating the relevance of weak signals. At this stage, weak signals need to be interpreted based on knowledge and context to recognize what they indicate about future events. In most cases, the person who interprets a weak signal is not the one with the power to act. The relevance therefore needs to be evaluated so that the existence of weak signals can be transmitted to decision-makers once it is judged strong enough or meets a criterion in organizational standards.

3. Once the existence of weak signals is transmitted, decision-makers can determine whether to act on the weak signals and how to prioritize actions based on their significance.
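The three-stage life cycle described above can be sketched as a simple pipeline. This is an illustrative toy, not from the paper: the function names, record fields, and severity-based relevance scoring are assumptions.

```python
# Hypothetical sketch of the weak-signal life cycle:
# observe -> evaluate relevance -> transmit/respond.

def observe(raw_records, monitored_functions):
    """Stage 1: keep only records about functions selected for monitoring."""
    return [r for r in raw_records if r["function"] in monitored_functions]

def evaluate(signals, relevance_fn, threshold):
    """Stage 2: score each observed signal; keep those meeting the criterion."""
    return [s for s in signals if relevance_fn(s) >= threshold]

def respond(relevant_signals):
    """Stage 3: hand prioritized signals to decision-makers, highest first."""
    return sorted(relevant_signals, key=lambda s: s["severity"], reverse=True)

# Toy usage: three records, two monitored functions, severity as relevance.
records = [
    {"function": "maintenance", "severity": 2},
    {"function": "catering", "severity": 9},  # not monitored
    {"function": "staffing", "severity": 5},
]
queue = respond(evaluate(observe(records, {"maintenance", "staffing"}),
                         lambda s: s["severity"], threshold=3))
```

In this sketch `queue` contains only the staffing record: the catering record is never observed, and the maintenance record falls below the relevance criterion.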

Unfortunately, in complex systems such as process plants, challenges exist throughout the entire life cycle. Given the rapid development of computing technologies in the past decades, digitized process plants have the capability to collect a tremendous amount of data of various types from process operations, control rooms, and business and information systems (Pasman, 2020, Qin, 2014, Xu et al., 2015). The information collected depends on the purpose of its use and on knowledge, and it is questionable whether the collected data covers all the sources of weak signals relevant to a selected hazard. To ensure weak signals are observable in the first stage of the life cycle, a technique is necessary to guide organizations in deciding what information to collect. On the other hand, even when weak signals are observable, evaluating their relevance based on the knowledge and awareness of individuals in the second stage is still challenging. In process industries, operations involve interactions among technological, human, and organizational components (Leveson, 2004, Perrow, 1999, Rasmussen and Whetton, 1997). These interactions give rise to high complexity since they can aggregate into nonlinear and circular relationships leading to emergent failures (Cameron et al., 2017). The complexity can be intellectually unmanageable, making weak signals hard to identify and interpret (Leveson, 2000). Without solutions to the challenges of the first two stages, there is no guarantee that weak signals are identified and transmitted to decision-makers with an explanation of their significance. Furthermore, it is doubtful whether responses to weak signals are effective without an appropriate interpretation. In this study, a framework was developed to identify weak signals by addressing the challenges throughout the life cycle, which can help industry management prevent incidents proactively.

Learning from past anomalies and incidents plays a critical role in interpreting weak signals (Brizon and Wybo, 2009, Guillaume, 2011), but advanced techniques are necessary to extract this knowledge from massive historical data. Machine learning techniques have attracted increasing attention in the past decades for item classification and pattern recognition: they learn patterns from existing data and then make predictions about future events (Ge et al., 2017, Han et al., 2011, Xu et al., 2015). In the process industry, machine learning techniques are widely applied to process data for process and quality monitoring, fault identification, and as soft sensors (Qin, 2014). However, other information valuable for identifying weak signals, such as equipment failures, quality-related issues, and performance measurement (Haji‐Kazemi and Andersen, 2013, Körvers, 2004), has seldom been utilized to predict process incidents; instead, such information has commonly been used for preventing occupational incidents. Table 1 summarizes the studies that applied classification algorithms to extract knowledge from historical data. Most studies in the table utilized only historical incident data to understand the causes and consequences of past incidents, rather than predicting occurrences of potential incidents. On the other hand, data on both safe operations and incidents was used by (Goh and Chua, 2013, Goh et al., 2018, Poh et al., 2018, Sarkar et al., 2018), and these studies showed promising results in predicting incident occurrences from potential weak signals such as safety management elements. Therefore, applications of machine learning techniques in process industries should consider a wider range of data beyond process data to recognize and respond to the weak signals existing in plants.
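Incident data is typically heavily imbalanced (far more safe records than incidents), which is why the paper uses a Balanced Random Forest. The core idea of BRF is that each tree is grown on a class-balanced bootstrap sample, downsampling the majority class to the minority-class size. A minimal sketch of that resampling step, assuming nothing from the paper (the function name and interface are illustrative):

```python
import random

def balanced_bootstrap(X, y, seed=0):
    """Draw a bootstrap sample with equal counts per class: the resampling
    idea behind Balanced Random Forest, where each tree is trained on a
    balanced sample instead of the raw imbalanced data."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n = min(len(rows) for rows in by_class.values())  # minority-class size
    Xb, yb = [], []
    for label, rows in by_class.items():
        for _ in range(n):
            Xb.append(rng.choice(rows))  # sample with replacement
            yb.append(label)
    return Xb, yb
```

A forest built this way would pass a fresh seed to each tree, so every tree sees a different balanced sample while the ensemble as a whole still draws on all majority-class records.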

As discussed above, “weak signal” has not been precisely defined in the existing literature, and techniques to observe, evaluate, and respond to weak signals have not been developed for the process industry. This study proposes a formal definition of weak signals in the domain of safety. After reviewing the characteristics of weak signals and the challenges of identifying them, we developed a framework providing techniques to overcome the challenges throughout the life cycle of weak signals.

The rest of the paper is organized as follows. Section 2 details the development of the framework. Section 3 demonstrates the application of the framework through a case study of a hypothetical batch polymerization process; the performance of the machine learning techniques in the framework is discussed, and an example instance demonstrates how the framework can be applied in operations. Finally, conclusions and future directions are addressed in Section 4.

Section snippets

Framework overview

This sub-section provides an overview of the framework; the techniques involved will be detailed later in the section. Fig. 1 compares the life cycle of weak signals and the framework side by side, and the techniques that address challenges during each stage of the life cycle are listed correspondingly. To ensure information regarding potential sources of weak signals is collected by an organization, the framework starts by utilizing a system-based technique, i.e.,

Application in a batch polymerization process

The industrial-scale Methyl Methacrylate (MMA) polymerization process studied in Yu et al. (2020) was adopted in the current study. The framework to identify weak signals leading to potential temperature excursions was demonstrated through the case study. The batch process involves a field operator (FO) and a control room operator (CRO). In an 8-hour shift, the CRO enters the recipe of the batch on the control panel, authorizes the solvent feeding process, starts the agitator, and authorizes the

Conclusions and future work

Proactive incident prevention requires the identification of weak signals and appropriate responses to them. This study developed a framework based on FRAM and machine learning techniques, which provides explicit guidance for the entire life cycle of weak signals from identification to response. The case study of a hypothetical batch process showed great potential for applying the framework in real operations. Given the information on the potential weak signals that were extracted

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The research presented in this paper was supported by the Mary Kay O'Connor Process Safety Center at Texas A&M University. The data generation and the development of machine learning models were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing (HPRC). The consulting support provided by the HPRC staff is gratefully acknowledged. Additionally, the research has been dedicated to the memory of Dr. M. Sam Mannan, who initiated the research

References (98)

  • M. Yu et al., Development of a FRAM-based framework to identify hazards in a complex system, J. Loss Prev. Process Ind. (2020)
  • V. Venkatasubramanian et al., A review of process fault detection and diagnosis: Part I: Quantitative model-based methods, Comput. Chem. Eng. (2003)
  • V. Venkatasubramanian et al., A review of process fault detection and diagnosis: Part III: Process history based methods, Comput. Chem. Eng. (2003)
  • V. Venkatasubramanian et al., A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies, Comput. Chem. Eng. (2003)
  • A.J.-P. Tixier et al., Application of machine learning to construction injury prediction, Autom. Constr. (2016)
  • J.H. Saleh et al., Accident precursors, near misses, and warning signs: Critical review and formal definitions within the framework of discrete event systems, Reliab. Eng. Syst. Saf. (2013)
  • T. Rivas et al., Explaining and predicting workplace accidents using data-mining techniques, Reliab. Eng. Syst. Saf. (2011)
  • B. Rasmussen et al., Hazard identification based on plant functional modelling, Reliab. Eng. Syst. Saf. (1997)
  • C.Q.X. Poh et al., Safety leading indicators for construction sites: A machine learning approach, Autom. Constr. (2018)
  • R. Patriarca et al., Framing the FRAM: A literature review on the functional resonance analysis method, Saf. Sci. (2020)
  • K. Øien et al., Building safety indicators: Part 1 – theoretical foundation, Saf. Sci. (2011)
  • O. Nicolaidou et al., The use of weak signals in occupational safety and health: An investigation, Saf. Sci. (2021)
  • D.M. Nhat et al., Data-driven Bayesian network model for early kick detection in industrial drilling process, Process Saf. Environ. Prot. (2020)
  • G. Mistikoglu et al., Decision tree analysis of construction fall accidents involving roofers, Expert Syst. Appl. (2015)
  • N. Leveson, A new accident model for engineering safer systems, Saf. Sci. (2004)
  • R. Koivisto et al., Technol. Forecast. Soc. Change (2016)
  • K. Kim et al., On-line estimation and control of polymerization reactors, Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes (1993)
  • P. Kannan et al., A web-based collection and analysis of process safety incidents, J. Loss Prev. Process Ind. (2016)
  • K. Kang et al., Predicting types of occupational accidents at construction sites in Korea using random forest model, Saf. Sci. (2019)
  • M. Holopainen et al., Weak signals: Ansoff today, Futures (2012)
  • S.Z. Halim et al., A journey to excellence in process safety management, J. Loss Prev. Process Ind. (2018)
  • S. Guggari et al., Non-sequential partitioning approaches to decision tree classifier, Future Comput. Inform. J. (2018)
  • Y.M. Goh et al., Factors influencing unsafe behaviors: a supervised learning approach, Accid. Anal. Prev. (2018)
  • R. Fazai et al., Online reduced kernel PLS combined with GLRT for fault detection in chemical systems, Process Saf. Environ. Prot. (2019)
  • L. Drupsteen et al., Saf. Sci. (2015)
  • G. Douzas et al., Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE, Inf. Sci. (2018)
  • T.J. Crowley et al., On-line monitoring and control of a batch polymerization reactor, J. Process Control (1996)
  • R. Chylla et al., Temperature control of semibatch polymerization reactors, Comput. Chem. Eng. (1993)
  • I. Cameron et al., Process hazard analysis, hazard identification and scenario definition: Are the conventional tools sufficient, or should and can we do much better?, Process Saf. Environ. Prot. (2017)
  • M. Bevilacqua et al., Industrial and occupational ergonomics in the petrochemical process industry: a regression trees approach, Accid. Anal. Prev. (2008)
  • N. Barakat et al., Rule extraction from support vector machines: a review, Neurocomputing (2010)
  • R. Arunthavanathan et al., An analysis of process fault diagnosis methods from safety perspectives, Comput. Chem. Eng. (2021)
  • M. Alauddin et al., A bibliometric review and analysis of data-driven fault detection and diagnosis methods for process systems, Ind. Eng. Chem. Res. (2018)
  • T. Aluja-Banet et al., Stability and scalability in decision trees, Comput. Stat. (2003)
  • Ansoff, I., McDonnell, E. 1990. Implanting corporate strategy. Hemel...
  • Augasta, M.G., Kathirvalavakumar, T. 2012. Rule extraction from neural networks—A comparative study. Paper presented at...
  • B. Baesens et al., Using neural network rule extraction and decision tables for credit-risk evaluation, Manag. Sci. (2003)
  • Bastani, O., Kim, C., Bastani, H. 2017. Interpreting blackbox models via model extraction. arXiv preprint...
  • J. Bergstra et al., Random search for hyper-parameter optimization, J. Mach. Learn. Res. (2012)
  • Boström, H. 2008. Calibrating random forests. Paper presented at the 2008 Seventh International Conference on Machine...
  • L. Breiman, Random forests, Mach. Learn. (2001)
  • A. Brizon et al., Int. J. Emerg. Manag. (2009)
  • Carroll, J.S. 2004. Knowledge management in high-hazard industries. Accident precursor analysis and management:...
  • Caruana, R., Karampatziakis, N., Yessenalina, A. 2008. An empirical evaluation of supervised learning in high...
  • Caruana, R., Niculescu-Mizil, A. 2006. An empirical comparison of supervised learning algorithms. Paper presented at the...
  • Chawla, N.V., Cieslak, D.A. 2006. Evaluating probability estimates from decision trees. Paper presented at the American...
  • C. Chen et al., Using Random Forest to Learn Imbalanced Data (2004)
  • H. Cheng et al., Rebooting Kernel CCA method for nonlinear quality-relevant fault detection in process industries, Process Saf. Environ. Prot. (2021)
  • Coffman, B. 1997. Weak signal research, part I: Introduction. Journal of Transition Management,...
  • View more references

    Cited by (2)

    • A combined real-time intelligent fire detection and forecasting approach through cameras based on computer vision method

      2022, Process Safety and Environmental Protection


      Fire is one of the most common hazards in the process industry. Until today, most fire alarms have had very limited functionality. Normally, only a simple alarm is triggered without any specific information about the fire circumstances provided, not to mention fire forecasting. In this paper, a combined real-time intelligent fire detection and forecasting approach through cameras is discussed with extracting and predicting fire development characteristics. Three parameters (fire spread position, fire spread speed and flame width) are used to characterize the fire development. Two neural networks are established, i.e., the Region-Convolutional Neural Network (RCNN) for fire characteristic extraction through fire detection and the Residual Network (ResNet) for fire forecasting. By designing 12 sets of cable fire experiments with different fire developing conditions, the accuracies of fire parameters extraction and forecasting are evaluated. Results show that the mean relative error (MRE) of extraction by RCNN for the three parameters are around 4–13%, 6–20% and 11–37%, respectively. Meanwhile, the MRE of forecasting by ResNet for the three parameters are around 4–13%, 11–33% and 12–48%, respectively. It confirms that the proposed approach can provide a feasible solution for quantifying fire development and improve industrial fire safety, e.g., forecasting the fire development trends, assessing the severity of accidents, estimating the accident losses in real time and guiding the fire fighting and rescue tactics.

    • A dynamic human-factor risk model to analyze safety in sociotechnical systems

      2022, Process Safety and Environmental Protection


      The performance of sociotechnical elements varies owing to a wide range of endogenous and exogenous influencing factors. These are called uncoupled variability as per Safety-II. The uncoupled variability has drawn rare attention, despite its vital importance in major accidents analysis as per Safety-I and Safety-II paradigms. Accordingly, as the first attempt, this study proposes a systematic model to analyze performance variability in human, organizational, and technology-oriented functions caused by various variability shaping factors (VSFs). The model contains three main phases. First, a FRAM (Functional Resonance Analysis Method) - driven Human-Organization-Technology Taxonomy is developed. Subsequently, Dempster - Shafer Evidence theory is employed to elicit knowledge under epistemic uncertainty. The proposed causation model is integrated into Dynamic Bayesian Networks to support decision-making under aleatory uncertainty. Finally, a criticality matrix is developed to evaluate the performance of the system functions to support decision-making. The proposed model is built considering the advanced canonical probabilistic approaches (e.g., Noisy Max and Leaky models) that address the critical challenges of incomplete and imprecise data. The proposed dynamic model would help better understand, analyze, and improve the safety performance of complex sociotechnical systems.
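Dempster-Shafer evidence theory, used above for knowledge elicitation, fuses two basic probability assignments m1 and m2 via Dempster's rule: the combined mass of A sums m1(B)·m2(C) over all pairs with B∩C = A, normalized by 1 − K, where K is the total mass of conflicting (empty-intersection) pairs. A minimal sketch with illustrative expert masses (not taken from the paper):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Two experts assessing whether a function's variability is High (H) or Low (L)
m1 = {frozenset({"H"}): 0.6, frozenset({"H", "L"}): 0.4}
m2 = {frozenset({"H"}): 0.5, frozenset({"L"}): 0.3, frozenset({"H", "L"}): 0.2}
print(combine(m1, m2))
```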

    • Research article

      Adiabatic kinetics calculations considering pressure data

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 374-381

      Pressure and temperature play an important role in the adiabatic decomposition of hazardous materials. In this article, an Accelerating Rate Calorimeter (ARC) was used to study the adiabatic decomposition of a 40 wt% dicumyl peroxide (DCP) solution, a 20 wt% di-tert-butyl peroxide (DTBP) solution and 2,4-dinitrotoluene (2,4-DNT). Based on the temperature and pressure signals, the kinetic constants of 40 wt% DCP and 20 wt% DTBP were calculated separately. It was found that the temperature and gas-production kinetics of the 40 wt% DCP solution agree well with each other, but those of the 20 wt% DTBP solution differ significantly. The adiabatic decomposition of 2,4-DNT under different pressure conditions was investigated experimentally. The results indicated that pressure promotes the adiabatic decomposition of 2,4-DNT. Accordingly, new kinetic equations were developed to describe the adiabatic decomposition of 2,4-DNT under different pressure conditions. This work provides new insight into adiabatic kinetics calculations.
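Kinetic constants of the kind extracted above are conventionally cast in Arrhenius form, k = A·exp(−Ea/RT), and can be recovered by linear regression of ln k against 1/T. A minimal sketch on synthetic data (the values of A, Ea and the temperatures are illustrative assumptions, not the paper's measurements or its actual fitting procedure):

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Synthetic rate constants generated from assumed A = 1e12 1/s, Ea = 120 kJ/mol
A_true, Ea_true = 1e12, 120e3
temps = [400.0, 420.0, 440.0, 460.0]  # K
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]

# Linearize: ln k = ln A - (Ea/R) * (1/T), then ordinary least squares
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
Ea_fit = -slope * R                    # activation energy, J/mol
A_fit = math.exp(ybar - slope * xbar)  # pre-exponential factor, 1/s
print(Ea_fit, A_fit)
```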

    • Research article

      Predictions and uncertainty quantification of the loading induced by deflagration events on surrounding structures

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 445-460

      The threat of accidental hydrocarbon explosions is of major concern to industrial operations; in particular, there is a need for design tools to assess and quantify the effects of potential deflagration events. Here we present a design methodology based on analytical models for assessing the loading and structural response of objects exposed to pressure waves generated by deflagration events. The models allow determining: i) the importance of Fluid-Structure Interaction (FSI) effects; ii) the transient pressure histories on box-like or circular cylindrical objects, including the effects of pressure clearing; and iii) the dynamic response of structural components that can be idealised as fully clamped beams. We illustrate the complete design methodology with three case studies and validate the analytical models by comparing their predictions to those of detailed CFD and FE simulations. We then employ the validated models in Monte Carlo analyses to quantify, for box-like structures, how uncertainty in the input design variables propagates through to the expected maximum force and impulse. We present this information in the form of non-dimensional uncertainty maps.
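The Monte Carlo step can be sketched generically: draw the uncertain inputs from assumed distributions, evaluate the load model for each draw, and summarize the spread of the output. The one-line "load model" and all distributions below are deliberately toy stand-ins, not the authors' analytical models:

```python
import random
import statistics

random.seed(0)

def peak_force(p_max, area):
    """Toy load model: peak force = peak overpressure x exposed area."""
    return p_max * area

# Uncertain inputs: peak overpressure (kPa) and exposed area (m^2)
samples = []
for _ in range(10_000):
    p = random.gauss(50.0, 5.0)  # mean 50 kPa, s.d. 5 kPa
    a = random.gauss(2.0, 0.1)   # mean 2 m^2, s.d. 0.1 m^2
    samples.append(peak_force(p, a))

# Propagated uncertainty in the peak force (kN)
print(statistics.mean(samples), statistics.stdev(samples))
```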

    • Research article

      Effects of temperature shocks on the formation and characteristics of soluble microbial products in an aerobic activated sludge system

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 231-241

      The present study investigated the impacts of temperature shocks on the generation of soluble microbial products (SMPs) in an aerobic activated sludge system, focusing on biopolymers and low molecular weight (LMW) substances that significantly impact the effluent quality. The results indicated that raising the temperature increased SMP production. At 25 and 35 °C, all size fractions of SMPs increased linearly with the aeration time, and biopolymers comprised a large proportion of SMPs. At 5 and 15 °C, in contrast, only biopolymers increased linearly with the aeration time, and LMW substances were the predominant fraction of SMPs. The reduced bio-utilization of SMPs with an increase in temperature was associated with the decreased relative abundance of LMW substances, which is supported by the assimilable organic carbon bioassay measurements. The mass of the biopolymers for the SMPs and extracellular polymeric substances (EPS) was balanced at all temperatures, wherein a negative correlation was observed, indicating that increased SMPs in the water phase led to a decrease in EPS. The results of the reactive oxygen species (ROS) and toxicity assays confirmed that the immune defense reaction of the bacteria (induced by ROS) was the key factor for variations in LMW substances in the SMPs and EPS under temperature stresses.

    • Research article

      Numerical study on the mechanism of air leakage in drainage boreholes: A fully coupled gas-air flow model considering elastic-plastic deformation of coal and its validation

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 134-145

      Air leakage caused by mining-induced fractures around the borehole and roadway greatly affects underground gas drainage. The study of an air-leakage model considering three-dimensional stress and the elastic-plastic deformation of coal is therefore of great significance for preventing air leakage. In this work, a model of air leakage outside the borehole in a mining-disturbed coal seam, comprising a fully coupled gas-air flow and coal mechanics model and a mining-induced damage permeability model, was developed and verified. The model was used to study the gas-air migration law and the air-leakage mechanism during gas drainage, and the influence of key parameters, including initial permeability and sealing depth, on the drainage effect was analyzed. The results show that: (1) the model can characterize the air leakage of pre-drainage boreholes in mining-disturbed coal seams; (2) the gas and air pressures in the severe mining-disturbance area rapidly decrease and increase, respectively, in a short time; and (3) increasing the initial permeability and sealing depth promotes gas flow into the borehole in the early stage, but reduces the gas concentration. These results provide a scientific theoretical basis for improving the gas drainage effect and ensuring mining safety.

    • Research article

      Experimental study on tilting behavior and blow out of dual tandem jet flames under cross wind

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 1-9

      Fire hazards in outdoor environments cause huge damage to process safety in fuel transportation and pipeline leakage. Little has been reported about the combustion behaviors and instability characteristics of dual jet flames under cross wind, which are important to risk assessment and manufacturing design. In this work, a series of propane fire sources with nozzle diameters of 3 mm, 4 mm and 5 mm were tested at different nozzle separation distances under cross wind. Results showed that the tilt angle of the rear jet flame was smaller than that of the front jet flame, owing to the reduced effective wind reaching the rear jet flame caused by the blockage of the front jet flame. Furthermore, the blow-out cross-wind velocity of the rear jet flame is smaller than that of the corresponding single jet flame: the combustion of the front jet flame restricts the rear jet flame to a certain extent. Finally, dimensional analysis and correlations are established, in which the blow-out cross-wind velocity of the rear jet flame, as well as that of the single jet flame, is well represented by the Froude number. These new observations and correlations are helpful for further understanding the combustion behaviors of dual tandem jet flames under cross wind.
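The Froude number used in such correlations compares inertial to buoyancy forces; in its simplest form Fr = u / sqrt(g d) for cross-wind velocity u and nozzle diameter d. A minimal sketch (the numerical values are illustrative only, and the paper's actual correlation may use a different characteristic length or scaling):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(wind_speed, nozzle_diameter):
    """Fr = u / sqrt(g * d): ratio of inertial to buoyancy forces."""
    return wind_speed / math.sqrt(G * nozzle_diameter)

# Illustrative: an 8 m/s cross wind over a 4 mm nozzle
print(froude_number(8.0, 0.004))
```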

    • Research article

      Experimental study of flame spread transition from chemistry to heat transfer controlled regime at sub-atmospheric pressure: The effect of sample width

      Process Safety and Environmental Protection, Volume 158, 2022, pp. 221-230

      Understanding whether the flame spread rate (FSR) is controlled by chemistry or by heat transfer is important for industrial process safety. In this study, we examine the effect of sample width on the flame-spread transition from the chemical to the thermal regime at sub-atmospheric pressure, using thin paper samples with widths from 10 mm to 90 mm. Results show that the transition boundary can be identified not only from the reported FSR but also from the flame image or radiation, as each has significantly different characteristics in the two regimes. We find that orientation significantly affects flame spread in the thermal regime but has negligible influence in the chemical regime. A width-dependent characteristic Damköhler number coupling lateral heat and mass transfer has been developed to analyze this transition. The Damköhler number increases with width only while the width is narrow; once the width exceeds a critical value, its effect on the Damköhler number can be neglected. For this reason, the transition pressure first shifts from 25 kPa (10 mm and 20 mm width) to 20 kPa (30 mm width), and then remains unchanged at 15 kPa (width > 50 mm). This work strengthens our understanding of the effect of width on fire risks in aircraft.
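The Damköhler number compares a flow (residence) timescale with a chemical timescale: large Da means chemistry is fast relative to transport, so flame spread is heat-transfer (thermal) controlled, while small Da indicates chemical control. A minimal sketch (the timescales and unit threshold are illustrative assumptions, not the paper's width-dependent formulation):

```python
def damkohler(flow_time, chem_time):
    """Da = flow (residence) timescale / chemical timescale."""
    return flow_time / chem_time

def controlling_regime(da, threshold=1.0):
    # Large Da: chemistry is fast relative to transport -> thermal regime.
    return "thermal" if da > threshold else "chemical"

print(controlling_regime(damkohler(0.10, 0.01)))  # Da = 10
print(controlling_regime(damkohler(0.01, 0.10)))  # Da = 0.1
```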
