By Sean Gotora, 2024 | Strategy, Risk & Resilience | MBA (Strategy), BSc Eng. (Chemical Engineering), CRMA, SIRM, CERM, CBCP, RMP, CAPM.
In over 16 years as a risk practitioner, spanning both technical and management roles, I have seen organisations recover, and at times helped them recover, from weak methodologies, poorly applied frameworks, and immature techniques with relatively modest effort. These deficiencies, while limiting, are visible and correctable. What I have not seen is an organisation recover from a negative risk culture without a fundamental overhaul. Where truth is suppressed, filtered, or reshaped to fit a narrative, no framework is strong enough to compensate. The system does not fail because it lacks structure; it fails because it no longer allows itself to see clearly.
Risk management rarely fails because of missing frameworks, weak methodologies, or immature tools. It seems counterintuitive, but even when these elements are well established, risk management will still fail when organisations systematically filter the truth of risk as it moves toward power, that is, toward top management. This is especially pronounced in large hierarchical organisations: by the time risk reaches executives and boards, it has often been softened, diluted, or reframed into something safe enough to ignore, because the lens of subjectivity shifts with each successive layer of bureaucracy.
Across industries, geographies, and maturity levels, a similar pattern repeats. Risk functions exist, frameworks and methodologies exist regardless of complexity, risk registers are populated, dashboards are produced, and committees convene. On the surface, all the right things are being done: the assessments were conducted correctly, and in some cases risk maturity has been independently assessed as advanced or leading. Yet strategic failures persist; value destruction and project collapses still occur with remarkable predictability, even when the risk assessment was technically sound. Such negative outcomes are routinely described as unforeseeable, yet they rarely are.
The flaw is not technical; it is structural and cultural, and it intensifies within large hierarchies.
When Risk Becomes Subservient, Failure Is Inevitable
Risk management collapses the moment it becomes dependent on management comfort. In theory, the model is clear: effective risk management requires first-line ownership, second-line challenge and assurance of risk management, and third-line independent assurance. In practice, the distinction often blurs because both the first and second lines operate within the same unspoken cultural context: "do not destabilise the narrative." Risk professionals learn quickly and precisely which messages are rewarded and which are punished, which advance careers and which stall them. Over time, the organisation selects for reassurance over accuracy and consistency over truth, at the expense of objective truth and actionable, risk-based insight.
The result of reassurance and self-preservation is that dashboards turn green, emerging risks become "monitoring items", and structural weaknesses are reframed as temporary execution issues. Nothing is technically false, but nothing is fully true. Risk does not disappear; it goes underground. This is self-deception, not effective governance.
Fear: The Most Undeclared Risk Driver
In organisations where dissent carries career consequences, risk communication becomes performative. The cost of going against the narrative is visible and personal, while the cost of staying silent is distributed and invisible; silence is simply less painful. Risk professionals, as rational actors, adapt accordingly. They need no explicit instruction to self-censor; executive reactions, informal signals, exclusion from forums, and stalled progression do the work. Once fear enters the system, objectivity exits, and even the best risk manager with the best frameworks and tools finds it increasingly difficult to do the job effectively.
This dynamic is especially destructive in large, complex, capital-intensive, politically visible initiatives. In entities where ambition, political capital, and reputational stakes are high, the tolerance for bad news collapses, and requests to reframe the narrative arrive at each successive level of the hierarchy. Risks that threaten delivery, cost, or the strategic narrative are suppressed until they can no longer be denied. By the time the "truth" reaches the decision makers, time and resources have been depleted and decision options have vanished. These failures are not caused by an absence of risk identification, but by the absence of permission to speak honestly.
The Aircraft Analogy: How Organisations Crash While Flying Confidently
Risk management is to leadership what instrumentation is to a pilot. Direction, speed, altitude and safety depend on accurate signals about threats, constraints, and system conditions. Pilots do not rely on optimism; they rely on instruments that report objective reality. Before any flight, operational systems are checked and verified. During the flight, decisions are continuously adjusted based on engine performance, weather radar, communications with ground control, and predictive systems such as the Terrain Awareness and Warning System (TAWS), which replaced the reactive Ground Proximity Warning System (GPWS).
These instruments are not designed to preserve the confidence of the pilot or maintain the narrative that everything is fine. They are designed to prevent catastrophe.
Now invert that logic. Imagine the radar tuned to downplay storm severity or the proximity of another aircraft; terrain warnings softened into ambiguity ("you are still above the ground" instead of a clear "pull up"); engine anomalies suppressed until failure is unavoidable, while every display remains reassuringly green. The aircraft does not crash because of pilot incompetence; it crashes because the system failed to tell the truth and denied the pilot the chance to make the correct decision at the right time.
Executives and boards are the pilots of the complex organisational aircraft, and risk management is the instrumentation layer, integrating internal signals and external intelligence. Its function is not reassurance; it is signal fidelity. It must indicate both stability and deviation with equal precision. If instability is masked in the name of avoiding alarm, leadership is not managing risk; it is navigating blind.
The question is not whether signals exist. The question is how those signals are processed, filtered, and translated into action.
Hierarchy as a PID Controller That Filters Risk
A PID controller is a feedback mechanism used to keep a system stable by adjusting outputs based on deviation from a target state. It continuously measures error and applies three concurrent responses: proportional, integral, and derivative. When tuned well, it stabilises systems. When tuned poorly, it suppresses critical signals and creates hidden fragility.
Each component performs a distinct function. Proportional control reacts to the size of the error in the present moment, providing immediate correction but never fully eliminating deviation. Integral control accumulates past error and forces correction over time, ensuring that persistent deviation is removed but introducing the risk of bias from historical conditions. Derivative control responds to the rate of change, anticipating future states and dampening volatility, but often at the cost of slowing necessary escalation.
In an ideal configuration, these three components operate in parallel, each contributing independently to a combined control action that balances speed, accuracy, and stability.
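The tuning argument can be made concrete with a small simulation. The sketch below is illustrative only: the gains, the simple plant model (a pure integrator, x' = u), and the 90% threshold are assumptions chosen to show the contrast, not a model of any real organisation. A responsively tuned controller closes the gap to its target quickly; a controller tuned for dampening (weak proportional gain, no integral memory) takes many times longer to act on the same deviation.

```python
def pid_track(kp, ki, kd, setpoint=1.0, steps=200, dt=0.1):
    """Drive a simple plant (x' = u) toward the setpoint with a discrete
    PID controller; return the first step at which the state reaches
    90% of the setpoint (or `steps` if it never does)."""
    x = integral = prev_error = 0.0
    for n in range(steps):
        error = setpoint - x
        integral += error * dt                  # integral: accumulated past error
        derivative = (error - prev_error) / dt  # derivative: rate of change
        u = kp * error + ki * integral + kd * derivative  # combined control action
        prev_error = error
        x += u * dt                             # plant responds to the control action
        if x >= 0.9 * setpoint:
            return n + 1
    return steps

# Responsive tuning reacts within a handful of steps (hypothetical gains);
# "hierarchy-like" tuning, with the proportional response damped to a tenth
# and no integral memory, takes an order of magnitude longer.
fast = pid_track(kp=2.0, ki=0.5, kd=0.1)
slow = pid_track(kp=0.2, ki=0.0, kd=0.0)
print(fast, slow)
```

The point of the sketch is not the specific numbers but the shape of the behaviour: damping the response does not remove the deviation, it only delays recognition of it, which is precisely the organisational pattern described below.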
Large organisations exhibit the same structure but often with poor tuning. Proportional behaviour dampens strong risk signals immediately; the more disruptive the message, the faster it is softened to maintain perceived stability. Integral behaviour embeds historical bias; prior false alarms reduce sensitivity to new warnings, even when the underlying conditions have materially changed. Derivative behaviour suppresses rapid escalation; fast-moving risks are slowed through layered reviews, alignment cycles, and governance choreography until response windows narrow or disappear entirely. The result is not stability. It is delayed recognition, muted escalation, and systemic exposure masked as control.
The system is not optimised for truth fidelity. It is optimised for narrative stability, executive confidence, and reporting for the sake of reporting. Leadership, like a pilot unaware that the instrumentation is faulty, believes it has situational awareness when in reality it is flying at risk on lagging instruments.
Executive Enablement: Where Truth Is Trained to Fail
If the organisation behaves like a poorly tuned control system, it is because its leaders have calibrated it that way, whether consciously or not. Executives are not passive recipients of risk signals; they are the primary filter through which those signals are interpreted, reshaped, or suppressed.
When risk is received as criticism rather than input, when messengers are subtly penalised through exclusion, tone, or slowed progression, and when alignment is prioritised over accuracy, the organisation adapts quickly. It learns that truth is conditional. It learns that escalation carries a cost. Over time, signals are no longer distorted by process alone; they are pre-filtered at source by the people generating them.
Most executives will state a preference for transparency, but that preference often holds only within boundaries: so long as it does not disrupt momentum, challenge authority, or introduce reputational exposure. Within those constraints, risk ceases to function as decision support. It becomes a validation mechanism, reinforcing confidence rather than informing judgment. What presents as alignment is, in practice, narrative control.
The Board’s Silent Complicity
This dynamic does not stop at the executive layer; it is completed at the board. Boards that accept curated narratives without challenge effectively ratify the filtering system beneath them. When oversight relies exclusively on executive-framed reporting, when independent risk voices are not heard directly, and when confidence is mistaken for control, governance shifts from scrutiny to endorsement.
Effective boards operate differently. They do not seek comfort in risk discussions; they expect friction. They understand that well-articulated risk should disrupt assumptions, expose trade-offs, and sharpen the context in which decisions are made. Discomfort, in this setting, is not a signal of dysfunction but of engagement with reality.
The absence of discomfort is the stronger warning. It suggests that signals have already been moderated before reaching the board, that the system presents coherence rather than truth. Executives often interpret tension as misalignment or inefficiency, yet in practice, it is frequently the point at which the organisation is forced to confront what could go wrong. Without that confrontation, decisions are made on incomplete information, regardless of how confident they appear.
Truth Under Pressure: The Only Basis for Trust in Risk
Within this system, the role of the risk function is routinely misunderstood. It is not to align with management, reinforce prevailing narratives, or maintain organisational comfort. Its role is to preserve signal integrity under pressure, particularly when that pressure is highest.
Trust, in this context, does not emerge from agreement or alignment. It is built through consistency, specifically, the demonstrated willingness to present inconvenient, unpopular, or costly truths without dilution. Leadership only begins to rely on risk when it recognises that the message will remain stable regardless of circumstance, hierarchy, or consequence. Until then, risk is tolerated, not trusted.
That trust is inherently asymmetric. The risk function must be prepared to lose favour before it gains reliance. If it calibrates itself to management comfort, it may secure access and short-term acceptance, but it does so by eroding its own credibility. At that point, it ceases to function as a governance mechanism and becomes a reporting utility producing outputs that are easy to consume but no longer reliable for decision-making.
This is where the structural reality becomes unavoidable. The limitation is not technical. Frameworks, models, and methodologies are largely sufficient. The failure lies in how the system treats truth.
Risk management must be both structurally and culturally protected to report objective reality without fear of retaliation, marginalisation, or career consequences. Without that protection, every control dynamic described earlier will drift toward suppression, regardless of design intent. Signals will be softened, escalation delayed, and uncertainty reframed into reassurance as information moves up the hierarchy.
Once that substitution takes hold, when comfort replaces accuracy, the outcome is no longer uncertain. Signals weaken, response windows narrow, and exposure accumulates beneath a surface of apparent control. At that point, failure is not a risk to be managed. It is an eventuality governed only by timing.
Weak methodology produces visible, diagnosable failure: misaligned assessments, poor prioritisation, and gaps in coverage. These deficiencies are technical; they surface quickly and can be corrected through discipline, frameworks, and capability. Truth degradation operates differently. It produces invisible, systemic failure: outputs that appear coherent and controlled, yet are directionally wrong, masking the accumulation of exposure beneath a surface of alignment.
The distinction is fundamental. Weak methodology distorts how risk is analysed; truth degradation distorts what risk is allowed to be seen. Once the signal itself is compromised upstream, no methodology, regardless of sophistication, can recover it downstream. The system is operating on corrupted input.
That is why the fatal flaw in risk management is not methodological weakness; it is truth degradation within hierarchy.