Apocalypse Clock Methodology: From Evidence-Based Parameters to Dynamic-Cascade Critical Horizons
A technical explanation of how the present clock converts evidence-based source-map parameters into model-derived critical-horizon distributions.
The Apocalypse Clock is a quantitative systemic-risk horizon model. Its purpose is to translate documented threat indicators, calibrated uncertainty ranges, dependency pathways and scenario assumptions into comparable estimates of when interacting global risks may approach critical systemic pressure. The clock is therefore not a decorative metaphor placed on top of an opinion. It is the visible output of a computational pipeline whose internal steps are named, inspectable and mathematically constrained.
The current implementation evaluates 23 systemic threats across three domains: civilizational, biospheric and technological. Each threat is represented by six MCDA dimensions, an annualized effective systemic risk-growth parameter and a destabilization threshold. These eight parameters are loaded through a flat source-map schema, where every entry has the form threat.metric and carries a central estimate, an uncertainty interval and source metadata.
The model proceeds in layers. First, source-map parameters define the threat state. Second, scenario conditioning shifts scores, domain multipliers and growth rates. Third, MCDA scoring computes base pressure. Fourth, dependency amplification and domain normalization transform base pressure into adjusted priority. Fifth, a process-specific horizon model estimates threat-level threshold crossing. Sixth, Monte Carlo sampling repeats this calculation under uncertainty. Seventh, four structural aggregation algorithms convert threat-level horizons into system-level crossing distributions. Finally, the live calculation console and scientific panel render diagnostic checks that explain sensitivity, concentration, network influence and tail behavior.
This paper describes the mechanism of the current clock, not the older stored-horizon design used by earlier methodology files. The older files are used here only as a visual reference for paper layout, collapsible sections, mathematical exposition, dark and paper modes and compact mobile presentation.
Synchronization note. This methodology version is synchronized with Apocalypse Clock v1.2.6, DATASET_VERSION = v1.2.6, and the primary bundled source map data_v1_7_1metadata_revision.json.
Model class
Quantitative systemic-risk horizon model with MCDA scoring, process-specific thresholds, dependency amplification, Monte Carlo sampling and diagnostic sensitivity layers.
Data object
23 threats across civilization, biosphere and technology, with 8 parameters per threat and 184 non-meta source-map entries.
Headline rule
The large highlighted year is produced by the dynamic-cascade aggregation rule, using the P90 edge of the cascade crossing distribution.
The central modelling problem is comparability. The threats in the clock differ in mechanism, evidence base, time scale and causal pathway. Climate degradation, nuclear escalation, antimicrobial resistance, AI-enabled destabilization and financial contagion cannot be measured by one direct empirical unit. A methodology that simply lists them would be descriptively rich but analytically weak. The clock solves this by placing each threat into a shared risk grammar while still preserving threat-specific process types.
The model asks the same structured questions of every threat. What is the possible scale of harm? How urgent is the present pressure? Is the driver accelerating? How strongly is the threat coupled to other threats? How irreversible would severe damage be? How likely is governance failure? These questions create a multi-dimensional representation of systemic pressure. The resulting score is not meant to erase the uniqueness of each threat. It creates a common coordinate system so that heterogeneous threats can be compared without pretending they are physically identical.
A second problem is interaction. In systemic risk, a threat is not important only because of its direct severity. It is also important because of what it can trigger, weaken or accelerate. A water crisis can affect food systems and geopolitical stability. Cyber disruption can damage supply chains and financial systems. Governance fragmentation can reduce the ability to respond to nearly every other threat. The clock therefore treats dependency links as part of the calculation rather than as background commentary.
A third problem is uncertainty. The model does not collapse uncertainty into a single apparently precise date. It samples input ranges, recomputes threat horizons and displays distributions. P10, P50 and P90 are therefore not decorative labels. They are quantiles of the model distribution generated under explicit assumptions.
Comparability
MCDA translates heterogeneous threats into a common analytical scale.
Interaction
Dependency amplification and dynamic cascade logic make network position computationally relevant.
Uncertainty
Monte Carlo sampling converts source ranges into visible distributions, CDFs and quantile horizons.
The current clock uses a sequential pipeline. The order is important because each stage uses the output of the preceding stage. Source-map values define threat parameters. Scenario conditioning modifies those parameters. MCDA scoring turns dimensions into base pressure. Dependency amplification modifies that pressure according to network exposure. Domain weighting changes relative emphasis among civilizational, biospheric and technological layers. Growth and threshold logic then convert adjusted pressure into a crossing horizon. Monte Carlo simulation repeats the entire path under sampled parameter uncertainty.
The headline date is not read from the JSON file. It is not stored as a hand-entered year. It is produced after the model has sampled threat dimensions, growth rates and thresholds, computed process-specific threat horizons and applied the dynamic-cascade aggregation rule. The headline year is the P90 of that dynamic-cascade distribution. Other horizons are displayed because the clock intentionally exposes structural alternatives rather than hiding them.
This distinction is methodologically decisive. The source map supplies assumptions about present pressure, growth and threshold. The dashboard code supplies the transformation from those assumptions to horizon distributions. The result is therefore a model-derived critical horizon, grounded in source-based inputs and controlled by transparent computational assumptions.
The methodological meaning of the clock lies in this chain. A reader should be able to trace every displayed year back through the chain: quantile, aggregation rule, threat horizon, priority score, source-map parameter and scenario assumption.
The source map is intentionally flat. Every parameter is addressed by a single string key with the form threat.metric. This avoids ambiguity in import, export and audit. A flat key such as climate.scale or ai.threshold can be validated, displayed, exported and averaged without needing to infer nested structure. It also makes it easier to compare outputs from different AI source-map generators, because every file must provide the same 184 parameter entries.
Each entry contains mu, lo and hi. The central value mu is the deterministic reference value. The lo and hi values define a bounded uncertainty interval. Source, URL, access date, strength and note fields document why that interval exists. The model therefore separates three roles: the numerical role of the parameter, the evidentiary role of the source and the uncertainty role of the interval.
The schema distinguishes three numerical scales. The six MCDA dimensions are ordinal systemic-pressure scores. The growth-rate field is an annualized effective risk-growth proxy. The threshold field is a normalized destabilization threshold. These fields should not be interpreted as if they had the same physical unit. They are placed in the same JSON object because the model needs all of them, not because they mean the same thing.
Metric | Meaning | Role
scale | Potential systemic magnitude of harm if the threat intensifies. | Ordinal systemic-pressure dimension.
urgency | Near-term priority and present-decade pressure. | Ordinal systemic-pressure dimension.
acceleration | Rate at which drivers, exposure, capability or impacts are worsening. | Ordinal systemic-pressure dimension.
interdependence | Degree of coupling to other systemic threats. | Ordinal systemic-pressure dimension and network vulnerability input.
irreversibility | Persistence of damage and difficulty of recovery. | Ordinal systemic-pressure dimension.
gov_failure | Risk of institutional, legal, political or coordination failure. | Ordinal systemic-pressure dimension and regime/cascade vulnerability input.
growth_rate | Effective annual systemic risk-growth proxy. | Positive parameter sampled on log-normal support, capped at runtime.
threshold | Normalized destabilization threshold. | Threshold parameter used to compute threat-level crossing horizon.
Threat count
23
Metrics per threat
8
Non-meta entries
184
The current source map does not merely attach bibliography to numbers. It uses source metadata as part of the uncertainty discipline of the model. Strong entries are expected to have closer evidentiary support, while moderate, weak, anchored-judgment and expert-judgment entries disclose a wider inferential gap between the source and the numeric parameter.
Evidence-source policy. The Apocalypse Clock source-map layer is calibrated primarily from high-quality scientific, governmental, intergovernmental, and institutional sources. Market proxies are used only when explicitly marked as anchored judgment. Preferred sources include peer-reviewed scientific literature, major assessment reports, official statistical datasets, and authoritative institutional publications from bodies such as the IPCC, WHO, WMO, IPBES, FAO, UNEP, UNESCO, the World Bank, SIPRI, UNHCR, V-Dem, ENISA, OECD, IMF, IEA, and comparable evidence-producing organizations. Popular media, unsourced claims, speculative commentary, and non-verifiable estimates are not used as primary calibration sources. Where direct empirical measurements do not exist, parameter values are treated as evidence-anchored modelling judgments and must be documented through source notes, uncertainty intervals, and evidence-strength labels.
AI-assistance limitation. The model does not treat AI-generated values as evidence. AI systems may assist in structuring, comparing, or auditing the source map, but every numerical parameter must remain anchored to traceable scientific, governmental, intergovernmental, or institutional evidence.
Citation and source-link policy. The methodology separates three citation layers. Mathematical and modelling procedures are supported by methodological references in the main text and reference section. Threat-level numerical calibration is documented in the active source-map parameter registry, where each parameter records its central estimate, uncertainty interval, evidence strength, source note and source link. The complete source-link registry then lists all unique external URLs used by the active source map, together with access dates and the parameter keys for which each source is used. This separation keeps the paper readable while preserving auditability.
This distinction is necessary because systemic-risk modelling often combines different epistemic objects. Some inputs are connected to measured trends. Others are derived from scientific assessments. Others use institutional reports or calibrated judgment because no direct observable can measure the desired abstract dimension. The methodology should not hide this mixture. It should display it and let uncertainty propagate.
The dashboard validates the active source data before calculation. It checks finite values, required fields, threat IDs, metric names and the ordering rule lo <= mu <= hi. If an uploaded source map is used, the same key grammar is preserved. If an evidence overlay is uploaded, it is applied by precision-weighted pooling rather than by replacing the entire parameter system indiscriminately.
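The finiteness, key-grammar and ordering checks described above can be sketched in a few lines. The function below is a hypothetical illustration of those rules, not the dashboard's actual validation code:

```python
import math

def validate_entry(key: str, entry: dict) -> list:
    """Return a list of validation problems for one flat source-map entry."""
    problems = []
    for field in ("mu", "lo", "hi"):
        value = entry.get(field)
        if not isinstance(value, (int, float)) or not math.isfinite(value):
            problems.append(f"{key}: {field} is missing or non-finite")
    # The ordering rule lo <= mu <= hi only makes sense once all three exist.
    if not problems and not (entry["lo"] <= entry["mu"] <= entry["hi"]):
        problems.append(f"{key}: ordering rule lo <= mu <= hi violated")
    if "." not in key:
        problems.append(f"{key}: key must have the form threat.metric")
    return problems

# A well-formed entry passes; a transposed lo/hi pair is flagged.
ok = validate_entry("climate.scale", {"mu": 8.0, "lo": 7.2, "hi": 8.6})
bad = validate_entry("climate.scale", {"mu": 8.0, "lo": 8.6, "hi": 7.2})
```

Running a check of this shape before calculation is what allows an uploaded source map with a reversed interval to be rejected instead of silently distorting the horizon model.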
Evidence-strength counts
Strength | Entries
anchored_judgment | 15
expert_judgment | 15
moderate | 92
strong | 61
weak | 1
Source-type heuristic counts
Type | Entries
scientific or institutional assessment | 77
other dataset, industry or report | 69
peer-reviewed or journal-linked | 20
anchored judgment or model convention | 18
Frequent source domains
Domain | Entries
imf.org | 9
iea.blob.core.windows.net | 8
sipri.org | 8
ipbes.net | 7
enisa.europa.eu | 6
fao.org | 5
gov.uk | 5
oecd.org | 5
reliefweb.int | 5
thelancet.com | 5
carnegieendowment.org | 4
doi.org | 4
ipcc.ch | 4
securityconference.org | 4
demtech.oii.ox.ac.uk | 3
esa.int | 3
icrc.org | 3
lpi.worldbank.org | 3
nature.com | 3
pubmed.ncbi.nlm.nih.gov | 3
science.org | 3
unccd.int | 3
Current dataset revision. The clock now embeds data_v1_7_1metadata_revision.json. The v1.7 revision made limited score-transparency updates: 12 numerical score or interval changes, 16 evidence-strength relabels, model-anchor labeling for all 23 threshold entries and capped-hi metadata on 13 growth-rate entries. The current bundled primary map is the metadata-revision export dated 2026-05-06 UTC; that cleanup layer was metadata only: no non-meta mu, lo, hi values and no source URLs changed from v1.7.
The current clock evaluates 23 threats. The IDs shown below are not display names only. They are computational identifiers used by dependency links, scenario multipliers, parameter keys and data-validation rules. This matters because methodological integrity depends on the same threat being referenced consistently across the source map, the live dashboard and the scoring methodology page.
The three domains serve two functions. They organize interpretation, and they provide a normalized weighting layer. A user can emphasize biospheric, technological or civilizational domains, but the clock normalizes the selected domain weights so that changing emphasis does not simply inflate the whole system. Domain weights shift relative priorities and threat horizons, not the meaning of the underlying source-map parameters.
The process class determines how a threat moves toward its threshold. Continuous threats use an exponential pressure growth model. Event threats use a non-homogeneous Poisson first-arrival model. Regime threats use a logistic transition and geometric waiting-time model. This is why the current clock is more advanced than a uniform stored-date system: it recognizes that ecological degradation, nuclear escalation and institutional lock-in are not the same temporal process.
# | ID | Threat | Domain | Process | Evidence | Dependencies
1 | climate | Climate Breakdown | biosphere | continuous | strong | water, soils, geopolitics, oceans
2 | biodiversity | Biodiversity Loss | biosphere | continuous | strong | climate, soils, oceans
3 | soils | Soil & Food System | biosphere | continuous | strong | climate, water
4 | water | Freshwater Stress | biosphere | continuous | strong | climate, soils, geopolitics
5 | oceans | Ocean Degradation | biosphere | continuous | strong | climate, biodiversity
6 | pollution | Toxic Pollution & PFAS | biosphere | continuous | moderate | oceans, biodiversity
7 | pandemics | Pandemic & Biosecurity | civilization | event | strong | amr, geopolitics, supply
8 | amr | Antimicrobial Resistance | civilization | continuous | strong | pandemics, geopolitics
9 | bioengineered | Engineered Biological Event | civilization | event | moderate | ai, geopolitics, pandemics
10 | nuclear | Nuclear Conflict | civilization | event | moderate | geopolitics, ai, autonomousw
11 | supply | Energy & Supply Chains | civilization | event | strong | geopolitics, climate, cyber, minerals
12 | geopolitics | Geopolitical Escalation | civilization | event | strong | nuclear, supply, ai, pandemics
13 | fragmentation_gov | Global Governance Fragmentation | civilization | regime | moderate | geopolitics, authoritarian, epistemic
14 | economic | Economic Fracture | civilization | regime | moderate | geopolitics, supply, cyber
15 | debt | Debt / Financial Contagion | civilization | continuous | moderate | economic, geopolitics, supply
16 | displacement | Mass Displacement | civilization | continuous | moderate | climate, geopolitics, soils, water
17 | authoritarian | Authoritarian Drift | civilization | continuous | moderate | geopolitics, epistemic, economic
18 | epistemic | Epistemic Breakdown | civilization | continuous | moderate | ai, authoritarian
19 | ai | Advanced AI Destabilizer | technology | regime | moderate | cyber, geopolitics, bioengineered, supply
20 | cyber | Systemic Cyberattacks | technology | event | strong | ai, geopolitics, supply
21 | autonomousw | Autonomous Weapons Escalation | technology | regime | weak | ai, geopolitics, nuclear
22 | minerals | Critical Minerals Bottleneck | technology | continuous | moderate | geopolitics, supply
23 | space | Space Infrastructure Disruption | technology | event | weak | geopolitics, cyber, ai
The base score is the first quantitative compression of the threat state. It uses six dimensions, each weighted explicitly. The six dimensions are not redundant. Scale captures possible magnitude. Urgency captures time pressure. Acceleration captures worsening. Interdependence captures coupling. Irreversibility captures persistence. Governance failure captures the likelihood that institutional response is inadequate. Together they approximate present systemic pressure before network amplification.
The model uses MCDA because there is no natural single physical unit for all global risks. MCDA is appropriate when distinct criteria must be compared transparently rather than merged through an opaque judgment. The weights are visible and adjustable, which means a reader can see exactly how a different normative emphasis changes the result.
The base score remains intentionally separate from growth and threshold. A severe threat may have slow growth, a fast-growing threat may have moderate current pressure and a low-threshold threat may cross earlier even with a lower score. The model keeps these concepts separate so that the final horizon is produced by dynamics, not only by rank.
Dimension | Default weight
scale | 0.2
urgency | 0.18
acceleration | 0.14
interdependence | 0.18
irreversibility | 0.16
gov_failure | 0.14
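With these weights, the base score is a plain weighted sum over the six dimensions. The sketch below uses illustrative dimension values for a hypothetical threat; only the weights come from the table above:

```python
# Default MCDA weights from the methodology table (they sum to 1.0).
WEIGHTS = {
    "scale": 0.2, "urgency": 0.18, "acceleration": 0.14,
    "interdependence": 0.18, "irreversibility": 0.16, "gov_failure": 0.14,
}

def base_score(dimensions: dict) -> float:
    """Weighted sum of the six ordinal pressure dimensions (0-10 scale)."""
    return sum(WEIGHTS[d] * dimensions[d] for d in WEIGHTS)

# Illustrative dimension scores for a hypothetical threat, not source-map values.
example = {"scale": 8.0, "urgency": 7.0, "acceleration": 7.5,
           "interdependence": 8.0, "irreversibility": 9.0, "gov_failure": 6.5}
score = base_score(example)  # about 7.7 on the same 0-10 scale
```

Because the weights sum to one, the base score stays on the same 0 to 10 scale as the input dimensions, which is what makes it comparable across threats before amplification.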
A scenario is a structured transformation applied before scoring. It changes dimension shifts, domain multipliers, growth multipliers, threat-specific stress multipliers and uncertainty width. This means a scenario is not a visual theme. It is a mathematical change to the state of the model. A high-cooperation scenario reduces selected stressors and narrows uncertainty. A polycrisis scenario increases pressure and uncertainty. A technology acceleration scenario concentrates pressure in the technological domain. A fragmentation scenario increases civilizational stress and governance-linked pathways.
Scenario conditioning is necessary because systemic risk is conditional. The same source-map baseline can evolve differently under different geopolitical, technological and governance environments. The model therefore lets the user ask counterfactual questions while preserving the same underlying threat architecture.
The scenario is applied before Monte Carlo sampling. This ordering matters. The model first shifts the central threat ranges and growth conditions, then samples uncertainty around the scenario-adjusted state. The output is therefore a distribution conditional on the selected scenario, not an unconditional universal probability.
Scenario | Mathematical meaning
Baseline | No dimension shift, neutral domain multipliers, neutral growth multipliers and standard uncertainty width.
High Cooperation | Reduces pressure and growth in key domains, especially conflict, governance fragmentation and nuclear risk.
Polycrisis Cascade | Raises dimensions, growth multipliers and uncertainty width across interacting civilizational and biospheric risks.
Tech Acceleration | Concentrates acceleration and growth pressure in AI, cyber, autonomous weapons, space and epistemic disruption.
Fragmentation | Raises civilizational stress, conflict coupling, governance failure and displacement pathways.
Domain weighting allows the user to shift emphasis among civilizational, biospheric and technological domains. The method does not simply add raw weight on top of all threats. It first normalizes the three selected domain weights so that the total emphasis remains controlled. The multiplier for a domain is its normalized share multiplied by the number of domains.
This normalization prevents a common modelling error. Without normalization, raising every domain slider would increase the whole system even if the relative priorities stayed the same. The current approach treats the domain sliders as relative emphasis controls. A higher biosphere setting means biospheric threats receive greater relative weight compared with other domains, not that the total model scale is arbitrarily inflated.
Domain weighting affects the priority used for horizon calculations. The dynamic-cascade mass denominator, however, is based on base scores rather than domain-weighted priority. This design avoids a paradox where increasing a domain weight inflates inactive denominator mass and can delay the cascade. In the cascade trigger, domain weights alter when threats cross, not how much inactive mass they contribute to the cascade load.
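The normalization rule stated above (a domain's multiplier is its normalized share times the number of domains) can be sketched as follows; the function is an illustration of the rule, not the dashboard's code:

```python
def domain_multipliers(raw: dict) -> dict:
    """Normalize domain sliders so total emphasis stays controlled:
    multiplier = (weight / sum of weights) * number_of_domains."""
    total = sum(raw.values())
    n = len(raw)
    return {domain: (weight / total) * n for domain, weight in raw.items()}

# Equal sliders give neutral multipliers of 1.0 regardless of absolute level.
equal = domain_multipliers({"civilization": 2.0, "biosphere": 2.0, "technology": 2.0})
# Raising one slider shifts relative emphasis without inflating the total:
# the multipliers always sum to the number of domains.
tilted = domain_multipliers({"civilization": 1.0, "biosphere": 2.0, "technology": 1.0})
```

Note that doubling every slider in `equal` would leave all multipliers at exactly 1.0, which is the anti-inflation property the methodology describes.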
Dependency amplification converts a threat from an isolated score into a network-aware priority. A threat receives additional weight when its direct dependency neighborhood is itself salient. The code computes neighbor quality from scale, urgency and interdependence, then combines that with a topology factor derived from the number of direct dependencies relative to the network mean.
The rationale is systemic. A threat that is connected to important threats can have consequences beyond its own base score. For example, geopolitical escalation can affect nuclear risk, supply chains, pandemics and AI governance. Climate pressure can affect water, soil, oceans and geopolitics. A purely isolated score would miss this amplification pathway.
The dependency amplification is bounded. It is not allowed to grow without limit. This protects the model from making a highly connected threat infinitely important and keeps the effect interpretable as a network multiplier rather than as a replacement for the MCDA score.
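A bounded amplifier of this kind can be sketched as below. The 0.05 coefficient, the cap of 1.5 and the three-threat network are illustrative assumptions for exposition, not the dashboard's calibration:

```python
def dependency_amplifier(threat_id: str, threats: dict, cap: float = 1.5) -> float:
    """Bounded network multiplier: neighbor quality times a topology factor,
    capped so a highly connected threat cannot become arbitrarily important.
    Coefficient and cap are illustrative, not the dashboard's actual values."""
    deps = threats[threat_id]["deps"]
    if not deps:
        return 1.0
    # Neighbor quality: mean of scale, urgency and interdependence over direct deps.
    quality = sum(
        (threats[d]["scale"] + threats[d]["urgency"] + threats[d]["interdependence"]) / 3.0
        for d in deps
    ) / len(deps)
    # Topology factor: this threat's degree relative to the network-mean degree.
    mean_degree = sum(len(t["deps"]) for t in threats.values()) / len(threats)
    topology = len(deps) / mean_degree if mean_degree else 1.0
    return min(1.0 + 0.05 * quality * topology, cap)

# A tiny illustrative three-threat network.
net = {
    "geopolitics": {"deps": ["nuclear", "supply"],
                    "scale": 8.0, "urgency": 8.5, "interdependence": 9.0},
    "nuclear": {"deps": ["geopolitics"],
                "scale": 9.5, "urgency": 6.0, "interdependence": 7.0},
    "supply": {"deps": ["geopolitics"],
               "scale": 7.0, "urgency": 7.5, "interdependence": 8.0},
}
amp = dependency_amplifier("geopolitics", net)
```

In this toy network the well-connected geopolitics node hits the cap, which is exactly the bounded-multiplier behavior described above: the network effect is visible but cannot replace the MCDA score.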
A threshold is the pressure level at which the model treats a threat as crossing into a destabilizing state. It is not the same thing as a severity score and it is not a probability. It is the denominator against which current adjusted priority is compared through the horizon model. The distance between priority and threshold is one of the main determinants of crossing time.
The source-map threshold entries are documented on a normalized 0 to 10 destabilization scale. The current dashboard then applies an operational anchored policy. Missing or invalid thresholds fall back to the common value 8.5. Valid per-threat thresholds are constrained between 7.8 and 9.2. Because a clamp of this kind should never remain hidden, the methodology states it openly as an operational constraint of the current dashboard.
This policy has a practical purpose. It allows per-threat thresholds to express meaningful differences without allowing threshold values to dominate the model silently. If thresholds were unconstrained, one arbitrary low value could pull a threat unrealistically close to crossing. If all thresholds were forced to one global value, genuine threat-level differences would disappear. The anchored policy is a compromise between calibration and comparability.
The current operational clamp is not a hidden empirical claim. It is a dashboard constraint designed to stabilize cross-threat comparison. The source map still records threshold values as normalized destabilization thresholds with their own source metadata.
The growth-rate field is one of the most important safeguards in the current clock. Many observable indicators grow faster than systemic collapse risk should be allowed to grow inside a horizon model. AI compute, cyber incident counts, orbital objects or market forecasts can increase rapidly, but treating raw indicator growth as direct collapse-risk growth would produce distorted horizons. The current model therefore distinguishes raw indicator growth from effective systemic risk growth.
The embedded metadata-revision dataset stores calibrated effective risk-growth values in growth_rate.mu, growth_rate.lo and growth_rate.hi. For the bundled file these values must not be reconverted: entry metadata uses effective_growth_calibrated and do_not_reconvert_mu_lo_hi safeguards, while structured growth_kind fields and _meta.growth_rate_calibration_classes document the original proxy class and threat-specific cap. Historical v1.5 labels inside notes are archival cleanup records, not active conversion rules.
This is methodologically important because it lets the model use strong empirical trends without confusing trend growth with system-failure growth. The growth_rate is not a direct measurement of collapse probability. It is the annualized pressure-growth term that moves a threat from present adjusted priority toward its destabilization threshold.
Runtime cap policy. For the bundled calibrated source map, growth_rate.mu, growth_rate.lo and growth_rate.hi already contain effective systemic risk-growth proxies and must not be reconverted. The 0.08 ceiling is only a global hard safety ceiling. Raw-mode conversion must respect the entry-level threat_specific_cap or the cap declared in _meta.growth_rate_calibration_classes, using the stricter value when both are present.
Growth kind class | Default conversion in code
effective_risk_growth | 1
direct_risk_proxy | 1
event_frequency_growth | 0.7
expert_mapping | 0.6
indicator_growth | 0.35
market_growth | 0.25
capability_growth | 0.15
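A sketch of the conversion and cap policy, using the class factors from the table above; the function name and signature are illustrative, not the dashboard's API:

```python
# Default proxy-to-effective-risk conversion factors from the table above.
CONVERSION = {
    "effective_risk_growth": 1.0, "direct_risk_proxy": 1.0,
    "event_frequency_growth": 0.7, "expert_mapping": 0.6,
    "indicator_growth": 0.35, "market_growth": 0.25, "capability_growth": 0.15,
}
GLOBAL_CAP = 0.08  # global hard safety ceiling on effective annual risk growth

def effective_growth(raw_growth, growth_kind, threat_cap=None, calibrated=False):
    """Convert a raw indicator growth rate to an effective risk-growth rate.
    Calibrated entries must not be reconverted; raw entries are scaled by
    class and capped by the stricter of the threat cap and the global cap."""
    if calibrated:
        return min(raw_growth, GLOBAL_CAP)  # already effective; safety ceiling only
    g = raw_growth * CONVERSION[growth_kind]
    cap = min(GLOBAL_CAP, threat_cap) if threat_cap is not None else GLOBAL_CAP
    return min(g, cap)

# A fast-growing capability indicator (40 percent per year) maps to a modest
# 6 percent effective rate, then hits the stricter 5 percent threat cap.
g = effective_growth(0.40, "capability_growth", threat_cap=0.05)
```

This is the safeguard in miniature: a rapidly growing raw indicator cannot smuggle its full trend growth into the horizon model.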
The current clock does not force every threat into one horizon equation. It recognizes three temporal mechanisms. Continuous processes accumulate gradually. Event processes occur as first-arrival events with a time-varying hazard. Regime processes transition from one systemic state to another. This distinction is essential because a drought-driven food-system stressor, a nuclear escalation event and a governance-lock-in process do not have the same mathematical behavior.
For continuous threats, the model uses exponential growth of adjusted pressure toward a threshold. For event threats, it uses a non-homogeneous Poisson first-arrival mechanism where annual hazard grows through time. For regime threats, it uses a logistic transformation of pressure relative to threshold into annual transition probability and then a geometric waiting-time logic.
This process-specific design is one of the reasons the present clock is more methodologically advanced than the previous methodology. A stored-horizon model can display a date, but it cannot explain why an event-type threat should behave differently from a continuous biospheric pressure or a regime-transition process. The current clock can.
Continuous
Exponential pressure growth toward a threshold.
Event
Non-homogeneous Poisson first-arrival hazard.
Regime
Logistic transition probability and geometric waiting time.
Continuous threats are those whose destabilizing pressure accumulates gradually. Climate breakdown, biodiversity loss, soil degradation, freshwater stress and antimicrobial resistance are examples of this logic. The model computes a priority score and then asks when that score, growing at the effective risk-growth rate, reaches the destabilization threshold.
The formula is simple but powerful. If priority already exceeds the threshold, the crossing year is the present year. If growth is non-positive, no crossing is returned within the model window. Otherwise the first crossing time is solved analytically from the exponential growth equation.
This model is appropriate when the relevant question is cumulative pressure rather than sudden first arrival. It does not claim that every impact accelerates smoothly in the real world. It supplies a transparent mathematical approximation for risks whose systemic pressure is better represented as gradual accumulation than as a discrete event.
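The analytic solve can be sketched as follows, assuming pressure compounds as priority times (1 + growth) to the power t; the window constants follow the methodology's NOW = 2026 and 2100 display horizon, and the compounding form is an illustrative choice:

```python
import math

NOW, YEAR_END = 2026, 2100  # runtime window from the methodology

def continuous_crossing(priority, threshold, growth):
    """First year adjusted pressure crosses the threshold, assuming
    exponential growth P(t) = priority * (1 + growth)**t. Returns None
    when no crossing occurs inside the simulation window."""
    if priority >= threshold:
        return NOW          # already past threshold: crossing is the present year
    if growth <= 0:
        return None         # non-positive growth: no crossing within the window
    # Solve priority * (1 + growth)**t = threshold for t analytically.
    t = math.log(threshold / priority) / math.log(1.0 + growth)
    year = NOW + t
    return year if year <= YEAR_END else None  # displayed as ">2100"

year = continuous_crossing(priority=6.8, threshold=8.5, growth=0.03)
```

With 3 percent effective growth and a priority of 6.8 against a threshold of 8.5, the crossing lands in the early 2030s; the three branches mirror the three cases described above.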
Event threats are represented as first-arrival processes. Nuclear conflict, pandemic emergence, engineered biological events, cyber systemic shocks, supply-chain shocks and space-infrastructure disruption are not continuous pressure curves in the same sense as biospheric degradation. They are better represented as events whose annual arrival hazard can change through time.
The dashboard uses a non-homogeneous Poisson-style hazard. The initial hazard is proportional to the ratio between current priority and threshold, multiplied by effective growth. The annual hazard then grows at the effective growth rate. In stochastic Monte Carlo runs, the model draws a random uniform value and returns the first year where cumulative arrival probability crosses that draw. In deterministic mode, it returns the median first-arrival year by solving for cumulative hazard approximately equal to ln 2.
The event model matters because it prevents abrupt threats from being incorrectly treated as slow continuous degradation. It also prevents event threats from being assigned a fixed date independent of current pressure, growth and threshold.
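A simplified year-stepped version of this first-arrival logic can be sketched as below. The hazard initialization and the annual discretization are illustrative simplifications of the continuous-time mechanism, not the dashboard's exact code:

```python
import math

def event_first_arrival(priority, threshold, growth, u=None, now=2026, horizon=2100):
    """Non-homogeneous Poisson first arrival with annual hazard
    h(t) = h0 * (1 + growth)**t, where h0 is proportional to
    (priority / threshold) * growth. Stochastic mode uses a uniform draw u;
    deterministic mode (u=None) targets cumulative hazard ln 2, the median."""
    h0 = max((priority / threshold) * growth, 1e-9)
    target = math.log(2.0) if u is None else -math.log(1.0 - u)
    cumulative = 0.0
    for year in range(now, horizon + 1):
        cumulative += h0 * (1.0 + growth) ** (year - now)
        if cumulative >= target:
            return year
    return None  # no arrival inside the simulation window

median_year = event_first_arrival(priority=6.0, threshold=8.5, growth=0.05)
```

The deterministic branch recovers the median arrival year, while passing different uniform draws for `u` reproduces the stochastic Monte Carlo behavior the text describes.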
Regime threats are processes where the key risk is transition into a new systemic state. Examples include advanced AI destabilization, global governance fragmentation, economic fracture and autonomous weapons escalation. The question is not only how pressure accumulates, but whether pressure relative to threshold produces a state transition.
The dashboard converts pressure minus threshold into an annual transition probability through a logistic function. If priority is far below threshold, the transition probability is small. As priority approaches and exceeds threshold, the transition probability rises. Stochastic runs sample the waiting time from a geometric process. Deterministic mode uses the median waiting time.
This design gives the clock a mechanism for institutional or technological lock-in processes. It allows a threat to remain stable for some period and then transition rapidly once pressure crosses a critical zone.
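A sketch of this logistic-plus-geometric mechanism follows. The steepness parameter is an illustrative assumption; the median waiting time of a geometric process with annual probability p is ceil(ln 0.5 / ln(1 - p)):

```python
import math

def regime_crossing(priority, threshold, steepness=1.5, u=None, now=2026, horizon=2100):
    """Regime transition: logistic annual transition probability
    p = 1 / (1 + exp(-steepness * (priority - threshold))), then a geometric
    waiting time. Deterministic mode (u=None) uses the median waiting time.
    The steepness value is illustrative, not the dashboard's calibration."""
    p = 1.0 / (1.0 + math.exp(-steepness * (priority - threshold)))
    p = min(max(p, 1e-9), 1.0 - 1e-9)  # keep the geometric logs well-defined
    if u is None:
        wait = math.ceil(math.log(0.5) / math.log(1.0 - p))   # median waiting time
    else:
        wait = math.ceil(math.log(1.0 - u) / math.log(1.0 - p))  # sampled waiting time
    year = now + wait
    return year if year <= horizon else None

year_below = regime_crossing(priority=7.0, threshold=8.5)  # well below threshold
year_above = regime_crossing(priority=9.0, threshold=8.5)  # above threshold
```

The contrast between the two calls shows the lock-in behavior: below threshold the transition probability is small and the waiting time long, while pressure past the threshold collapses the wait to roughly a year.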
The Monte Carlo engine is the mechanism that turns source-map ranges into output distributions. In each run, the model samples every threat dimension, growth rate and threshold from its uncertainty range. It then recomputes base scores, dependency amplification, domain weighting, effective growth, threat horizons and system-level aggregation years. The run is one stochastic future under the selected scenario. Thousands of runs create a distribution of crossing years.
Ordinal MCDA dimensions are sampled on bounded support using a fitted beta distribution when possible, with a normal fallback. Positive parameters such as growth_rate and threshold are sampled from log-normal ranges. This distinction is mathematically appropriate: ordinal dimensions are bounded scores, while growth rates and thresholds are positive-support quantities.
The dashboard default is 3,000 runs, although the run-count control can change the number. More runs improve smoothness and stability but take longer in the browser. The methodology therefore separates the main run from the optional scientific panel so that the headline remains usable while deeper diagnostics remain available on demand.
After each Monte Carlo run produces crossing years, the dashboard sorts those years and builds a cumulative distribution function. The CDF answers a model-internal question: what share of simulated runs have crossed by a given year? P10, P50 and P90 are quantiles of this distribution. P10 is the earlier 10 percent crossing edge, P50 is the median and P90 is the 90 percent crossing edge.
The clock uses these quantiles because a single date would hide uncertainty. In a systemic-risk model, uncertainty is not a minor inconvenience. It is part of the object being studied. A narrow interval means the current assumptions push the model toward a stable horizon. A wide interval means the result is sensitive to input ranges, structural choices or tail behavior.
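Quantile extraction from the sorted crossing years can be sketched as a standard linear-interpolation empirical quantile; the run years below are illustrative, not model output:

```python
def quantile(sorted_years, q):
    """Empirical quantile by linear interpolation over sorted crossing years."""
    if not sorted_years:
        return None
    pos = q * (len(sorted_years) - 1)
    lo, frac = int(pos), pos - int(pos)
    if frac == 0:
        return sorted_years[lo]
    # Interpolate between the two bracketing order statistics.
    return sorted_years[lo] * (1 - frac) + sorted_years[lo + 1] * frac

# Illustrative crossing years from a small set of runs (already sorted).
years = [2031, 2034, 2036, 2039, 2041, 2044, 2048, 2053, 2060, 2071, 2088]
p10, p50, p90 = (quantile(years, q) for q in (0.10, 0.50, 0.90))
```

P10, P50 and P90 are simply the 10th, 50th and 90th positions along this sorted distribution, which is why a wider input spread directly widens the displayed interval.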
The model also computes bootstrap summaries. It resamples the sorted crossing years 160 times, builds CDF bands and estimates parameter-level uncertainty from the interquartile range of bootstrap medians. Sampling sigma measures the spread of the actual Monte Carlo crossing-year distribution. Structural sigma measures disagreement among aggregation rules.
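The 160-resample scheme can be sketched as below. The exact quantile convention for the interquartile range is an assumption; the point is that bootstrap medians that barely move indicate a stable Monte Carlo summary.

```python
import random

def bootstrap_median_iqr(years, n_boot=160, rng=random):
    """Resample crossing years with replacement and summarize median
    stability via the IQR of bootstrap medians (illustrative sketch)."""
    n = len(years)
    medians = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(years) for _ in range(n))
        medians.append(sample[n // 2])
    medians.sort()
    q1 = medians[n_boot // 4]
    q3 = medians[(3 * n_boot) // 4]
    return q3 - q1  # spread of bootstrap medians
```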
Runtime window. The synchronized dashboard version uses NOW = 2026, YS = 2025, YE = 2100 and displays horizons beyond that simulation axis as >2100. A label such as No crossing ≤ 2100 is therefore a display-window statement, not a claim that a threat cannot cross after 2100.
Once every threat has a sampled horizon, the model still needs a rule for deciding when the system as a whole has crossed a critical state. The dashboard deliberately runs four structural aggregation algorithms in parallel. This is not duplication. It exposes model-structure uncertainty. Different philosophical views of systemic risk imply different aggregation rules.
The compensatory rule assumes that enough weighted threat mass can jointly constitute systemic crossing even if no single threat alone dominates. The max-rule assumes that the earliest single critical threat can define systemic crossing. The graph-weighted Gaussian rule treats the system as a dependency-linked latent joint process. The dynamic cascade rule simulates active threats pulling dependent threats forward through network propagation.
The headline uses the dynamic-cascade P90 because the public clock is intended to emphasize systemic interaction, not only additive weighted severity. The other rules are retained because they show how sensitive the result is to structural assumptions. A methodologically honest model should not hide this comparison.
Algorithm | Core question | Output
Compensatory aggregation | When does enough weighted threat priority cross its individual horizons? | Weighted-share crossing year.
Non-compensatory max-rule | When does the first critical threat cross? | Earliest threat-level crossing year.
Graph-weighted aggregation | When does the dependency-linked latent system score cross the aggregate threshold? | Latent-normal graph crossing year.
Dynamic cascade propagation | When do active threats trigger dependent threats and produce self-sustaining systemic activation? | Cascade crossing year and headline distribution.
The compensatory rule asks whether enough weighted threat mass has crossed. For every year, the model sums the priority of threats whose individual horizon is at or before that year and divides by the total priority of all threats. The crossing year is the first year where this share reaches the selected aggregate threshold slider.
This rule is useful because systemic pressure can be cumulative. Many moderate crossings can jointly stress civilization, biosphere and technology even if no single threat is the sole cause. It is called compensatory because high crossing mass in one part of the system can compensate for non-crossing mass elsewhere.
The limitation is equally clear. A compensatory rule can understate single-trigger catastrophic pathways. That is why the model also includes the max-rule and dynamic cascade alternatives.
The max-rule is the structural opposite of the compensatory rule. It asks when the first threat crosses its threshold. In the code, the max-rule crossing year is the minimum of all threat-level horizons: the name refers to the single maximally urgent threat governing the outcome, not to taking the maximum year. It is non-compensatory because one severe early crossing can define systemic concern without waiting for weighted mass to accumulate.
This rule is useful for hazards where one pathway can dominate. Nuclear conflict, engineered biological events or a severe cyber-systemic shock may matter even if most other threats have not crossed. The max-rule therefore provides a conservative single-trigger view of the system.
The limitation is that it can be too sensitive to one early threat. It may ignore broad background deterioration. The clock displays it as one structural lens, not as the sole interpretation.
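Both rules reduce to a few lines of code. The sketch below assumes the dashboard's display window of 2026 to 2100 as the year grid; the return convention for non-crossing runs is an assumption.

```python
def compensatory_crossing(horizons, priorities, share_threshold,
                          years=range(2026, 2101)):
    """First year when the priority-weighted share of crossed threats
    reaches the aggregate threshold (compensatory rule sketch)."""
    total = sum(priorities)
    for year in years:
        crossed = sum(p for h, p in zip(horizons, priorities) if h <= year)
        if crossed / total >= share_threshold:
            return year
    return None  # no crossing inside the display window

def max_rule_crossing(horizons):
    """Earliest single-threat crossing (non-compensatory max-rule):
    computationally the minimum of the threat-level horizons."""
    return min(horizons)
```

The contrast is visible directly in the code: the compensatory rule accumulates weighted mass year by year, while the max-rule fires on the first individual crossing.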
The graph-weighted aggregation rule treats threat states as latent signals coupled by the dependency matrix. For a given year, each threat receives a latent z-score depending on whether it has crossed and on its priority relative to threshold. The model then computes a dependency-linked joint score using graph-derived covariance structure and compares it with the aggregate threshold.
This rule is useful because it sits between simple addition and explicit cascade simulation. It does not simulate step-by-step propagation, but it allows dependence structure to affect system-level crossing. A threat connected to many other threats influences the joint score differently from an isolated threat with the same individual horizon.
The method is a structural approximation, not a claim that the threat network is exactly multivariate normal. Its purpose is to show whether a dependence-aware latent aggregation produces a materially different horizon from the compensatory and cascade rules.
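A minimal latent-aggregation sketch is shown below. The equal latent weights, the symmetrized-dependency correlation structure and the 0.5 scaling are assumptions introduced here; the dashboard's actual covariance construction is not reproduced.

```python
import math

def graph_weighted_score(z, dep, threshold_z):
    """Dependency-linked latent joint score (illustrative sketch).

    z[i]: latent z-score for threat i at the evaluated year.
    dep[i][j]: dependency strength from j to i (assumed in [0, 1])."""
    n = len(z)
    # correlation-like structure: 1 on the diagonal, symmetrized deps off it
    cov = [[1.0 if i == j else 0.5 * (dep[i][j] + dep[j][i])
            for j in range(n)] for i in range(n)]
    w = [1.0 / n] * n  # equal latent weights (assumption)
    mean = sum(wi * zi for wi, zi in zip(w, z))
    var = sum(w[i] * cov[i][j] * w[j]
              for i in range(n) for j in range(n))
    joint = mean / math.sqrt(var)
    return joint, joint >= threshold_z
```

Even in this stripped-down form, the mechanism described above is visible: positive dependency correlations increase the variance term, so a densely connected system needs less individual extremity to cross jointly.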
The dynamic-cascade rule is the most important structural algorithm for the public headline. It simulates year-by-year activation. A threat becomes active when its own horizon has arrived. Then the model runs up to three propagation waves in that year. During each wave, inactive threats can be pulled forward if enough upstream dependencies are active and if their proximity and vulnerability are high enough.
The target vulnerability term combines interdependence, acceleration and governance failure. Exposure measures the active priority share among the target threat's dependencies. Proximity measures how close the target is to its own horizon. The trigger score combines exposure, proximity and vulnerability, and a target activates when its trigger score exceeds a vulnerability-adjusted threshold.
The cascade does not fire simply because one dependency activates. After propagation, the model requires all three domains to be active, transmitted dependency share of at least 25 percent, active base-score mass at least equal to the cascade threshold and a cascade boost of at least 0.06. This makes the headline a systemic-cascade criterion rather than a single-threat alarm.
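A single propagation wave can be sketched as below. The multiplicative combination of exposure, proximity and vulnerability, the 20-year proximity decay and the specific threshold adjustment are assumptions for illustration; the dashboard's exact weighting is not reproduced here.

```python
def propagate_wave(active, priority, horizon, dep, vuln, year,
                   base_threshold=0.5):
    """One cascade propagation wave (illustrative sketch).

    active: set of indices of currently active threats.
    dep[i]: list of upstream dependency indices for threat i.
    vuln[i]: vulnerability in [0, 1] from interdependence, acceleration
             and governance failure."""
    newly = set()
    for i in range(len(priority)):
        if i in active or not dep[i]:
            continue
        upstream = dep[i]
        total = sum(priority[j] for j in upstream)
        exposure = sum(priority[j] for j in upstream if j in active) / total
        # proximity: 1 at the threat's own horizon, decaying 20 years out
        proximity = max(0.0, 1.0 - max(0, horizon[i] - year) / 20.0)
        trigger = exposure * proximity * (0.5 + 0.5 * vuln[i])
        if trigger >= base_threshold * (1.0 - 0.3 * vuln[i]):
            newly.add(i)  # pulled forward by the cascade
    return newly
```

In the full model this wave runs up to three times per simulated year, and the systemic criteria listed above (all three domains active, transmitted dependency share, base-score mass, cascade boost) are then checked before a cascade crossing is declared.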
The headline year is the P90 of the dynamic-cascade crossing distribution. P90 is used here as the upper critical-horizon edge of the cascade distribution, not as a manually chosen date. If Monte Carlo sampling changes slightly from run to run, the displayed year can shift because the stochastic distribution itself changes.
The large public year is therefore dynamic-cascade P90. The smaller displayed compensatory P10, P50 and P90 values describe a different aggregation rule and should not be confused with the headline rule.
The Live Calculation Console is not simply a loading animation. It is a procedural map of the model. Each check mark corresponds to a named computational step. This is valuable because it lets a reader distinguish baseline calculation from optional diagnostics and lets the developer detect where the model stalls or changes behavior.
The first 21 steps form the main baseline run. They include scenario conditioning, MCDA scoring, dependency amplification, domain weighting, process-specific horizon modelling, Monte Carlo simulation, the four aggregation algorithms, domain summaries, bootstrap uncertainty and four independent quantitative diagnostics. These are the calculations that produce the clock's result rather than merely display it.
The table below describes each step in terms of what it does and why it is included. This is the correct academic interpretation of the console: not as a UI list, but as a transparent computational audit trail.
Step | Calculation | What it computes | Why it is used
01 | Scenario conditioning | Applies scenario shifts, domain multipliers, growth multipliers, threat multipliers and the uncertainty-width multiplier before scoring. | It makes the output conditional on an explicit world-state assumption rather than treating all futures as the same baseline.
02 | Base MCDA scoring | Computes the weighted six-dimension base score for every threat. | It creates a transparent common scale for heterogeneous threats.
03 | Dependency amplification | Adjusts base scores using dependency-neighbor pressure and graph topology. | It lets systemic coupling influence priority.
04 | Normalized domain weighting | Normalizes civilization, biosphere and technology weights and applies relative domain emphasis. | It lets the user change emphasis without inflating the whole system arbitrarily.
05 | Process-specific threat horizon model | Converts each threat's priority, growth rate and threshold into a continuous, event or regime horizon. | It respects different temporal mechanisms across threats.
06 | Global Stress Index | Computes aggregate pressure relative to fixed threshold-derived system capacity. | It provides a compact present-state stress indicator.
07 | Lead-threat priority ranking | Ranks threats by deterministic priority under the active scenario. | It identifies current dominant drivers before stochastic sampling.
08 | Domain layer aggregation | Summarizes horizons and pressures by civilizational, biospheric and technological layers. | It shows which systemic domain is driving the model response.
09 | Beta and log-normal parameter sampling | Samples bounded ordinal dimensions and positive growth or threshold parameters. | It propagates uncertainty from source ranges.
10 | Monte Carlo crossing simulation | Runs thousands of stochastic futures and records threshold-crossing years. | It converts parameter ranges into crossing distributions.
11 | Compensatory aggregation | Finds the year when enough weighted threat mass has crossed. | It models cumulative systemic pressure.
12 | Non-compensatory max-rule aggregation | Finds the earliest single-threat crossing. | It models single-trigger systemic pathways.
13 | Graph-weighted aggregation | Computes a dependency-linked latent joint crossing score. | It tests whether graph structure changes the system horizon.
14 | Dynamic cascade propagation | Simulates active threats triggering dependent threats year by year. | It models non-linear cascade behavior and supplies the headline distribution.
15 | Domain crossing distributions | Computes separate crossing summaries for civilization, biosphere and technology. | It reveals whether the system result is domain-concentrated or broad.
16 | Structural ensemble spread | Measures disagreement among compensatory, max-rule, graph-weighted and cascade medians. | It exposes model-structure uncertainty.
17 | Bootstrap interval estimation | Resamples crossing distributions to estimate quantile stability and uncertainty bands. | It tests whether Monte Carlo summaries are stable under resampling.
18 | Weibull survival analysis | Computes accelerating-hazard diagnostics from threat horizons and process properties. | It tests whether threshold risk is accelerating.
19 | Network eigenvector centrality | Ranks recursive dependency hubs in the threat network. | It finds threats that matter because they connect to other important threats.
20 | Poisson-binomial convergence tail | Computes exact unequal-probability probabilities of many threats crossing concurrently. | It quantifies multi-threat convergence without assuming identical probabilities.
21 | Shannon entropy risk landscape | Computes entropy, concentration and the effective number of threats from priority shares. | It shows whether risk is concentrated or broadly distributed.
The clock contains a specific group of four independent quantitative diagnostic algorithms: Weibull Survival Analysis, Network Eigenvector Centrality, Poisson-binomial Convergence Tail and Shannon Entropy of the Risk Landscape. These are not the same as the four structural aggregation algorithms. The aggregation algorithms produce alternative system-level horizons. The four diagnostic algorithms interrogate the shape, network structure, concurrence and concentration of the resulting risk landscape.
Their purpose is methodological triangulation. A single horizon distribution can tell the reader when the model crosses under a rule. It cannot alone tell whether hazard is accelerating, which threats are dependency hubs, whether many thresholds can converge in the same decade or whether risk is concentrated in a few dominant threats. The four independent diagnostics answer those additional questions.
Because they are mathematically distinct, agreement among them strengthens interpretability. Disagreement is also informative. A diffuse entropy result with one extreme eigenvector hub, for example, tells a different story from a concentrated entropy result with low concurrent crossing probability.
01 Weibull Survival Analysis
Builds a survival curve from each threat horizon. The shape parameter beta increases with process type, acceleration, interdependence, urgency, governance failure and evidence strength. Beta above 1 means accelerating hazard.
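The acceleration interpretation follows directly from the Weibull hazard function h(t) = (beta/eta) * (t/eta)^(beta-1): for beta above 1 the hazard grows with time, and beta equal to 1 reduces to a constant exponential hazard. A minimal sketch:

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1).
    beta > 1: hazard increases with time (accelerating risk);
    beta = 1: constant exponential hazard; beta < 1: decelerating."""
    return (beta / eta) * (t / eta) ** (beta - 1)
```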
02 Network Eigenvector Centrality
Builds a symmetrized dependency adjacency matrix and uses power iteration to compute recursive influence. A threat is central when it is connected to other central threats.
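The power iteration described above can be sketched as follows. The diagonal shift is an assumption added here so the iteration converges even on bipartite-like dependency structures; it does not change the ranking.

```python
def eigenvector_centrality(adj, iters=100, tol=1e-9):
    """Power iteration on a symmetrized adjacency matrix (sketch).
    Returns centrality scores normalized to sum to 1."""
    n = len(adj)
    # symmetrize and add a diagonal shift to guarantee convergence
    a = [[0.5 * (adj[i][j] + adj[j][i]) + (1.0 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        if s == 0:
            return x
        y = [v / s for v in y]
        if max(abs(yi - xi) for yi, xi in zip(y, x)) < tol:
            return y
        x = y
    return x
```

On a star-shaped dependency network, the hub receives the highest score, which is exactly the "connected to other central threats" property the diagnostic is after.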
03 Poisson-binomial Convergence Tail
Uses unequal threat crossing probabilities to compute the exact probability that at least k threats are active by a given year. This is more appropriate than a simple binomial model because threats do not share one identical probability.
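The exact tail probability follows from the standard dynamic-programming recursion over the per-threat probabilities; a minimal sketch:

```python
def poisson_binomial_tail(ps, k):
    """Exact P(at least k of n independent events occur) for unequal
    probabilities ps, via the standard DP recursion."""
    dist = [1.0]  # dist[j] = P(exactly j events among those processed)
    for p in ps:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1 - p)      # event does not occur
            nxt[j + 1] += q * p        # event occurs
        dist = nxt
    return sum(dist[k:])
```

For example, two threats with crossing probabilities 0.5 each give P(at least 1) = 0.75, whereas a binomial model with a single shared probability could not represent genuinely unequal threats.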
04 Shannon Entropy Risk Landscape
Computes how broadly priority is distributed across threats. High entropy means diffuse risk. High concentration means a smaller set of threats dominates.
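The entropy and the effective number of threats exp(H) can be sketched as below; a perfectly uniform priority landscape over n threats yields an effective number of exactly n.

```python
import math

def risk_landscape(priorities):
    """Shannon entropy of the priority shares and the effective number
    of threats exp(H). Uniform shares maximize the effective number;
    concentration on a few threats drives it down."""
    total = sum(priorities)
    shares = [p / total for p in priorities if p > 0]
    h = -sum(s * math.log(s) for s in shares)
    return h, math.exp(h)
```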
The Scientific Panel is intentionally separated from the first run. These calculations are more browser-heavy and are interpretive diagnostics rather than headline generators. They run on demand so that the core clock remains responsive. This separation also has methodological value: the panel explains and stress-tests the baseline result but does not overwrite the headline date.
The optional diagnostics investigate sensitivity, robustness, non-compensatory failure modes and tail dependence. They answer questions that the headline alone cannot answer. Which threats most move the output? Are rankings stable if MCDA weights vary? Does a single domain create a veto condition? Does correlation in extreme outcomes materially change the tail? These are scientific audit questions, not cosmetic UI additions.
A technically literate reader should therefore interpret steps 22 to 27 as a second methodological layer. The core model estimates horizons. The panel tests how those horizons behave under perturbation and alternative assumptions.
Step | Calculation | What it computes | Why it is used
22 | Fast OAT sensitivity | Perturbs one threat at a time and measures the effect on the response target. | It is a fast screening method for identifying influential threats.
23 | Sobol low-discrepancy Jansen S1/ST | Uses Sobol A/B/A_Bi samples and Jansen estimators for first-order and total-order sensitivity. | It shows direct effects and interaction effects in the response surface.
24 | SMAA weight robustness | Samples alternative MCDA weights and measures top-rank acceptability. | It tests whether rankings are robust to plausible changes in weighting philosophy.
25 | Non-compensatory veto diagnostic | Applies veto-style stress logic in which severe domain states can dominate even when averages look moderate. | It tests whether compensatory assumptions hide critical domain failures.
26 | Tail-dependence stress test | Applies correlated systemic tail shocks and compares tail-cascade behavior. | It tests whether the model is fragile under correlated extremes.
27 | Scientific audit summary | Builds a compact summary of optional diagnostics, warnings and robustness signals. | It turns diagnostics into an interpretable audit layer without overwriting the headline date.
The strongest way to present the clock is not to begin with defensive disclaimers. It is to state what the model does. The model converts documented input parameters and explicit computational assumptions into critical-horizon distributions. The result is meaningful because the assumptions are visible, the transformations are explicit and the output can be compared across threats, scenarios and aggregation rules.
The correct interpretation of P10, P50 and P90 is internal to the model distribution. They describe how sampled model runs behave under the selected source map, scenario, weights, thresholds, growth rates and dependency structure. They should be read as model-derived critical horizons. This is not a weakness. It is the normal epistemology of transparent modelling when direct empirical frequency data for comparable civilizational-scale cascades do not exist.
Valid scientific sources strengthen the input layer. They do not remove the need for a model layer. The model layer is precisely what transforms heterogeneous evidence into a comparable systemic horizon. A scientifically literate reader should therefore ask whether the input sources are credible, whether the parameter ranges are reasonable, whether the equations are transparent, whether uncertainty is exposed and whether diagnostics test robustness. Those are the standards by which this clock should be judged.
Recommended framing: the Apocalypse Clock is a transparent quantitative systemic-risk horizon model built from source-based parameters, explicit assumptions, dependency structure, process-specific dynamics and uncertainty sampling.
Apocalypse Clock methodology v5.6 technical academic edition. Generated from the current ApocalypseClockPC.html and Scoring methodology.html logic.