A defensible, tier-ranked view of where Nigeria's hospitals are most exposed to outbreak surges and least prepared to respond — built to inform AIMS deployment sequencing.
Version v1.3 — Q2 2026 · Released 26 April 2026
Prepared for: Federal Ministry of Health · U.S. Department of State
Prepared by: Imakandi Health · JHU Lumenix
Executive Summary
61 hospitals, ranked by unmet need.
The Nigeria Hospital Readiness Index ranks every hospital in the AIMS pipeline by composite outbreak risk — outbreak exposure × weakness of preparedness × scale of operation. The result: a 4-tier deployment-sequencing framework that puts AIMS where it adds the most marginal value.
Critical
9
High exposure, weak readiness — first-priority deployment.
High
15
Significant gap between exposure and readiness.
Standard
21
Moderate gap; standard rollout sequencing.
Lower Priority
16
Strong readiness or low exposure; lower marginal need.
What NHRI is. A composite ranking that combines NCDC outbreak data with hospital-specific readiness signals (site visits where available, proxy features otherwise) and surge-capacity indicators. Every score traces back to documented inputs — see Methodology.
Top finding. Critical-tier risk concentrates in two clusters: greater Lagos (6 hospitals) and Kano (3 hospitals). These 9 facilities sit at the intersection of high outbreak burden and middling preparedness. They should anchor the first deployment wave.
What we recommend. Tiered rollout with per-tier response protocols. Critical hospitals receive accelerated deployment plus dedicated JHU clinical liaison. Roadmap also flags Phase-2 priorities: live NCDC integration, counterfactual modeling, and AIMS-as-surveillance-infrastructure.
FINDING 01
Risk concentrates geographically.
All 9 Critical-tier hospitals sit in Lagos or Kano — two states where multi-disease outbreak burden meets variable readiness.
FINDING 02
Federal teaching hospitals are well-prepared.
UBTH, UNTH, ESUTH and similar federal facilities rank Standard or Lower despite being in hot regions — proxy readiness offsets exposure.
FINDING 03
OAK Hospital validates the framework.
The only directly-measured hospital with weak IPC ranked #4 Critical — confirming the methodology surfaces real gaps, not just statistical artifacts.
FINDING 04
56 of 61 hospitals are scored on proxy data.
Phase 2 priority: convert proxy-readiness scores to measured by completing the remote-assessment campaign already in motion.
How to read NHRI. Critical-tier ≠ "outbreak hotspot." It means "where AIMS adds the most marginal value." Hospitals in hot regions with strong existing readiness (e.g., UBTH) may rank Standard — that's the methodology working as intended.
A composite ranking from three transparent components: Exposure, Readiness, and Surge Capacity. Every score traces back to documented inputs. Critical-tier means "highest unmet need" — the intersection of high outbreak burden and weak preparedness — not absolute outbreak burden.
2.1 Data sources
Six layers feed the index. Where direct measurement is unavailable, proxy features substitute — and every hospital row carries a measured or estimated flag.
🦠
NCDC disease-specific sitreps
Measles, mpox, yellow fever — state tables parsed directly; 37 reporting areas (36 states + FCT) each.
Parsed
📋
NCDC weekly epi reports + research
Lassa, cholera, diphtheria, meningitis — state-level burden curated from sitrep narrative + WER Vol 14 No.52.
Curated
🏥
AIMS hospital roster
61 hospitals across the live AIMS pipeline. Beds, accreditation, CMD, location.
Live
📍
Hospital geocoordinates
OpenStreetMap Nominatim. 58 of 59 geocoded; 1 flagged for manual verification.
Parsed
🔍
Site visit measurements
5 hospitals — LASUTH, OAK, Lakeshore, Atlantis, Lagos Island Maternity. Direct IPC + readiness signals.
Measured (5)
📊
Proxy readiness features
For the 56 hospitals not yet visited: hospital type, beds, accreditation, state economic tier. 15% confidence dampener applied.
Estimated (56)
5 measured / 56 estimated. Phase 2 priority is converting the 56 estimated readiness scores to measured via the remote-assessment campaign already in motion. Until then, every estimated score carries a confidence flag in the data.
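The measured/estimated split can be encoded directly in the scoring pass. A minimal sketch, assuming the 15% dampener shrinks a proxy readiness score toward zero; the source states only that a dampener is applied, so the direction and the helper name `dampened_readiness` are assumptions:

```python
def dampened_readiness(score: float, measured: bool) -> tuple[float, str]:
    """Return (readiness, flag) for one hospital on the 0-10 scale.

    Measured scores (site-visit data) pass through untouched; proxy
    scores take the 15% confidence dampener and carry an 'estimated'
    flag so every downstream row stays auditable.
    """
    if measured:
        return score, "measured"
    return score * 0.85, "estimated"  # 15% dampener (direction assumed)
```

Under this reading, a measured 8.0 stays 8.0 while a proxy 8.0 becomes 6.8 and is tagged `estimated` — shrinking proxy readiness raises composite risk, so uncertainty is treated conservatively.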
2.2 Three-component scoring
Each hospital receives three scores on a 0–10 scale. The composite multiplies them so that all three must be elevated for a hospital to land in the Critical tier.
Component 01
Exposure
How much outbreak burden surrounds the hospital geographically.
0.6 × own-state burden 0.3 × neighboring states 0.1 × national baseline
Component 02
Readiness
How well-prepared the hospital is to handle an outbreak surge.
Raw scale — beds, accreditation, regional health-system maturity.
0.5 × bed band 0.3 × accreditation flags 0.2 × state tier
Composite Risk
Risk = Exposure × (10 − Readiness) × √Capacity
Inverting Readiness means well-prepared hospitals score lower risk — i.e., AIMS adds less marginal value there. The √ on Capacity tempers the bed-count weighting so a 700-bed hospital matters more than a 50-bed one, but not 14× more.
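Taken together, the component weights and the composite formula can be sketched as three pure functions. This is an illustrative reconstruction, not the production pipeline: the weights are exactly those stated above, but the function names are hypothetical and all inputs are assumed pre-scaled to 0–10 (Capacity to a positive bed-derived scale).

```python
import math

def exposure(own_state: float, neighbor_avg: float, national: float) -> float:
    """Exposure: 0.6 x own-state burden + 0.3 x neighboring states
    + 0.1 x national baseline, each pre-scaled to 0-10."""
    return 0.6 * own_state + 0.3 * neighbor_avg + 0.1 * national

def readiness(bed_band: float, accreditation: float, state_tier: float) -> float:
    """Proxy readiness: 0.5 x bed band + 0.3 x accreditation flags
    + 0.2 x state economic tier, each pre-scaled to 0-10."""
    return 0.5 * bed_band + 0.3 * accreditation + 0.2 * state_tier

def composite_risk(exp: float, ready: float, capacity: float) -> float:
    """Risk = Exposure x (10 - Readiness) x sqrt(Capacity).
    Readiness is inverted: well-prepared hospitals score lower risk."""
    return exp * (10.0 - ready) * math.sqrt(capacity)
```

The square root is the bed-count temper: sqrt(700/50) ≈ 3.7, so a 700-bed hospital weighs roughly 3.7× a 50-bed one rather than 14×.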
2.3 Tier thresholds and caveats
Distribution-based tiering
Tiers are assigned by composite-risk rank, not absolute thresholds — guaranteeing a balanced rollout plan.
Critical · Top 15% · 9 hospitals
High · Next 25% · 15 hospitals
Standard · Next 35% · 21 hospitals
Lower Priority · Bottom 25% · 16 hospitals
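Because tiers are rank-based rather than threshold-based, tier assignment reduces to a sort followed by cumulative cuts. A sketch under stated assumptions: the fractions are those listed above, `assign_tiers` is a hypothetical helper, and the boundary rounding shown is an assumption (the published tier counts come from the actual ranking).

```python
def assign_tiers(risks: dict[str, float]) -> dict[str, str]:
    """Assign each hospital a tier by composite-risk rank.

    Top 15% Critical, next 25% High, next 35% Standard,
    bottom 25% Lower Priority (cumulative cuts at 15/40/75/100%).
    """
    ranked = sorted(risks, key=risks.get, reverse=True)  # highest risk first
    n = len(ranked)
    cuts = [(0.15, "Critical"), (0.40, "High"),
            (0.75, "Standard"), (1.00, "Lower Priority")]
    tiers, start = {}, 0
    for frac, label in cuts:
        end = round(n * frac)
        for name in ranked[start:end]:
            tiers[name] = label
        start = end
    return tiers
```

Rank-based cuts guarantee every tier is populated regardless of how the composite-risk distribution shifts between quarterly refreshes.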
Caveats
Proxy data dominates readiness. 56 of 61 readiness scores are estimated, not measured. Phase 2 closes this gap.
NCDC data is 2021-2024 vintage. Live NCDC was unavailable during build; structural state burden patterns are epidemiologically stable, but absolute counts shift year-to-year.
No per-capita normalization. Lagos's high cholera burden partly reflects population scale. Phase 2 enhancement.
Methodology rewards well-prepared facilities. A well-resourced federal hospital in an outbreak hotspot may rank Standard — that is intentional. NHRI ranks marginal need, not absolute exposure.
Section 03
What the data tells us.
Three views of the index — distribution, geography, and the full ranking. Every number traces back to data/nhri/nhri_final.csv.
3.1 Tier distribution & geographic concentration
Tier distribution · n=61
61
Hospitals
Critical 9 · 15%
High 15 · 25%
Standard 21 · 34%
Lower Priority 16 · 26%
Critical Tier
Risk concentrates in Lagos & Kano.
All 9 Critical-tier hospitals sit in two states. This isn't an artifact — it reflects multi-disease outbreak burden meeting variable readiness in two of Nigeria's most populous catchments.
3.2 Where is overall outbreak burden worst — and what's driving it?
Top 10 states by combined cases across 7 tracked diseases. Bar length = total burden; segments = disease mix. The northern states dominate, but for different reasons — Borno is hit hardest by measles + cholera, Kano by diphtheria, Yobe by meningitis.
Footnote: the chart shows absolute case counts. Smaller-volume diseases like Mpox (~780 national) and Yellow Fever (~1,600 national) appear visually small here but matter clinically — see the per-disease cards below for their state-level patterns.
3.3 What disease is each region hot for?
For each of the 7 tracked diseases, the top 5 states by case count. Each card carries a regional pattern label — useful for explaining the geographic concentration to non-epidemiologist audiences.
How to read this together. The first chart says "Borno, Kano, and Yobe carry the bulk of national outbreak burden." The disease cards explain why — Borno is hit by measles + cholera, Kano by diphtheria, Yobe by meningitis. These are the classic Nigerian outbreak patterns: the meningitis belt in the north, the diphtheria vaccination-gap states, the southwest forest belt for Lassa. The Exposure score in NHRI inherits this structure.
3.4 Full ranked hospital list
All 61 hospitals, ranked by composite risk. Click any column header to sort. Filter by tier with the buttons below. Prefer to explore visually? Jump to the interactive map ↓
Columns: Rank · Hospital · State · Tier · Exp · Read · Cap · Risk · Top driver · Data
3.5 Key findings
1. Risk concentrates geographically. All nine Critical-tier hospitals sit in just two states: Lagos (6) and Kano (3). The donut chart and the Critical list show this clearly — not because the methodology biased toward population centers, but because both states carry high multi-disease burden (Kano in diphtheria + meningitis; Lagos in mpox + cholera) while their hospital readiness is mixed. The top of the heatmap reinforces this: Borno, Kano, Yobe, and Katsina dominate cumulative case counts.
2. Federal teaching hospitals appear lower than expected. UBTH (Edo, in the Lassa belt) ranks #26 — Standard tier. UNTH and OAUTHC also land Lower Priority. This is the methodology working as designed: the formula Risk = Exposure × (10 − Readiness) × √Capacity rewards well-prepared facilities by lowering their marginal need score. A federal TH with strong readiness can absorb an outbreak surge without AIMS adding much; a smaller hospital in the same state cannot. NHRI flags where AIMS adds the most marginal value, not where outbreaks are largest.
3. OAK Hospital validates the framework. OAK is one of only five hospitals with directly-measured readiness data (from a 20 April 2026 site visit that documented weak IPC, congested layout, and only one wash station per floor). The methodology placed it at #4 Critical — independent confirmation that the score surfaces real, on-the-ground gaps. If proxy scoring placed OAK in Standard or Lower while measured scoring placed it Critical, that would be a methodology failure. Both agree.
4. The proxy-data caveat we cannot un-flag. Of 61 readiness scores, 56 are estimated from proxy features (hospital type, beds, accreditation) — not measured. The methodology applies a 15% confidence dampener to proxy scores, but a federal TH with hidden IPC weakness could still be misranked. Phase 2 closes this gap: convert the 56 estimated scores to measured via the remote-assessment campaign already in motion. Until then, every estimated row in the ranked table carries an EST tag.
From measurement to mitigation
How AIMS closes the readiness gap.
NHRI tells you where the gaps are. AIMS is what closes them.
The five-factor outbreak-readiness scorecard measures where Nigerian hospitals lose ground when an outbreak hits. AIMS sensors target four of those five directly — with measurable, real-time signal on the dimensions that matter most when time-to-isolation determines secondary cases.
IPC strength · 25%
Continuous IPC monitoring.
Sensors track handwashing compliance, surface contamination, and PPE adherence. Invisible breaches become real-time alerts — closing the gap that drives weak IPC tier scoring at sites like OAK Hospital.
Isolation availability · 20%
Functioning isolation, not just rooms.
Tracks isolation-room utilization, flow breaches, and patient movement — converting "we have isolation rooms" into "we have functioning isolation discipline."
Surge headroom · 20%
Minute-level surge telemetry.
Real-time bed, staff, and supply data replaces weekly reports — surfaces saturation risk hours before a ward overflows, enabling proactive surge decisions.
Clinical protocol adherence · 20%
Drift detection, not drift discovery.
Verifies whether donning order, isolation timing, and sample-handling protocols are actually followed — surfaces drift before it compounds into outbreak escalation.
The fifth factor — staff training depth (15%) — gets indirect lift: sensor data feeds the JHU-led clinical training program, turning every shift into a teaching dataset.
This is why OAK Hospital — flagged Critical (#4) by NHRI with measured weak IPC and limited surge headroom — is exactly the deployment profile AIMS is designed for. The methodology doesn't just locate risk; it identifies where AIMS delivers the most measurable readiness uplift per dollar.
Section 04
The 24 hospitals that anchor the first deployment waves.
The 9 Critical-tier hospitals are detailed below — each gets its own card showing what's driving its rank. The 15 High-tier hospitals follow as a compact list. Standard and Lower-Priority hospitals appear in the full ranked table in §3.4.
All 9 Critical-tier hospitals sit in Lagos (6) or Kano (3). Each card shows the three component scores, what's driving the rank, and whether the underlying readiness data is measured or estimated.
Significant gap between exposure and readiness, but less acute than Critical. Edo state's 4 hospitals concentrate here — they sit in the Lassa belt with mid-range readiness scores.
High Tier
15 hospitals · ranked by composite risk
Validation case · OAK Hospital, Lagos
Of the 9 Critical-tier hospitals, OAK is the only one with directly-measured readiness data — assessed on a 20 April 2026 site visit. The visit documented weak IPC (one wash station per floor), congested layout, and unreliable internet. The methodology placed OAK at #4 Critical, independent of any proxy assumptions. That alignment between measured ground truth and the algorithmic ranking is the strongest single signal that NHRI surfaces real, on-the-ground gaps — not statistical artifacts. Once Phase 2 converts the other 56 hospitals from estimated to measured, every Critical card in this section will carry the same evidentiary weight.
Section 05
What to do, tier by tier.
A four-wave deployment sequence with concrete cadences, clinical liaison structures, and reporting expectations. Each tier gets a different operational posture — accelerated for Critical, regional-shared for High, centralized for Standard, self-service for Lower Priority.
5.1 Deployment sequencing
Program kickoff → Day 240 · ~8 months
WAVE 1 · 9 Critical
WAVE 2 · 15 High
WAVE 3 · 21 Standard
WAVE 4 · 16 Lower
Day 0 · Day 60 · Day 120 · Day 180 · Day 240
Wave boundaries are not deadlines — they are the latest each tier should onboard. Critical hospitals can start as early as Day 0 (program kickoff). Adjacent waves overlap to keep the JHU clinical-liaison team utilized continuously.
5.2 Per-tier playbook
Five operational dimensions per tier: cadence (when), clinical liaison (who), training (how often), reporting (to whom), escalation (when this tier triggers a response). Cadences below are grounded in WHO IPC programmatic norms and pandemic-preparedness practice.
Critical
9 hospitals
Maximum marginal value · accelerated everything.
📅
Cadence
Wave 1 · deployment within 60 days of program kickoff. Site preparation can begin Day 0.
🩺
Clinical liaison
Dedicated JHU clinical liaison per hospital. On-site presence during first 30 days post-deployment, then hybrid.
🎓
Training
Weekly training calls for first 90 days; biweekly thereafter. Mandatory IPC refresher every 6 months.
📊
Reporting
Weekly status to Federal MoH; monthly to U.S. State Department; quarterly to the AIMS steering committee.
🚨
Escalation
Automatic escalation on any sensor anomaly cluster. Pre-defined response playbook per top-driver disease.
High
15 hospitals
Significant gap · regional support model.
📅
Cadence
Wave 2 · deployment in days 60–120. Site preparation begins as Wave 1 hospitals hit operational steady state.
🩺
Clinical liaison
Regional liaison shared across 3–4 hospitals. Monthly site visits per hospital; remote support otherwise.
🎓
Training
Biweekly training for first 90 days; monthly thereafter. IPC refresher every 6 months.
📊
Reporting
Biweekly status to MoH state-level focal point; monthly rolled up to federal.
🚨
Escalation
Escalates on confirmed outbreak in catchment, or sensor anomalies sustained > 24 hours.
Standard
21 hospitals
Moderate gap · centralized support.
📅
Cadence
Wave 3 · deployment in days 120–180. Standard install protocol.
🩺
Clinical liaison
Centralized JHU support team; remote office hours; site visits by exception.
🎓
Training
Monthly training calls; on-demand video library. Annual IPC refresher.
📊
Reporting
Monthly status to state-level focal points; quarterly federal rollup.
🚨
Escalation
Escalates on multi-day patient deterioration patterns or recurring sensor anomalies.
Lower Priority
16 hospitals
Strong readiness or low exposure · self-service.
📅
Cadence
Wave 4 · deployment in days 180–240. Coordinated with annual budget cycles.
🩺
Clinical liaison
Self-service tier. Documentation portal + monthly office hours; named JHU contact for escalations.
🎓
Training
Quarterly training; on-demand library is primary. Annual IPC refresher.
📊
Reporting
Quarterly status to state-level focal points; semi-annual federal rollup.
🚨
Escalation
Escalation by exception only. Auto-promote to Standard tier if Phase-2 reassessment shifts the tier.
Tier mobility. Tiers are not static. Quarterly NHRI refreshes will move hospitals between tiers as outbreak conditions and readiness change. A Lower-Priority hospital that experiences a Lassa surge in its catchment can promote to High mid-program; a Critical hospital that completes a major IPC remediation can demote to Standard. The deployment sequence accommodates this — the playbook escalation rules are also the demotion rules.
Section 06
What gets built next, and why.
NHRI v1.3 is a defensible starting point — but the methodology has known limits, and the operating model has known gaps. These five Phase 2 initiatives close them. They are sequenced so each unlocks the next: live data feeds the remote-assessment campaign, which feeds the counterfactual model, which feeds the NCDC integration proposal.
01
Live NCDC data integration
Data Foundation
Replace the 2021–2024 curated state burden with live weekly pulls from NCDC, restoring real-time outbreak signal. If the public NCDC site remains intermittent, build a connector to SORMAS — Nigeria's primary digital surveillance platform — via its API, under a formal MoH partnership.
Impact. Exposure scores update weekly instead of cumulatively; tier changes reflect current epidemiology.
Target
Q3 2026
Depends on
NCDC partnership · MoH MOU
02
Remote-assessment completion
Readiness Truth
Convert all 56 estimated readiness scores to measured by completing the digital site-assessment campaign already in motion. Each hospital submits photos, an IPC checklist, and infrastructure documentation; an AIMS reviewer scores against the same 5-factor scorecard used for visited hospitals.
Impact. Removes the proxy-data caveat from Findings §3.5 and Methodology §2.3. Every NHRI tier becomes evidentiary, not inferential.
Target
Q3 2026
Depends on
CMD email response cycle
03
Counterfactual outbreak modeling
Narrative Power
Model the 2018 Lassa outbreak in Edo / Ondo and the 2022 diphtheria surge in Kano: "if AIMS sensor monitoring had been deployed at these hospitals, how would the mortality and hospital-stay curves have shifted?" Carefully scoped — uses literature-derived sepsis-management mortality reductions, not speculation.
Impact. Quantifies the value of AIMS in lives saved and hospital-days reduced — the most powerful single artifact for U.S. State Department appropriations review.
Target
Q4 2026
Depends on
Initiative 01 (live data)
04
AIMS → NCDC surveillance integration
National Infrastructure
Sensor anomaly clusters from deployed AIMS hospitals feed back into NCDC's outbreak surveillance system as an early-warning signal. A hospital that sees a sustained vital-sign anomaly cluster — particularly fever + tachypnea + tachycardia — flags as a potential outbreak early indicator before lab-confirmed case clusters.
Impact. Repositions AIMS from "imported product" to "Nigerian pandemic-preparedness infrastructure." Secures long-term federal sponsorship and unlocks national-scale-up funding paths.
Target
Q1 2027
Depends on
Wave 1 + 2 deployed · NCDC partnership
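Initiative 04's trigger, a sustained fever + tachypnea + tachycardia cluster across distinct patients, could be prototyped as a simple window check. Everything concrete here is a placeholder: the vital-sign cutoffs, the three-patient cluster size, and the record shape are illustrative assumptions, not clinical thresholds from the AIMS spec.

```python
def outbreak_early_flag(window: list[dict]) -> bool:
    """Flag a potential outbreak early-indicator for one hospital window.

    Each record looks like {"patient": str, "temp_c": float, "rr": int,
    "hr": int}. Fires when fever + tachypnea + tachycardia co-occur in
    3+ distinct patients; all cutoffs are illustrative placeholders.
    """
    def anomalous(v: dict) -> bool:
        return v["temp_c"] >= 38.0 and v["rr"] >= 24 and v["hr"] >= 100

    flagged = {v["patient"] for v in window if anomalous(v)}
    return len(flagged) >= 3
```

Deduplicating by patient matters: one deteriorating patient generating many anomalous readings is a clinical event, not an outbreak signal.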
05
Population-normalized exposure
Methodological Refinement
Add a secondary Exposure score normalized by state population. Current exposure scores partially reflect catchment size — Lagos's high cholera count is amplified by 25 million residents. A per-capita view distinguishes "lots of people, lots of cases" from "small population, high attack rate."
Impact. A second view of Critical-tier ranking that may surface smaller, harder-hit catchments otherwise masked by the absolute-burden methodology.
Target
Q4 2026
Depends on
2023 NPC population estimates
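Initiative 05 is a one-line normalization: divide each state's case count by its population and rescale per 100,000 residents. A sketch (the helper name is hypothetical, and any figures used with it should come from the NPC estimates, not the placeholders shown in tests):

```python
def per_capita_exposure(cases: dict[str, int],
                        population: dict[str, int],
                        per: int = 100_000) -> dict[str, float]:
    """Attack-rate view: cases per 100k residents per state, separating
    'lots of people, lots of cases' from 'small population, high
    attack rate'."""
    return {state: cases[state] / population[state] * per for state in cases}
```

On this view, a state with 500 cases among 500k residents (100 per 100k) outranks one with 5,000 cases among 25M residents (20 per 100k), even though its absolute burden is 10× smaller — exactly the masking effect the initiative targets.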
Companion · Live, interactive
Explore the readiness map.
State fill (purple) shows outbreak burden. Pin color shows hospital readiness tier (red Critical · amber High · blue Standard · green Lower). Filter by tier, switch disease layer, or run a pressure analysis to see how tiers shift under simulated surge — all without leaving the page.