ADEK · Leadership · Overview
01 · Sector Overview

93 low-fee schools and a 0.08-point gap that shifts a grade band.

An integrated leadership dashboard combining the low-fee private schools sector analysis with a live weighted-vs-regular scoring model simulator. Every judgment and score recalculates automatically as you adjust weights, scenarios, and evidence source configurations.

Schools in scope
93
Low-fee private schools across Abu Dhabi, Al Ain, Al Dhafra
Framework indicators
17
Across 6 Performance Standards · 71 elements
Dominant judgment
Acceptable
~47% of sector · structural, not geographic
Weighting delta
0.08
Points separating Acceptable (3.49) from Good (3.57) on identical evidence
Same school · Weighted Model

3.49 ACCEPTABLE

Classroom-proximate evidence (LO 40%, WS 30%) exposes structural weakness in instructional depth, assessment use, and curriculum adaptation. The weighted model starts from the premise that not all evidence is equally informative.

Same school · Regular Model

3.57 GOOD

Equal weighting of all evidence sources averages out structural weaknesses. Interview and document scores compensate for weak classroom evidence. The same school moves to Good.
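How a gap like this arises can be sketched in a few lines. The following is a minimal illustration with hypothetical source scores, not the real school's evidence; it uses the Group B weights (LO 40 / WS 30 / DOC 20 / INT 10) and the p.8 band thresholds documented later on this page.

# Hypothetical source scores on the 1-6 scale: weak classroom evidence, strong paperwork
scores  = {"LO": 3.0, "WS": 3.0, "DOC": 4.0, "INT": 5.0}
weights = {"LO": 0.40, "WS": 0.30, "DOC": 0.20, "INT": 0.10}  # Group B evidence weights

weighted = sum(scores[s] * weights[s] for s in scores)  # 3.40
regular  = sum(scores.values()) / len(scores)           # 3.75

def band(x):
    # Judgment bands per the PDF p.8 mapping: Good starts at 3.50
    for floor, label in [(5.5, "Outstanding"), (4.5, "Very Good"), (3.5, "Good"),
                         (2.5, "Acceptable"), (1.5, "Weak")]:
        if x >= floor:
            return label
    return "Very Weak"

print(round(weighted, 2), band(weighted))  # 3.4 Acceptable
print(round(regular, 2), band(regular))    # 3.75 Good

The equal-weight average lets the document and interview scores pull the result over the Good threshold; the weighted sum does not.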

By Fee Category

Internal gradient in the sector

By Municipality

Structural, not geographic

By Curriculum

Indian profile strongest

Strongest Pattern

Pastoral & cultural foundations are universal

2.1 Personal Development appears as a strength in 100% of Very Good, 85% of Good, 86% of Acceptable schools. This is the sector's bedrock — not its differentiator.

Weakest Pattern

Instructional core is the recurring gap

Critical thinking (3.1.5), differentiation (3.1.4), assessment (3.2.x) and leadership (6.1.2) recur at 90–100% of Acceptable schools. This is the red zone the sector is trapped in.

Methodology Signal

0.08 points · one grade band

A school judged Acceptable under differential weighting of classroom-observable evidence shifts into Good under equal-weight averaging. Evidence-source methodology matters as much as the evidence itself.

⚖ New in v5 · Evidence Source Simulator

Live LO / WS / DOC / INT weighting

Adjust the four evidence-source weights directly — Lesson Observations, Work Samples, Documents, Interviews — and watch the overall school verdict change in real time. Six scenario presets included.

★ New in v5 · Indicator Deep-Dive Cards

17 indicator cards · element-level calculation

Click any of 17 indicator cards to see the full element-level calculation: evidence chips (LO/WS/DOC/INT), weighted vs regular math, priority flags for Very Weak elements, and band-divergence warnings.

Data scope note
This dashboard synthesises the low-fee private schools sector study (93 schools), the ADEK Inspection Weighting Model specification (17 indicators, 71 elements), and the weighted-vs-regular scoring model simulator results. True per-school filtering requires the underlying school-level datasets; the structure is ready to expand into per-school drill-downs when that data is uploaded.
02 · Sector Profile

Who are the low-fee schools?

The 93 schools break down by fee band, municipality, and curriculum. The sector is structurally diverse — but the weakness cluster that keeps them at Acceptable is consistent across almost all configurations.

Fee Distribution

Two fee tiers

Band | Schools | %
Very Low | 41 | 44.1%
Low | 52 | 55.9%
Municipality

Abu Dhabi dominant

Municipality | # | %
Abu Dhabi | 54 | 58.1%
Al Ain | 36 | 38.7%
Al Dhafra | 3 | 3.2%
Curriculum

MoE largest, Indian strongest

Curriculum | # | Good+
MoE | 41 | 51%
Indian (CBSE) | 30 | 80%
American | 12 | 58%
British | 8 | 38%
Other | 2 | 50%
Judgment Distribution

Sector is concentrated at Acceptable

Performance by Curriculum

Indian schools lead outcomes

Structural, not geographic
The Acceptable clustering is not a municipality effect or a fee effect — it's a structural pattern. Regardless of fee band or location, schools are constrained by the same Tier 1 weakness cluster: instructional depth, assessment use, curriculum adaptation, self-evaluation accuracy, and instructional leadership.
03 · Judgment Analysis

What separates Very Good from Acceptable?

Comparative consistency across the three dominant judgment bands, showing where strengths cluster and where recurring gaps appear across the framework.

VERY GOOD · 7 schools

Deep instructional practice

  • 100% strong in 1.3 Learning Skills
  • 100% strong in 3.1 Teaching
  • 86% strong in 3.2 Assessment
  • 86% strong in 6.1 Leadership
  • Clear use of student data to differentiate
GOOD · 41 schools

Strong foundations, patchy depth

  • 95% strong in 2.1 Personal Development
  • 88% strong in 5.1 Health & Safety
  • Mixed in 3.1 — teaching solid in 60%
  • Weak in 3.2 — assessment use in only 45%
  • Leadership often transactional, not strategic
ACCEPTABLE · 44 schools

Red zone is instructional

  • 90%+ weak in 3.1.4 Differentiation
  • 93% weak in 3.1.5 Critical thinking
  • 89% weak in 3.2 Assessment
  • 91% weak in 4.2 Curriculum adaptation
  • 86% weak in 6.1.2 Instructional leadership
Band Consistency

Where bands agree — and diverge

Green rows are universally strong across all three bands (low discriminating power). Red rows are the Tier 1 differentiators.

Indicator | VG | G | A | Gap
2.1 Personal Development | 100% | 95% | 86% | 14pp
5.1 Health & Safety | 100% | 88% | 82% | 18pp
1.3 Learning Skills | 100% | 62% | 22% | 78pp
3.1 Teaching | 100% | 60% | 14% | 86pp
3.2 Assessment | 86% | 45% | 11% | 75pp
4.2 Curriculum Adaptation | 86% | 40% | 9% | 77pp
6.1 Leadership | 86% | 50% | 14% | 72pp
Divergence Chart

Differentiating power per indicator

04 · Subject Diagnostics

Each subject tests a different aspect of the framework.

Subjects aren't just content areas — they're diagnostic lenses. Arabic SL reveals instructional design failure. Mathematics exposes reasoning pedagogy. Click any subject for the full diagnostic profile.

Inspection Lens by Subject

What each subject is designed to expose

Subject | Diagnostic Use | Critical Test | Risk
Arabic FL | Reveals whether learning is literal or analytical | Extended writing · standard Arabic fluency | High
Arabic SL | Exposes instructional design weaknesses | Communicative speaking vs memorised responses | Very High
English | Tests literacy becoming deep comprehension | Extended writing · reading inference | High
Mathematics | Diagnoses reasoning pedagogy | Problem-solving in unfamiliar contexts | High
Science | Reveals inquiry vs recall | Independent investigation · hypothesis formation | High
Social Studies | Tests progression from facts to analysis | Data interpretation · cause-and-effect | Moderate–High
Islamic Education | Distinguishes memorisation from interpretation | Tajweed accuracy · evidence use | Moderate
05 · Phase Analysis

The secondary cliff and the primary strength.

Phase-level performance reveals where the sector concentrates strength (early phases, pastoral) and where it falls away (secondary depth of learning, curriculum choice).

Phase × Indicator Heatmap

Strength by phase

Secondary Cliff

Performance drop in secondary phase

Secondary phase performance drops sharply in 1.3, 3.1, 3.2, 4.2, 6.1 — the same Tier 1 cluster.

KG

Strongest phase

Strong in personal development, early literacy, care. Weakness in formal assessment progression.

Primary

Solid foundations

Consistent strength in 2.1, 5.1. Emerging weakness in 3.1.4 differentiation.

Middle

Transition pressure

Teaching quality variable. Assessment use declines. Critical thinking rarely explicit.

Secondary

Weakest phase

Depth-of-learning cliff. Poor subject choice, weak progression, inconsistent Arabic.

06 · Master Heatmap

The red-zone cluster keeping schools at Acceptable.

The master heatmap below shows weakness frequency across all 71 framework elements by judgment band. Red cells are Tier 1 — where intervention produces the biggest movement. Green cells are already universally strong.

WEAKNESS FREQ: <20% · 20–40% · 40–60% · 60–80% · 80%+
TIER 1 · INSTRUCTIONAL CORE

Where movement comes from

Differentiation, critical thinking, assessment use, curriculum adaptation, instructional leadership. Intervention here produces the biggest grade-band movement.

Weight ≥ 5% per element · combined ≈ 45%
TIER 2 · SYSTEM ENABLERS

Where enabling systems live

Self-evaluation accuracy, parent/community engagement, governance, curriculum design review. Needed for sustained improvement but slower to change.

Weight 3–5% per element · combined ≈ 30%
TIER 3 · STRUCTURAL BASELINE

Already strong — maintenance

Safeguarding, health & safety, personal development, compliance. Universally strong across the sector — keep, don't over-weight.

Weight ≤ 3% per element · combined ≈ 25%
07 · Very Good Thresholds

The five defining features of a Very Good school.

Only 7 schools in the sector reach Very Good. What separates them from Good-tier schools is not one indicator — it's a tightly coupled cluster of five features that appear together.

1

Pedagogical coherence

Consistent high-quality teaching across subjects and phases. Not isolated pockets — systematic. All teachers observed, not just the strongest.

2

Assessment drives teaching

Student data genuinely informs next lesson. Not just recorded. Visible in planning, differentiation, and targeted intervention.

3

Curriculum is adapted, not just delivered

The curriculum plan reflects the school's students — local context, prior attainment, cultural framing. Not a copy-paste from curriculum provider.

4

Accurate self-evaluation

The school's own judgment of itself matches inspection evidence within one grade band. Honest, granular, evidenced. Not promotional.

5

Instructional leadership, not managerial

Principal and middle leaders actively shape teaching quality. Regular observation, coaching, feedback loops. Not just compliance monitoring.

Compound effect

All five appear together

No Very Good school has only four of the five. The cluster is mutually reinforcing — each feature enables the next. An Acceptable school with one of these features drifts back without the others.

08 · Weighting Model · How It Works

Three levels of weighted calculation, running live.

The ADEK Inspection Weighting Model calculates judgments at three nested levels. Evidence feeds into elements. Elements feed into indicators. Indicators feed into the overall school judgment. Each layer applies weights — and override rules can cap the outcome regardless of the math.

LEVEL 1

Evidence → Element

Each element receives evidence from multiple sources (lesson observation, work scrutiny, documents, interviews, data). Sources are weighted by Group A / B / C rules depending on the element type.

element_score = Σ (source_score × source_weight) / Σ weights
LEVEL 2

Element → Indicator

Elements within an indicator are not equal. A key element (e.g. 3.1.4 Differentiation) may carry 25% of its indicator while a contextual element carries 5%. Element weights are indicator-specific.

indicator_score = Σ (element_score × element_weight)
LEVEL 3

Indicator → Overall

The 17 indicators then feed the overall school judgment, each with its own weight. Overrides (safeguarding, persistent poor teaching) can cap the outcome regardless of the aggregate score.

overall = Σ (indicator_score × indicator_weight) → apply overrides
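As a sketch, the three levels compose like this in plain Python; the function and variable names are illustrative rather than the engine's actual API, and weights are assumed to be normalised fractions at each level:

def element_score(src_scores, src_weights):
    # Level 1: evidence -> element; dividing by the weight sum rescales if a source is missing
    total = sum(src_weights[s] for s in src_scores)
    return sum(src_scores[s] * src_weights[s] for s in src_scores) / total

def indicator_score(elem_scores, elem_weights):
    # Level 2: elements -> indicator; element weights sum to 1.0 within each indicator
    return sum(elem_scores[e] * elem_weights[e] for e in elem_scores)

def overall_score(ind_scores, ind_weights, caps=()):
    # Level 3: indicators -> overall, then apply any override caps
    overall = sum(ind_scores[i] * ind_weights[i] for i in ind_scores)
    for cap in caps:
        overall = min(overall, cap)
    return overall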
Six Rule Systems

The engine's logic

  1. Evidence weighting — Group A/B/C assign different source weights per element type
  2. Element weighting — not all elements within an indicator are equally impactful
  3. Indicator weighting — the 17 indicators have different weights on overall judgment
  4. Override rules — safeguarding failures or persistent weak teaching cap the outcome
  5. Confidence scoring — missing evidence, single inspector, or contradictions reduce confidence
  6. Missing-evidence redistribution — when a source is absent, remaining weights rescale proportionally (see the sketch below)
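Rule 6 in action, assuming the Group B weight profile with the interview source absent (a hypothetical case):

weights = {"LO": 0.40, "WS": 0.30, "DOC": 0.20, "INT": 0.10}  # Group B profile
present = {k: w for k, w in weights.items() if k != "INT"}    # interview evidence absent
total = sum(present.values())                                 # 0.90
rescaled = {k: w / total for k, w in present.items()}
print(rescaled)  # LO ~0.444, WS ~0.333, DOC ~0.222; proportions preserved, sum is 1.0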
Why this matters

Equal weighting hides structural weakness

In a regular equal-weight model, strong scores in document evidence (policies, plans) average out weak classroom evidence. The weighted model prioritises classroom-proximate evidence — especially lesson observations and student work scrutiny — because these are what actually demonstrate learning quality.

The same school, on the same evidence, can be Acceptable (3.49) under the weighted model and Good (3.57) under the regular model. Methodology is not neutral.

09 · Evidence Weights

Three groups. Three different weight profiles.

Not every element can be assessed in the same way. The framework defines three evidence groups based on what kind of evidence is most informative for that element type.

GROUP A · ATTAINMENT & PROGRESS

Classroom-verified attainment

  • Lesson observation — 45%
  • Work scrutiny — 35%
  • SBA / attainment data — 20%

Applies to PS1 attainment & progress indicators (1.1, 1.2). Per PDF Section 3.

GROUP B · TEACHING & LEARNING

Classroom-proximate

  • Lesson observations — 40%
  • Work samples — 30%
  • Documents — 20%
  • Interviews — 10%

Applies to PS2, PS3, PS4, PS5 learning elements — the most frequently used rule.

GROUP C · SAFEGUARDING & LEADERSHIP

Policy & systems led

  • Documents — 40%
  • Interviews — 40%
  • Observation / walks — 20%

Applies to PS5.1 Safeguarding and all PS6 leadership elements.

Why Group C elevates interview and documents
Safeguarding culture and leadership quality cannot be adequately assessed through classroom observation alone. A policy audit or a DSL interview reveals systemic issues that 20 minutes in a Year 4 classroom never could. Group C intentionally inverts the weighting to put systems evidence first.
Interactive evidence mixer

Adjust source scores (1–6) and watch the element score recompute

This mixer uses the Group B weights (40/30/20/10). Pick a score for each evidence source — the weighted element score and judgment band update live.

Weighted element score
3.80
Judgment band
GOOD
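One slider combination that produces the 3.80 shown (hypothetical values): LO 4 · WS 4 · DOC 3 · INT 4 → (4 × 0.40) + (4 × 0.30) + (3 × 0.20) + (4 × 0.10) = 1.60 + 1.20 + 0.60 + 0.40 = 3.80, which falls in the Good band (3.50 – 4.49).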
10 · ★ New · Evidence Source Simulator

Live weighted vs regular scoring simulator.

Adjust the four evidence-source weights — Lesson Observations, Work Samples, Documents, Interviews — and watch the overall school verdict recompute against the real 17-indicator, 71-element model. Six scenario presets reproduce the comparative tests from the methodology paper.

Weighted Model · Your Configuration
3.49
Acceptable
− evidence-weighted calculation
Regular Equal-Weight Model · Fixed Reference
3.57
Good
− equal-weight baseline (25/25/25/25)
Evidence Source Weights · Weighted Model

Configure how each evidence type is counted

The default reflects the PDF methodology: LO 40 · WS 30 · DOC 20 · INT 10. Try a preset or build your own. The Regular model above does not change — it is the flat 25/25/25/25 baseline.

17-Indicator Score Comparison

Weighted vs Regular at the indicator level

Divergence Per Indicator

Where the two models disagree most

Orange bars mark indicators where the weighted model produces a lower score than the regular model. These are the indicators that reveal structural weakness when evidence is weighted properly.

Live Indicator Breakdown

Every indicator score, both models, live

Indicator | Weight | Weighted | Regular | Δ | Band Change
What the simulator demonstrates
Moving from Equal Weight to Observation-Heavy (60/25/10/5) pushes the school deeper into Acceptable because classroom evidence is genuinely weaker than document/interview evidence — the regular model averages this away. Moving to Interview-Heavy (20/20/20/40) pulls toward Good because interview evidence is the most generous source. The Strict Classroom-Proximity preset (50/40/5/5) is the harshest diagnostic stance and most closely mirrors classroom reality.
11 · ★ New · Indicator Deep-Dive

17 indicator cards · click any card for the full calculation.

Every indicator card shows its weighted score, regular score, and whether the band diverges between the two models. Cards with an orange left border are where methodology changes the verdict. Click any card for the element-level evidence breakdown with LO / WS / DOC / INT chips and priority flags.

★ Live · All 17 Indicators · Auto-Computed Overall

The full school judgment, computed live

Every indicator's weighted and regular score, multiplied by its overall weight, summing to the school total. Adjust any indicator weight slider to see the overall recompute. Click any row to dive into that indicator's full equation cascade.

WEIGHTED MODEL
Σ weights:
100.0%
REGULAR MODEL
Σ weights (equal):
100.0%
Code | Indicator | Group | Weight in overall | W. score | R. score | W. contrib. | R. contrib. | Δ band
TOTAL CONTRIBUTIONS →
÷ sum of weights →

How to read this: Each row shows one indicator's live weighted and regular scores (computed from current element-level evidence values). The contribution columns show score × weight. The TOTAL row sums all contributions and divides by the sum of weights to produce the school overall. Drag any weight slider to redistribute influence and watch the overall band shift.
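A worked row, with hypothetical numbers: an indicator scoring 3.20 with an 8% overall weight contributes 3.20 × 0.08 = 0.256; one scoring 4.00 at 5% contributes 0.200. Summing all 17 contributions and dividing by the weight sum (1.00 when Σ weights reads 100.0%) yields the overall score.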

Priority flags
Elements marked PRIORITY are elements scoring in the Weak or Very Weak range (1.x–2.x) that independently hold an indicator below Good. In particular: 3.1.4 Differentiation, 3.2.4 Assessment impact, 4.2.1 Curriculum adaptation, 5.2.4 Inclusion support. These are the elements where the weighted and regular models diverge most visibly.
12 · Element Calculator

71 sliders. Live element-level calculation.

The full element-level calculator. 17 accordions for 17 indicators. 71 element sliders (1–6 scale). Every adjustment recomputes: element → indicator → overall, with side-by-side weighted and regular scores.

Live Overall Score
4.00
GOOD

Active mode
Weighted Model
Per-Indicator Summary
13 · Model Comparison

Same school. Same evidence. Two verdicts.

A side-by-side walk-through of how the weighted and regular models diverge on the core comparative test from the methodology paper. The difference is not small edge cases — it is a full grade band on identical underlying evidence.

Weighted Model
3.49
Acceptable
Regular Equal-Weight Model
3.57
Good
Indicator Contribution

Where the 0.08 gap comes from

Two Stories

What each model is saying

Weighted says: "Classroom practice is genuinely weak in 3.1.4 (Differentiation), 3.1.5 (Critical thinking), 3.2.4 (Assessment impact), and 4.2 (Curriculum adaptation). These carry disproportionate weight because they most directly affect learning. Strong policies and interviews do not compensate."
Regular says: "Averaged across 71 elements equally, the school is doing enough: strong pastoral care, compliant safeguarding, engaged leadership interviews, and acceptable classroom practice balance out. Good."

Both are mathematically correct. Only the weighted model is diagnostically useful — it tells inspectors and leaders where to intervene.

Scenario Results

Four simulation runs from the methodology paper

Scenario | Weighted | Regular | Δ | Verdict
1 · Strong school | 4.29 | 4.32 | −0.03 | GOOD (both)
2 · 1.1/1.2 split | 4.01 | 4.08 | −0.07 | GOOD (both)
3 · Core-weakness test | 3.49 | 3.57 | −0.08 | ACC vs GOOD
4 · Very weak school | 3.69 | 3.72 | −0.03 | GOOD (both)

The weighted model diverges from the regular model most sharply at the Good/Acceptable threshold — which is exactly where inspection stakes are highest. At both the top and bottom of the scale, the two models agree.

14 · Overrides & Confidence

When the calculation is not final.

The weighted model's output is subject to override rules (hard caps on certain failures) and a separate confidence score that reflects evidence quality. Both run live.

Override Rules · PDF Section 4

Four hard caps on the outcome

  1. Safeguarding element override (5.1.1): if Care, welfare & safeguarding including child protection is rated Weak or Very Weak, overall judgment cannot exceed Weak.
  2. Safeguarding indicator override (5.1): if Health & Safety overall is rated Weak, the overall judgment cannot exceed Weak.
  3. Statutory compliance cap: if any required statutory-compliance field returns "No", the overall judgment is capped at the level specified in compliance rules.
  4. Incomplete safeguarding alert: any incomplete required safeguarding field triggers a red alert and blocks judgment finalisation until resolved.
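A sketch of how these caps could sit on top of the aggregate score; the band ordering follows the p.8 mapping, and the function shape is illustrative rather than the engine's actual interface:

BANDS = ["Very Weak", "Weak", "Acceptable", "Good", "Very Good", "Outstanding"]

def apply_overrides(aggregate_band, elem_511_band, ind_51_band, statutory_cap=None):
    cap = len(BANDS) - 1                        # no cap by default
    if elem_511_band in ("Weak", "Very Weak"):  # Rule 1: 5.1.1 element override
        cap = min(cap, BANDS.index("Weak"))
    if ind_51_band == "Weak":                   # Rule 2: 5.1 indicator override
        cap = min(cap, BANDS.index("Weak"))
    if statutory_cap is not None:               # Rule 3: compliance-specified cap
        cap = min(cap, BANDS.index(statutory_cap))
    # Rule 4 is procedural: an incomplete safeguarding field blocks finalisation entirely
    return BANDS[min(BANDS.index(aggregate_band), cap)]

print(apply_overrides("Good", "Weak", "Acceptable"))  # Weak: the math says Good, the cap says no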
Confidence Deductions

Interactive confidence meter

Toggle conditions to see confidence drop in real time.

Confidence score
50%
MODERATE
Missing Evidence · Proportional Redistribution

What happens if an evidence source is absent

Untick any source. The remaining weights rescale proportionally so they sum to 100%. The confidence score takes a 10–15% deduction per missing source.

15 · Weighted Risk Model

Prioritising inspection focus by weighted risk.

A differentiated, risk-based inspection model that weights seven risk domains to prioritise inspection focus for low-fee schools. Adjust the weights below to see the top-10 priority reshuffle.

Seven Risk Domain Weights

Adjust weights · priorities rerank live

Top 10 Priority Elements

Ranked by composite risk

    16 · Global Models Comparison

    How other inspection systems weight evidence.

    A comparative view of how six global inspection frameworks balance classroom evidence, systems evidence, data-led evidence, and stakeholder voice. ADEK's weighted model sits closest to Ofsted and KHDA in its classroom-proximate emphasis.

    System | Country | Primary Weight | Approach | Risk-Based?
    Ofsted | England | Classroom evidence + student work | Deep dive sampling, leadership inquiry | Yes
    ERO | New Zealand | Leadership & self-review | Evaluation partnership · low-stakes | Partial
    Inspectie | Netherlands | Outcome data + classroom | Differentiated · proportional | Strong
    OECD | International | Comparative benchmarks | System review · country-level | Design dependent
    Singapore MOE | Singapore | Leadership + data | Quality assurance model | No
    KHDA | Dubai | Classroom + stakeholder voice | Similar structure to ADEK UAE SIF | Yes
    Classroom-weighted systems

    Ofsted · KHDA · ADEK weighted

    Prioritise LO + WS because they expose learning reality. Most diagnostically honest. Highest stakes for practice.

    Data-weighted systems

    Netherlands · Singapore

    Use assessment outcomes as primary signal. Efficient but can miss pedagogical fragility in schools where data look acceptable.

    Partnership-weighted systems

    ERO (New Zealand)

    Self-evaluation and leadership-led inquiry. Low-stakes. Requires a mature evaluation culture — not yet present in most low-fee schools.

    17 · Differentiated Design

    A proposed differentiated inspection model for low-fee schools.

    A three-stage differentiated model that concentrates inspection effort on the Tier 1 cluster — where movement actually happens — while reducing over-inspection of universally strong Tier 3 elements.

    STAGE 1 · TIER 1 DEEP DIVE

    40% of inspection time

    Concentrated lesson observation and work scrutiny in Tier 1 elements: 3.1 teaching, 3.2 assessment, 4.2 curriculum adaptation, 6.1 instructional leadership. This is where Acceptable schools actually diverge from Good.

    STAGE 2 · TIER 2 SAMPLING

    35% of inspection time

    Sampled review of Tier 2 elements: 1.3 learning skills, 6.2 self-evaluation accuracy, 6.4 governance. Enough to confirm direction, not enough to duplicate Tier 1 effort.

    STAGE 3 · TIER 3 COMPLIANCE

    25% of inspection time

    Light-touch compliance verification of Tier 3: safeguarding (required by regulation), health & safety, personal development. Already universally strong — resist over-inspecting.

    Do principles

    What the differentiated model requires

    • Use the weighted calculation engine live during inspection
    • Record evidence by source and by element
    • Apply override rules explicitly and document them
    • Report confidence score alongside judgment
    • Pre-inspection risk assessment based on prior data
    • Specialist inspectors for Tier 1 classroom observation
    Don't principles

    What the model deliberately avoids

    • Treating all 71 elements as equally informative
    • Over-weighting document evidence when classroom reality diverges
    • Relying on interview evidence without triangulation
    • Generating single-inspector judgments for high-stakes elements
    • Reporting the number alone — always with confidence + override status
    • Copy-paste inspection across schools with different profiles
    Recommended next steps (from the methodology paper)
    Validate (endorse weighting parameters and override logic) → Pilot (run weighted model in parallel with current model) → Calibrate (test boundary cases, refine weights, confirm confidence deductions) → Implement (integrate into inspection system, brief teams, update QA) → Communicate (prepare materials for schools and parents).
    18 · ★ PDF Weighting Reference

    The full model, as specified.

    Every parameter on this page is sourced directly from the ADEK Inspection Weighting Model methodology paper. Use the scenario switcher below to re-run the live engine under the PDF's two main simulations. Every score you see elsewhere on this dashboard is computed from this table of element and indicator weights — no hard-coded numbers.

    Run the PDF simulations live

    Which scenario should the engine run?

    Live · Weighted Model
    Choose a scenario above.
    Live · Regular Equal-Weight
    Choose a scenario above.
    What this switcher changes
    Scenario 1 (PDF Section 6): all 17 indicators rated broadly strong. Both models should return Good with a small gap — confirming the weighted model's internal consistency when quality is genuinely good.
    PDF SECTION 3 · Evidence Weighting Architecture

    Three source-weighting groups, by indicator family

    Group | Applies to | Evidence sources & weights
    Group A | PS1 · 1.1 & 1.2 (Attainment & Progress) | LO 45% · WS 35% · SBA/Data 20%
    Group B | 1.3, PS2, PS3, PS4, 5.2 | LO 40% · WS 30% · DOC 20% · INT 10%
    Group C | PS5.1 & all of PS6 (Safeguarding & Leadership) | OBS/LW 20% · DOC 40% · INT 40%

    Group C deliberately elevates interviews and documents because safeguarding and leadership quality cannot be adequately assessed through classroom observation alone.

    PDF SECTION 5 · Full Element Weight Reference

    All 71 elements — indicator membership and element weight

    Expand each indicator below to see the PDF's element-level weight table. Element weights within an indicator always sum to 100%.

    PDF SECTION 7 · Indicator Weights in the Overall Judgment

    Scenario 2 split — attainment and progress separated

    Code | Indicator | Group | Weight
    PDF p.8 · Judgment → Number

    Label ↔ value mapping

    Judgment | Value | Score range
    Outstanding | 6 | 5.50 – 6.00
    Very Good | 5 | 4.50 – 5.49
    Good | 4 | 3.50 – 4.49
    Acceptable | 3 | 2.50 – 3.49
    Weak | 2 | 1.50 – 2.49
    Very Weak | 1 | 1.00 – 1.49
    PDF p.12 · Confidence Score Bands

    Start at 100; deduct for evidence quality gaps

    Condition | Deduction
    Missing evidence source | −20
    Only one inspector providing evidence | −10
    Contradiction between evidence sources | −20
    No recent evidence data available | −15

    Confidence label bands

    85–100 HIGH · 70–84 SECURE · 50–69 MODERATE · <50 LOW
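    A sketch combining the deduction table and the label bands above; the condition flags are illustrative inputs, and the example reproduces the 50 / MODERATE default shown in Section 14:

    def confidence(missing_sources=0, single_inspector=False,
                   contradiction=False, no_recent_data=False):
        score = 100
        score -= 20 * missing_sources           # missing evidence source(s)
        score -= 10 if single_inspector else 0  # only one inspector providing evidence
        score -= 20 if contradiction else 0     # contradiction between evidence sources
        score -= 15 if no_recent_data else 0    # no recent evidence data available
        score = max(score, 0)
        label = ("HIGH" if score >= 85 else "SECURE" if score >= 70
                 else "MODERATE" if score >= 50 else "LOW")
        return score, label

    # 100 - 20 - 10 - 20 = 50 -> MODERATE
    print(confidence(missing_sources=1, single_inspector=True, contradiction=True))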
    PDF SECTION 4 · Override & Limiting Rules

    Some risks cannot be averaged away

    RULE 1 · 5.1.1
    Safeguarding element override

    If element 5.1.1 (Care, welfare & safeguarding including child protection) is rated Weak or Very Weak, overall judgment cannot exceed Weak.

    RULE 2 · 5.1
    Safeguarding indicator override

    If overall score for indicator 5.1 (Health & Safety) is rated Weak, overall judgment cannot exceed Weak.

    RULE 3 · STATUTORY
    Statutory compliance cap

    If the school receives a statutory-compliance flag of No in any required field, overall is capped at the level specified in the compliance rules.

    RULE 4 · RED ALERT
    Incomplete safeguarding alert

    Any incomplete required safeguarding field triggers a red alert and blocks judgment finalisation until the gap is resolved.

    PDF p.55 · Simulation Results Summary

    Both scenarios and the comparative test

    Simulation | Model | Overall | Judgment | Key finding
    Scenario 1 | Weighted · 1.1+1.2 combined | 4.29 | GOOD | All indicators broadly strong — model confirms Good
    Scenario 2 | Weighted · 1.1 & 1.2 separated | 4.01 | GOOD | Separated attainment/progress — still Good, model stable
    Comparative — Weighted | Weighted · high-impact indicators weak | 3.49 | ACCEPTABLE | Core weakness correctly surfaces in final judgment
    Comparative — Regular | Equal-weight · same weak evidence | 3.57 | GOOD | Core weakness hidden by peripheral strengths
    Leadership takeaway (PDF p.55)
    The weighted model is consistent when quality is genuinely good, and sensitive when it should be. The regular model cannot make that distinction — it produces the same broad signal regardless of where performance is concentrated.
    19 · ★ Live Calculation Lab

    Watch every equation compute live.

    Add evidence entries for any element. As you type, the dashboard shows — step by step — how judgment labels become numbers, how inspector ratings average, how sources weight into an element score, how elements weight into an indicator score, and how 17 indicators combine into the overall school judgment. Weighted model and Regular Equal-Weight model are computed in parallel so you can see exactly where they diverge.

    Step 0 · Pick an element to work with

    Which element do you want to score?

    Step 1 · Inspectors log evidence

    Add judgment entries for each evidence source

    Each inspector entry is a judgment label (Outstanding → Very Weak). Multiple inspectors can log entries against the same source — the system averages them. Use the + Add entry buttons; drag the sliders to set the judgment value; remove with ✕.

    Label-to-number scale (PDF p.8): Outstanding=6 · Very Good=5 · Good=4 · Acceptable=3 · Weak=2 · Very Weak=1
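    For example (hypothetical entries): two inspectors log Good (4) and Acceptable (3) against the Lesson Observation source for the same element. The LO source score becomes (4 + 3) / 2 = 3.5, and that average then enters the element formula at whatever weight LO carries in the element's evidence group (40% under Group B).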
    Step 2 · The cascade — every number, every equation

    Weighted Model vs Regular Equal-Weight Model

    Step 3 · Impact on the overall school judgment

    How this one element moves the whole school score

    Your live element score flows through its indicator (weighted by element weight) and then into the overall school score (weighted by indicator weight). Watch both the weighted and regular totals update in real time.

    Weighted Model · School Overall
    Delta from this element:
    Regular Model · School Overall
    Delta from this element:
    Live divergence analysis
    Log some evidence above to see the models diverge.
    Formula reference (PDF Section 2)

    The three linked levels of weighting

    LEVEL 1
    Element score
    Element = Σ (EvidenceSource × EvidenceWeight)

    Each source score (averaged across inspectors) is multiplied by its group's evidence weight.

    LEVEL 2
    Indicator score
    Indicator = Σ (ElementScore × ElementWeight)

    Elements within an indicator carry different weights (summing to 100%).

    LEVEL 3
    Overall school score
    Overall = Σ (IndicatorScore × IndicatorWeight)

    17 indicators combine to produce one number, which maps to the school's final judgment band.

    20 · ★ Evidence Document Library

    Upload evidence. Tag it. Watch the school score compute itself.

    Drop a document — a lesson observation report, a work scrutiny sample, a safeguarding policy, an interview transcript — and link it to a specific element + evidence source + judgment. The simulator immediately folds that judgment into the source average, the element rolls up via group-weighted formulas, the indicator rolls up via element weights, and the school overall recomputes. Every uploaded document persists across sessions.

    Step 1 · Drop a document and link it to an element

    Add new evidence

    Drop a document here
    PDF · DOCX · TXT · CSV · XLSX · PNG/JPG · up to 10 MB per file
    Step 2 · Library status

    Coverage across all 71 elements

    Documents in library
    0
    Elements with evidence
    0 / 71
    Coverage %
    0%
    Step 3 · Live cascade · documents → source → element → indicator → overall

    Auto-computed from evidence library

    Every uploaded document is shown below in its full computational chain. Watch how N inspector judgments become a source average, how source averages combine via group weights into the element score, how elements roll into the indicator, and how each touched indicator contributes to the school overall.

    WEIGHTED · School Overall
    No evidence uploaded yet — using PDF baseline scenario.
    REGULAR · School Overall
    No evidence uploaded yet — using PDF baseline scenario.
    How the cascade works
    Each uploaded document contributes one judgment value to one (element, source) pair. Multiple documents on the same source average together. The source average × source weight (from group rule) → element score. Element score × element weight → indicator score. Indicator score × overall weight → contribution to school overall.
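    A sketch of that aggregation step, assuming an in-memory list of (element, source, judgment) entries; the field layout is illustrative, not the library's actual schema:

    from collections import defaultdict

    entries = [                      # hypothetical uploads
        ("3.1.4", "LO", 3), ("3.1.4", "LO", 4), ("3.1.4", "DOC", 5),
    ]
    by_pair = defaultdict(list)
    for element, source, value in entries:
        by_pair[(element, source)].append(value)

    # documents on the same (element, source) pair average together
    source_avg = {pair: sum(vals) / len(vals) for pair, vals in by_pair.items()}
    print(source_avg)  # {('3.1.4', 'LO'): 3.5, ('3.1.4', 'DOC'): 5.0}

    Each source average then enters the element's group-weighted formula exactly as in the Level 1 equation.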
    Step 4 · Coverage matrix

    Where evidence exists, where it's missing

    Each cell represents one element. Solid coloured cells have at least one document. Hover for details. Click to filter the document table below.

    Step 5 · All documents

    Library contents

    Document | Element | Source | Judgment | Inspector | Uploaded
    No evidence documents in library yet. Upload one above to begin.