93 low-fee schools and a 0.08-point gap that shifts a grade band.
An integrated leadership dashboard combining the low-fee private schools sector analysis with a live weighted-vs-regular scoring model simulator. Every judgment and score recalculates automatically as you adjust weights, scenarios, and evidence source configurations.
3.49 ACCEPTABLE
Classroom-proximate evidence (LO 40%, WS 30%) exposes structural weakness in instructional depth, assessment use, and curriculum adaptation. The weighted model recognises that not all evidence is equally informative.
3.57 GOOD
Equal weighting of all evidence sources averages out structural weaknesses. Interview and document scores compensate for weak classroom evidence. The same school moves to Good.
Internal gradient in the sector
Structural, not geographic
Indian profile strongest
Pastoral & cultural foundations are universal
2.1 Personal Development appears as a strength in 100% of Very Good, 95% of Good, and 86% of Acceptable schools. This is the sector's bedrock — not its differentiator.
Instructional core is the recurring gap
Critical thinking (3.1.5), differentiation (3.1.4), assessment (3.2.x), and leadership (6.1.2) recur as weaknesses in 86–93% of Acceptable schools. This is the red zone the sector is trapped in.
0.08 points · one grade band
A school judged Acceptable when classroom-observable evidence carries differential weight moves up to Good under equal-weight averaging. Evidence-source methodology matters as much as the evidence itself.
Live LO / WS / DOC / INT weighting
Adjust the four evidence-source weights directly — Lesson Observations, Work Samples, Documents, Interviews — and watch the overall school verdict change in real time. Six scenario presets included.
17 indicator cards · element-level calculation
Click any of 17 indicator cards to see the full element-level calculation: evidence chips (LO/WS/DOC/INT), weighted vs regular math, priority flags for Very Weak elements, and band-divergence warnings.
Who are the low-fee schools?
The 93 schools break down by fee band, municipality, and curriculum. The sector is structurally diverse — but the weakness cluster that keeps them at Acceptable is consistent across almost all configurations.
Two fee tiers
| Band | Schools | % |
|---|---|---|
| Very Low | 41 | 44.1% |
| Low | 52 | 55.9% |
Abu Dhabi dominant
| Municipality | # | % |
|---|---|---|
| Abu Dhabi | 54 | 58.1% |
| Al Ain | 36 | 38.7% |
| Al Dhafra | 3 | 3.2% |
MoE largest, Indian strongest
| Curriculum | # | Good+ |
|---|---|---|
| MoE | 41 | 51% |
| Indian (CBSE) | 30 | 80% |
| American | 12 | 58% |
| British | 8 | 38% |
| Other | 2 | 50% |
Sector is concentrated at Acceptable
Indian schools lead outcomes
What separates Very Good from Acceptable?
Comparative consistency across the three dominant judgment bands, showing where strengths cluster and where recurring gaps appear across the framework.
Deep instructional practice
- 100% strong in 1.3 Learning Skills
- 100% strong in 3.1 Teaching
- 86% strong in 3.2 Assessment
- 86% strong in 6.1 Leadership
- Clear use of student data to differentiate
Strong foundations, patchy depth
- 95% strong in 2.1 Personal Development
- 88% strong in 5.1 Health & Safety
- Mixed in 3.1 — teaching solid in 60%
- Weak in 3.2 — assessment use in only 45%
- Leadership often transactional, not strategic
Red zone is instructional
- 90%+ weak in 3.1.4 Differentiation
- 93% weak in 3.1.5 Critical thinking
- 89% weak in 3.2 Assessment
- 91% weak in 4.2 Curriculum adaptation
- 86% weak in 6.1.2 Instructional leadership
Where bands agree — and diverge
Green rows are universally strong across all three bands (low discriminating power). Red rows are the Tier 1 differentiators.
| Indicator | VG | G | A | Gap |
|---|---|---|---|---|
| 2.1 Personal Development | 100% | 95% | 86% | 14pp |
| 5.1 Health & Safety | 100% | 88% | 82% | 18pp |
| 1.3 Learning Skills | 100% | 62% | 22% | 78pp |
| 3.1 Teaching | 100% | 60% | 14% | 86pp |
| 3.2 Assessment | 86% | 45% | 11% | 75pp |
| 4.2 Curriculum Adaptation | 86% | 40% | 9% | 77pp |
| 6.1 Leadership | 86% | 50% | 14% | 72pp |
Differentiating power per indicator
Each subject tests a different aspect of the framework.
Subjects aren't just content areas — they're diagnostic lenses. Arabic SL reveals instructional design failure. Mathematics exposes reasoning pedagogy. Click any subject for the full diagnostic profile.
What each subject is designed to expose
| Subject | Diagnostic Use | Critical Test | Risk |
|---|---|---|---|
| Arabic FL | Reveals whether learning is literal or analytical | Extended writing · standard Arabic fluency | High |
| Arabic SL | Exposes instructional design weaknesses | Communicative speaking vs memorised responses | Very High |
| English | Tests literacy becoming deep comprehension | Extended writing · reading inference | High |
| Mathematics | Diagnoses reasoning pedagogy | Problem-solving in unfamiliar contexts | High |
| Science | Reveals inquiry vs recall | Independent investigation · hypothesis formation | High |
| Social Studies | Tests progression from facts to analysis | Data interpretation · cause-and-effect | Moderate–High |
| Islamic Education | Distinguishes memorisation from interpretation | Tajweed accuracy · evidence use | Moderate |
The secondary cliff and the primary strength.
Phase-level performance reveals where the sector concentrates strength (early phases, pastoral) and where it falls away (secondary depth of learning, curriculum choice).
Strength by phase
Performance drop in secondary phase
Secondary phase performance drops sharply in 1.3, 3.1, 3.2, 4.2, 6.1 — the same Tier 1 cluster.
Strongest phase
Strong in personal development, early literacy, care. Weakness in formal assessment progression.
Solid foundations
Consistent strength in 2.1, 5.1. Emerging weakness in 3.1.4 differentiation.
Transition pressure
Teaching quality variable. Assessment use declines. Critical thinking rarely explicit.
Weakest phase
Depth-of-learning cliff. Poor subject choice, weak progression, inconsistent Arabic.
The red-zone cluster keeping schools at Acceptable.
The master heatmap below shows weakness frequency across all 71 framework elements by judgment band. Red cells are Tier 1 — where intervention produces the biggest movement. Green cells are already universally strong.
Where movement comes from
Differentiation, critical thinking, assessment use, curriculum adaptation, instructional leadership. Intervention here produces the biggest grade-band movement.
Where enabling systems live
Self-evaluation accuracy, parent/community engagement, governance, curriculum design review. Needed for sustained improvement but slower to change.
Already strong — maintenance
Safeguarding, health & safety, personal development, compliance. Universally strong across the sector — keep, don't over-weight.
The five defining features of a Very Good school.
Only 7 schools in the sector reach Very Good. What separates them from Good-tier schools is not one indicator — it's a tightly coupled cluster of five features that appear together.
Pedagogical coherence
Consistent high-quality teaching across subjects and phases. Not isolated pockets — systematic. All teachers observed, not just the strongest.
Assessment drives teaching
Student data genuinely informs next lesson. Not just recorded. Visible in planning, differentiation, and targeted intervention.
Curriculum is adapted, not just delivered
The curriculum plan reflects the school's students — local context, prior attainment, cultural framing. Not a copy-paste from curriculum provider.
Accurate self-evaluation
The school's own judgment of itself matches inspection evidence within one grade band. Honest, granular, evidenced. Not promotional.
Instructional leadership, not managerial
Principal and middle leaders actively shape teaching quality. Regular observation, coaching, feedback loops. Not just compliance monitoring.
All five appear together
No Very Good school stops at four of the five. The cluster is mutually reinforcing — each feature enables the next. An Acceptable school that develops only one of these features drifts back without the others.
Three levels of weighted calculation, running live.
The ADEK Inspection Weighting Model calculates judgments at three nested levels. Evidence feeds into elements. Elements feed into indicators. Indicators feed into the overall school judgment. Each layer applies weights — and override rules can cap the outcome regardless of the math.
Evidence → Element
Each element receives evidence from multiple sources (lesson observation, work scrutiny, documents, interviews, data). Sources are weighted by Group A / B / C rules depending on the element type.
Element → Indicator
Elements within an indicator are not equal. A key element (e.g. 3.1.4 Differentiation) may carry 25% of its indicator while a contextual element carries 5%. Element weights are indicator-specific.
Indicator → Overall
The 17 indicators then feed the overall school judgment, each with its own weight. Overrides (safeguarding, persistent poor teaching) can cap the outcome regardless of the aggregate score.
The engine's logic
- Evidence weighting — Group A/B/C assign different source weights per element type
- Element weighting — not all elements within an indicator are equally impactful
- Indicator weighting — the 17 indicators have different weights on overall judgment
- Override rules — safeguarding failures or persistent weak teaching cap the outcome
- Confidence scoring — missing evidence, single inspector, or contradictions reduce confidence
- Missing-evidence redistribution — when a source is absent, remaining weights rescale proportionally (see the sketch after this list)
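A minimal sketch of that cascade in code, assuming simplified types and the proportional-rescaling rule from the last bullet (all names here are illustrative, not the engine's actual API):

```typescript
// Sketch of the three-level rollup: evidence -> element -> indicator -> overall.
// Scores use the 1-6 judgment scale; weights are percentages per group rule (A/B/C).
type SourceScores = Partial<Record<"LO" | "WS" | "DOC" | "INT", number>>;
type SourceWeights = Record<string, number>;

// Evidence -> Element: weight each present source. Dividing by the sum of
// present weights rescales the remaining weights proportionally, which is
// exactly the missing-evidence redistribution rule.
function elementScore(scores: SourceScores, weights: SourceWeights): number {
  let total = 0;
  let present = 0;
  for (const [source, weight] of Object.entries(weights)) {
    const score = scores[source as keyof SourceScores];
    if (score !== undefined) {
      total += score * weight;
      present += weight;
    }
  }
  return total / present;
}

// Element -> Indicator and Indicator -> Overall are both weighted means.
function weightedMean(items: { score: number; weight: number }[]): number {
  const total = items.reduce((acc, i) => acc + i.score * i.weight, 0);
  const weightSum = items.reduce((acc, i) => acc + i.weight, 0);
  return total / weightSum;
}
```

The regular model is the same computation with every source weight set to 25.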
Equal weighting hides structural weakness
In a regular equal-weight model, strong scores in document evidence (policies, plans) average out weak classroom evidence. The weighted model prioritises classroom-proximate evidence — especially lesson observations and student work scrutiny — because these are what actually demonstrate learning quality.
The same school, on the same evidence, can be Acceptable (3.49) under the weighted model and Good (3.57) under the regular model. Methodology is not neutral.
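To see the mechanism with invented numbers: suppose an element's lesson observations score 2, work samples 3, documents 5, and interviews 5. The equal-weight mean is (2 + 3 + 5 + 5) / 4 = 3.75, a Good-band score; the Group B weighted score is 2 × 0.40 + 3 × 0.30 + 5 × 0.20 + 5 × 0.10 = 3.20, Acceptable. Strong paperwork masks weak teaching, and only the weighted model surfaces it.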
Three groups. Three different weight profiles.
Not every element can be assessed in the same way. The framework defines three evidence groups based on what kind of evidence is most informative for that element type.
Classroom-verified attainment
- Lesson observation — 45%
- Work scrutiny — 35%
- SBA / attainment data — 20%
Applies to PS1 attainment & progress indicators (1.1, 1.2). Per PDF Section 3.
Classroom-proximate
- Lesson observations — 40%
- Work samples — 30%
- Documents — 20%
- Interviews — 10%
Applies to 1.3 and the PS2, PS3, PS4, and 5.2 learning elements — the most frequently used rule.
Policy & systems led
- Documents — 40%
- Interviews — 40%
- Observation / walks — 20%
Applies to PS5.1 Safeguarding and all PS6 leadership elements.
Adjust source scores (1–6) and watch the element score recompute
This mixer uses the Group B weights (40/30/20/10). Pick a score for each evidence source — the weighted element score and judgment band update live.
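A sketch of the mixer's arithmetic, assuming one clean 1–6 score per source (function and variable names are illustrative):

```typescript
// Group B evidence weights from the methodology: LO 40 · WS 30 · DOC 20 · INT 10.
const GROUP_B = { LO: 0.4, WS: 0.3, DOC: 0.2, INT: 0.1 };

// Each argument is a 1-6 judgment value for that evidence source.
function groupBElementScore(lo: number, ws: number, doc: number, interview: number): number {
  return lo * GROUP_B.LO + ws * GROUP_B.WS + doc * GROUP_B.DOC + interview * GROUP_B.INT;
}

// Example: solid documents and interviews, weaker classroom evidence.
const score = groupBElementScore(3, 3, 5, 4); // 3.5, the bottom edge of the Good band
```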
Live weighted vs regular scoring simulator.
Adjust the four evidence-source weights — Lesson Observations, Work Samples, Documents, Interviews — and watch the overall school verdict recompute against the real 17-indicator, 71-element model. Six scenario presets reproduce the comparative tests from the methodology paper.
Configure how each evidence type is counted
The default reflects the PDF methodology: LO 40 · WS 30 · DOC 20 · INT 10. Try a preset or build your own. The Regular model above does not change — it is the flat 25/25/25/25 baseline.
Weighted vs Regular at the indicator level
Where the two models disagree most
Orange bars mark indicators where the weighted model produces a lower score than the regular model. These are the indicators that reveal structural weakness when evidence is weighted properly.
Every indicator score, both models, live
| Indicator | Weight | Weighted | Regular | Δ | Band Change |
|---|---|---|---|---|---|
17 indicator cards · click any card for the full calculation.
Every indicator card shows its weighted score, regular score, and whether the band diverges between the two models. Cards with an orange left border are where methodology changes the verdict. Click any card for the element-level evidence breakdown with LO / WS / DOC / INT chips and priority flags.
The full school judgment, computed live
Every indicator's weighted and regular score, multiplied by its overall weight, summing to the school total. Adjust any indicator weight slider to see the overall recompute. Click any row to dive into that indicator's full equation cascade.
| Code | Indicator | Group | Weight in overall | W. score | R. score | W. contrib. | R. contrib. | Δ band |
|---|---|---|---|---|---|---|---|---|
| TOTAL CONTRIBUTIONS → | — | — | ||||||
| ÷ sum of weights → | — | |||||||
How to read this: Each row shows one indicator's live weighted and regular scores (computed from current element-level evidence values). The contribution columns show score × weight. The TOTAL row sums all contributions and divides by the sum of weights to produce the school overall. Drag any weight slider to redistribute influence and watch the overall band shift.
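In symbols, the TOTAL row computes overall = Σ(scoreᵢ × weightᵢ) / Σ weightᵢ across the 17 indicators. The weighted and regular columns apply the identical formula; they differ only in the element-level source weights that produce each scoreᵢ.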
71 sliders. Live element-level calculation.
The full element-level calculator. 17 accordions for 17 indicators. 71 element sliders (1–6 scale). Every adjustment recomputes: element → indicator → overall, with side-by-side weighted and regular scores.
Same school. Same evidence. Two verdicts.
A side-by-side walk-through of how the weighted and regular models diverge on the core comparative test from the methodology paper. The difference is not a small edge case: it is a full grade band on identical underlying evidence.
Where the 0.08 gap comes from
What each model is saying
Both are mathematically correct. Only the weighted model is diagnostically useful — it tells inspectors and leaders where to intervene.
Four simulation runs from the methodology paper
| Scenario | Weighted | Regular | Δ | Verdict |
|---|---|---|---|---|
| 1 · Strong school | 4.29 | 4.32 | -0.03 | GOOD (both) |
| 2 · 1.1/1.2 split | 4.01 | 4.08 | -0.07 | GOOD (both) |
| 3 · Core-weakness test | 3.49 | 3.57 | -0.08 | ACC vs GOOD |
| 4 · Very weak school | 3.69 | 3.72 | -0.03 | GOOD (both) |
The weighted model diverges from the regular model most sharply at the Good/Acceptable threshold — which is exactly where inspection stakes are highest. At both the top and bottom of the scale, the two models agree.
When the calculation is not final.
The weighted model's output is subject to override rules (hard caps on certain failures) and a separate confidence score that reflects evidence quality. Both run live.
Four hard caps on the outcome
- Safeguarding element override (5.1.1): if Care, welfare & safeguarding including child protection is rated Weak or Very Weak, overall judgment cannot exceed Weak.
- Safeguarding indicator override (5.1): if Health & Safety overall is rated Weak, the overall judgment cannot exceed Weak.
- Statutory compliance cap: if any required statutory-compliance field returns "No", the overall judgment is capped at the level specified in compliance rules.
- Incomplete safeguarding alert: any incomplete required safeguarding field triggers a red alert and blocks judgment finalisation until resolved. (All four caps are sketched in code below.)
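A sketch of how these caps could sit after the aggregate calculation. The rule encoding below is an assumption for illustration; only the four caps themselves come from the model:

```typescript
const WEAK_BAND_MAX = 2.49; // top of the Weak band in the label-value mapping

interface OverrideInputs {
  aggregateScore: number;          // output of the weighted rollup, 1.00-6.00
  safeguardingElement511: number;  // judgment value for element 5.1.1
  healthSafetyIndicator51: number; // judgment value for indicator 5.1
  complianceCap?: number;          // cap from compliance rules, if any field is "No"
  safeguardingIncomplete: boolean; // any required safeguarding field left blank
}

// Returns the capped score, or null when finalisation is blocked.
function applyOverrides(inp: OverrideInputs): number | null {
  if (inp.safeguardingIncomplete) return null; // red alert: block until resolved
  let score = inp.aggregateScore;
  // 5.1.1 rated Weak or Very Weak (value in or below the Weak band) caps the overall.
  if (inp.safeguardingElement511 <= WEAK_BAND_MAX) score = Math.min(score, WEAK_BAND_MAX);
  // 5.1 rated Weak caps the overall (Very Weak treated the same here, a simplification).
  if (inp.healthSafetyIndicator51 <= WEAK_BAND_MAX) score = Math.min(score, WEAK_BAND_MAX);
  if (inp.complianceCap !== undefined) score = Math.min(score, inp.complianceCap);
  return score;
}
```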
Interactive confidence meter
Toggle conditions to see confidence drop in real time.
What happens if an evidence source is absent
Untick any source. The remaining weights rescale proportionally so they sum to 100%. Each absent source also triggers the 20-point missing-evidence confidence deduction listed in the specification section below.
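A sketch of the rescaling rule (names illustrative):

```typescript
// Drop absent sources and rescale the remaining weights to sum to 100.
function redistribute(
  weights: Record<string, number>,
  absent: string[],
): Record<string, number> {
  const remaining = Object.entries(weights).filter(([src]) => !absent.includes(src));
  const sum = remaining.reduce((acc, [, w]) => acc + w, 0);
  return Object.fromEntries(remaining.map(([src, w]) => [src, (w / sum) * 100]));
}

// Example: Group B with no document evidence available.
redistribute({ LO: 40, WS: 30, DOC: 20, INT: 10 }, ["DOC"]);
// -> { LO: 50, WS: 37.5, INT: 12.5 }
```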
Prioritising inspection focus by weighted risk.
A differentiated, risk-based inspection model that weights seven risk domains to prioritise inspection focus for low-fee schools. Adjust the weights below to see the top-10 priority reshuffle.
Adjust weights · priorities rerank live
Ranked by composite risk
How other inspection systems weight evidence.
A comparative view of how six global inspection frameworks balance classroom evidence, systems evidence, data-led evidence, and stakeholder voice. ADEK's weighted model sits closest to Ofsted and KHDA in its classroom-proximate emphasis.
| System | Country | Primary Weight | Approach | Risk-Based? |
|---|---|---|---|---|
| Ofsted | England | Classroom evidence + student work | Deep dive sampling, leadership inquiry | Yes |
| ERO | New Zealand | Leadership & self-review | Evaluation partnership · low-stakes | Partial |
| Inspectie | Netherlands | Outcome data + classroom | Differentiated · proportional | Strong |
| OECD | International | Comparative benchmarks | System review · country-level | Design dependent |
| Singapore MOE | Singapore | Leadership + data | Quality assurance model | No |
| KHDA | Dubai | Classroom + stakeholder voice | Similar structure to ADEK UAE SIF | Yes |
Ofsted · KHDA · ADEK weighted
Prioritise LO + WS because they expose learning reality. Most diagnostically honest. Highest stakes for practice.
Netherlands · Singapore
Use assessment outcomes as primary signal. Efficient but can miss pedagogical fragility in schools where data look acceptable.
ERO (New Zealand)
Self-evaluation and leadership-led inquiry. Low-stakes. Requires a mature evaluation culture — not yet present in most low-fee schools.
A proposed differentiated inspection model for low-fee schools.
A three-stage differentiated model that concentrates inspection effort on the Tier 1 cluster — where movement actually happens — while reducing over-inspection of universally strong Tier 3 elements.
40% of inspection time
Concentrated lesson observation and work scrutiny in Tier 1 elements: 3.1 teaching, 3.2 assessment, 4.2 curriculum adaptation, 6.1 instructional leadership. This is where Acceptable schools actually diverge from Good.
35% of inspection time
Sampled review of Tier 2 elements: 1.3 learning skills, 6.2 self-evaluation accuracy, 6.4 governance. Enough to confirm direction, not enough to duplicate Tier 1 effort.
25% of inspection time
Light-touch compliance verification of Tier 3: safeguarding (required by regulation), health & safety, personal development. Already universally strong — resist over-inspecting.
What the differentiated model requires
- Use the weighted calculation engine live during inspection
- Record evidence by source and by element
- Apply override rules explicitly and document them
- Report confidence score alongside judgment
- Pre-inspection risk assessment based on prior data
- Specialist inspectors for Tier 1 classroom observation
What the model deliberately avoids
- Treating all 71 elements as equally informative
- Over-weighting document evidence when classroom reality diverges
- Relying on interview evidence without triangulation
- Generating single-inspector judgments for high-stakes elements
- Reporting the number alone — always with confidence + override status
- Copy-paste inspection across schools with different profiles
The full model, as specified.
Every parameter on this page is sourced directly from the ADEK Inspection Weighting Model methodology paper. Use the scenario switcher below to re-run the live engine under the PDF's two main simulations. Every score you see elsewhere on this dashboard is computed from this table of element and indicator weights — no hard-coded numbers.
Which scenario should the engine run?
Three source-weighting groups, by indicator family
| Group | Applies to | Evidence sources & weights |
|---|---|---|
| Group A | PS1 · 1.1 & 1.2 (Attainment & Progress) | LO 45% · WS 35% · SBA/Data 20% |
| Group B | 1.3, PS2, PS3, PS4, 5.2 | LO 40% · WS 30% · DOC 20% · INT 10% |
| Group C | PS5.1 & all of PS6 (Safeguarding & Leadership) | OBS/LW 20% · DOC 40% · INT 40% |
Group C deliberately elevates interviews and documents because safeguarding and leadership quality cannot be adequately assessed through classroom observation alone.
All 71 elements — indicator membership and element weight
Expand each indicator below to see the PDF's element-level weight table. Element weights within an indicator always sum to 100%.
Scenario 2 split — attainment and progress separated
| Code | Indicator | Group | Weight |
|---|---|---|---|
Label ↔ value mapping
| Judgment | Value | Score range |
|---|---|---|
| Outstanding | 6 | 5.50 – 6.00 |
| Very Good | 5 | 4.50 – 5.49 |
| Good | 4 | 3.50 – 4.49 |
| Acceptable | 3 | 2.50 – 3.49 |
| Weak | 2 | 1.50 – 2.49 |
| Very Weak | 1 | 1.00 – 1.49 |
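The same mapping as a function, a direct transcription of the table above for use alongside the earlier sketches:

```typescript
// Maps a continuous 1.00-6.00 score to its judgment band per the table above.
function scoreToBand(score: number): string {
  if (score >= 5.5) return "Outstanding";
  if (score >= 4.5) return "Very Good";
  if (score >= 3.5) return "Good";
  if (score >= 2.5) return "Acceptable";
  if (score >= 1.5) return "Weak";
  return "Very Weak";
}

scoreToBand(3.49); // "Acceptable", the weighted comparative verdict
scoreToBand(3.57); // "Good", the regular model's verdict on the same evidence
```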
Start at 100; deduct for evidence quality gaps
| Condition | Deduction |
|---|---|
| Missing evidence source | −20 |
| Only one inspector providing evidence | −10 |
| Contradiction between evidence sources | −20 |
| No recent evidence data available | −15 |
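A sketch of the deduction logic; whether each deduction applies once or per occurrence is an assumption here, not something the table specifies:

```typescript
interface ConfidenceFlags {
  missingSources: number; // count of absent evidence sources
  singleInspector: boolean;
  contradictions: number; // count of cross-source contradictions
  noRecentData: boolean;
}

// Start at 100, apply the table's deductions, floor at 0.
function confidence(flags: ConfidenceFlags): number {
  let c = 100;
  c -= 20 * flags.missingSources;
  if (flags.singleInspector) c -= 10;
  c -= 20 * flags.contradictions;
  if (flags.noRecentData) c -= 15;
  return Math.max(0, c);
}
```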
Confidence label bands
Some risks cannot be averaged away
If element 5.1.1 (Care, welfare & safeguarding including child protection) is rated Weak or Very Weak, overall judgment cannot exceed Weak.
If overall score for indicator 5.1 (Health & Safety) is rated Weak, overall judgment cannot exceed Weak.
If the school receives a statutory-compliance flag of No in any required field, overall is capped at the level specified in the compliance rules.
Any incomplete required safeguarding field triggers a red alert and blocks judgment finalisation until the gap is resolved.
Both scenarios and the comparative test
| Simulation | Model | Overall | Judgment | Key finding |
|---|---|---|---|---|
| Scenario 1 | Weighted · 1.1+1.2 combined | 4.29 | GOOD | All indicators broadly strong — model confirms Good |
| Scenario 2 | Weighted · 1.1 & 1.2 separated | 4.01 | GOOD | Separated attainment/progress — still Good, model stable |
| Comparative — Weighted | Weighted · high-impact indicators weak | 3.49 | ACCEPTABLE | Core weakness correctly surfaces in final judgment |
| Comparative — Regular | Equal-weight · same weak evidence | 3.57 | GOOD | Core weakness hidden by peripheral strengths |
Watch every equation compute live.
Add evidence entries for any element. As you type, the dashboard shows — step by step — how judgment labels become numbers, how inspector ratings average, how sources weight into an element score, how elements weight into an indicator score, and how 17 indicators combine into the overall school judgment. Weighted model and Regular Equal-Weight model are computed in parallel so you can see exactly where they diverge.
Which element do you want to score?
Add judgment entries for each evidence source
Each inspector entry is a judgment label (Outstanding → Very Weak). Multiple inspectors can log entries against the same source — the system averages them. Use the + Add entry buttons; drag the sliders to set the judgment value; remove with ✕.
Weighted Model vs Regular Equal-Weight Model
How this one element moves the whole school score
Your live element score flows through its indicator (weighted by element weight) and then into the overall school score (weighted by indicator weight). Watch both the weighted and regular totals update in real time.
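With invented numbers: an element carrying 25% of its indicator moves that indicator by 0.25 points for every one-point change in the element score; if the indicator carries 8% of the overall weight, the same one-point swing shifts the school total by 0.25 × 0.08 = 0.02 points. Small in isolation, which is why the Tier 1 cluster matters: it moves several such elements at once.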
The three linked levels of weighting
Each source score (averaged across inspectors) is multiplied by its group's evidence weight.
Elements within an indicator carry different weights (summing to 100%).
17 indicators combine to produce one number, which maps to the school's final judgment band.
Upload evidence. Tag it. Watch the school score compute itself.
Drop a document — a lesson observation report, a work scrutiny sample, a safeguarding policy, an interview transcript — and link it to a specific element + evidence source + judgment. The simulator immediately folds that judgment into the source average, the element rolls up via group-weighted formulas, the indicator rolls up via element weights, and the school overall recomputes. Every uploaded document persists across sessions.
Add new evidence
Coverage across all 71 elements
Auto-computed from evidence library
Every uploaded document is shown below in its full computational chain. Watch how N inspector judgments become a source average, how source averages combine via group weights into the element score, how elements roll into the indicator, and how each touched indicator contributes to the school overall.
Where evidence exists, where it's missing
Each cell represents one element. Solid coloured cells have at least one document. Hover for details. Click to filter the document table below.
Library contents
| Document | Element | Source | Judgment | Inspector | Uploaded | ||
|---|---|---|---|---|---|---|---|
| No evidence documents in library yet. Upload one above to begin. | |||||||