Part 1 · CSEE

22 years of Tanzania's Form 4 results: what the numbers say

2003 – 2025  ·  enrolled secondary schools  ·  8.5 million candidates

School growth (2003 → 2025)
Candidate growth (2003 → 2025)
consecutive years of rising pass rates

Pass rate: Division I–IV out of candidates who sat. Absent and withheld excluded.

Chapter 1

How did 61K students become 570K?

In 2003, roughly 61K students sat Tanzania's Form 4 examinations. In 2025, that number was 570K, a 9× expansion over 22 years. The school count grew in parallel: from 908 schools in 2003 to 5,864 in 2025.

This chapter traces the raw growth. The question that follows every other chapter is: did the quality of outcomes scale with the volume?

Candidates and schools per year

Sources: TETEA (2003–2021) and NECTA (2022–2025), two separate sources with slightly different collection methods (see Chapter 2 for the 2022 transition note). 2014 note: the 2014 bar is lower than neighboring years (245K candidates vs 365K in 2013). This is confirmed real data: two independent scrapers arrived at the same count, and no schools are missing student rows. The lower figure reflects a genuine anomaly year, with 29% Division 0 and lower average school enrollment, not a data gap.

What this raises

  • The school count grew 6.5× but the candidate count grew 9.4×; the average school's candidate cohort is growing. What does that mean for learning conditions?
  • Expansion was not steady: there are visible acceleration points. What policy changes drove them?
Chapter 2

What changed in 2012?

The pass rate did not drop in 2013. It bottomed out in 2012, at 34.5%, the worst year in the 22-year record. By 2013, it had already recovered to 60.8%. The real story is a collapse driven by rapid expansion from 2008 to 2012, followed by a long, sustained recovery to 94.9% in 2025.

Division composition by year

Stacked bars show the proportion of each division. Division 0 appears from 2013 onwards. NECTA data starts 2022; minor methodology differences may affect comparability.

Pass rate (Division I–IV ÷ students who sat)

Pass rate calculated from students who sat the exam (excluding absent and withheld candidates). Pre-2013 values are near 100% because Division 0 was not yet recorded; they are not comparable to post-2013 figures.
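The pass-rate definition used throughout this report can be sketched as a small function. This is an illustrative sketch, not the report's actual pipeline; the field names and the sample counts are hypothetical.

```python
def pass_rate(divisions):
    """Pass rate = Division I-IV candidates / candidates who sat.

    `divisions` maps a division label to a candidate count. Absent and
    withheld candidates never enter the dict, so they are excluded
    from the denominator, matching the definition above.
    """
    passed = sum(divisions.get(d, 0) for d in ("I", "II", "III", "IV"))
    sat = passed + divisions.get("0", 0)  # Division 0 = sat but failed
    return passed / sat if sat else None

# Hypothetical 2012-style year: Division 0 dominates the distribution.
year_2012 = {"I": 1_500, "II": 4_000, "III": 9_000, "IV": 25_000, "0": 72_000}
rate = pass_rate(year_2012)  # roughly 0.35, in the range of the 2012 low
```

Note that before 2013 the `"0"` key would simply be absent, which is exactly why pre-2013 rates computed this way sit near 100% and are not comparable.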

Interpretation note (inference, not raw data)

What the data shows: Failure rates rose from under 12% in 2003–2007 to 65.5% by 2012 as school enrollment nearly tripled. The system then recovered steadily, with the failure rate falling to 5% by 2025.

What this might mean: Rapid school expansion under the Secondary Education Development Programme (SEDP) (2007–2012) outpaced teacher supply and infrastructure. Recovery since 2013 may reflect policy interventions, but which factor drove how much is not determinable from this data alone.

Alternative explanations: Cohort composition shifts, examination difficulty changes, or grading methodology differences could each partially explain the pattern.

What this raises

  • Failure rates peaked at 65.5% in 2012, during the height of rapid expansion. By 2025 they are at 5%. What changed in the system between 2012 and 2025?
  • The recovery has been unbroken for 13 years. Is it structural improvement, or is the exam getting easier?
  • Division I rates also improved: from 1.3% in 2012 to 10.2% in 2025. Is the top end genuinely stronger, or just less crowded at the bottom?
Chapter 3

When did girls quietly take over?

For more than a decade, boys were the majority of CSEE candidates. In 2013, 53.8% of secondary school candidates were male. Then, between 2014 and 2015, the numbers crossed. By 2025, 53.3% of all candidates are female, a sustained reversal that has held for ten years.

Whether this reflects policy changes, social shifts, or both is beyond what the data alone can determine. What the data shows clearly is the crossover and what happened to pass rates on each side of it.

Male vs female candidates per year

Secondary school candidates with sex recorded. Open-centre candidates excluded. 67 rows with unknown sex omitted (<0.001%).

Pass rate by gender, annual trend

Pass rate = Division I–IV ÷ students who sat (excluding absent/withheld). Pre-2013 pass rates are not comparable (see Chapter 2 data note).

What this raises

  • The crossover happened around 2015, the same year the Free Education Policy is documented to have expanded access. [Inference: whether the policy drove the gender shift requires external evidence]
  • By 2025, pass rates differ slightly between boys and girls. Is that gap closing, widening, or stable?
  • Are there regions where the gender crossover happened earlier or later than the national average?
Chapter 4

Which schools have stayed at the top for 22 years?

Out of 5,864 secondary schools active in 2025, only 50 have appeared in 20 or more consecutive years of data. Among them, a small group has maintained extraordinary Division I rates, not in one year but consistently, across administrations, across the 2013 inflection, across everything.

These are not just top-performers in a single year. They are institutions that have held their standard for two decades.

Top 20 schools by average Division I rate (20+ years of data)

Sparklines show Division I rate per year across all recorded years. Gaps indicate years without data.
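The "20+ consecutive years, ranked by average Division I rate" filter behind this table could be implemented along these lines. The record shape, function name, and sample data are all assumptions for illustration, not the report's actual code.

```python
def consistent_top_schools(records, min_years=20, top_n=20):
    """Rank schools by mean Division I rate, keeping only schools with
    `min_years` or more *consecutive* years of data.

    `records` is a list of (school, year, div1_rate) tuples, a
    hypothetical flattening of the per-school-year dataset.
    """
    by_school = {}
    for school, year, rate in records:
        by_school.setdefault(school, {})[year] = rate

    def longest_run(years):
        # Length of the longest streak of consecutive calendar years.
        ys = sorted(years)
        best = run = 1
        for a, b in zip(ys, ys[1:]):
            run = run + 1 if b == a + 1 else 1
            best = max(best, run)
        return best

    ranked = [
        (school, sum(rates.values()) / len(rates))
        for school, rates in by_school.items()
        if longest_run(rates) >= min_years
    ]
    ranked.sort(key=lambda t: -t[1])
    return ranked[:top_n]
```

Averaging over all recorded years, as here, is what makes consistency matter: one spectacular year cannot outrank two decades of steady results.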

What this raises

  • Several schools maintained near-perfect Division I rates even through the 2013 national inflection. What made them resilient?
  • Consistency and absolute performance may diverge: a school with 70% Div I every year outranks one with 99% one year and 40% the next. Is that the right way to rank?
  • Are the consistently top schools clustered in specific regions, or distributed nationally?
Chapter 5

Does your region shape your results?

Tanzania has 31 administrative regions, including Zanzibar. Over 22 years, the data shows persistent gaps in Division I rates between regions, gaps that have not closed despite national expansion. This chapter ranks regions by their long-run average Division I rate, then looks at whether the rankings are stable or shifting over time.

Data note: Zanzibar regions

Zanzibar (Pemba and Unguja islands) has its own examination authority (KARSS) and only a small subset of Zanzibar schools appear in this NECTA dataset. Division I rates for Zanzibar regions reflect that limited sample and should not be interpreted as representative of Zanzibar education outcomes overall.

Regions ranked by average Division I rate (all available years)

Showing mainland Tanzania regions only. Division I rate = schools' combined Div I students ÷ students who sat.

Top 3 mainland regions: Division I rate trend (2003–2025)

Top 3 regions by average Division I rate across all years. * Zanzibar regions and open-centre candidates excluded.

What this raises

  • The regional gap appears persistent. Are top regions pulling further ahead, or is there convergence?
  • Within underperforming regions, are there individual schools that beat the regional average consistently? (The data has them.)
  • Tanzania's region count grew 2012–2015 as new regions were created. How does boundary change affect historical comparisons?
Chapter 6

After 22 years and 8.5 million students: is the gap closing?

The previous chapters told the system story: expansion, the 2013 shift, the gender reversal, resilient schools, regional inequality. This chapter asks the harder question: what does the data look like at the very edges?

Three sections. The schools that produced a perfect Division I class. The schools that have struggled persistently. And the individuals who defied their surroundings.

1. The perfect class

Across 22 years and 8.5 million candidates, a small number of school-years produced a result where every single student passed in Division I (minimum class size of 5). These are the data's ceiling moments.

Schools that produced an all-Division-I class

Ranked by class size (largest cohorts first). School years with fewer than 5 students excluded.
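A filter for these all-Division-I school-years might look like the sketch below. The mapping shape and sample data are hypothetical, assumed only for illustration.

```python
def perfect_classes(school_years, min_size=5):
    """Find school-years where every candidate earned Division I.

    `school_years` maps (school, year) -> {division: count}, a
    hypothetical shape for the per-school-year results.
    """
    hits = []
    for (school, year), divs in school_years.items():
        total = sum(divs.values())
        if total >= min_size and divs.get("I", 0) == total:
            hits.append((school, year, total))
    hits.sort(key=lambda t: -t[2])  # largest cohorts first
    return hits
```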

2. The persistent floor

At the other end: some schools have maintained an average Division 0 rate above 30% over 10 or more years. The framing here is systemic, not individual. These results reflect the conditions these schools operate in, not the character of their students or teachers.

Schools with sustained Division 0 rates above 30% (10+ years)

Average Division 0 rate calculated over all years the school appears in the dataset.
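The "sustained above 30% over 10+ years" criterion could be expressed as a simple aggregation, sketched here with an assumed record shape:

```python
def persistent_floor(records, min_years=10, threshold=0.30):
    """Schools whose average Division 0 rate exceeds `threshold`
    across `min_years` or more years of data.

    `records` is a hypothetical list of (school, year, div0_rate).
    """
    by_school = {}
    for school, year, div0_rate in records:
        by_school.setdefault(school, []).append(div0_rate)
    return {
        school: sum(rates) / len(rates)
        for school, rates in by_school.items()
        if len(rates) >= min_years and sum(rates) / len(rates) > threshold
    }
```

Averaging over every year the school appears, as the caption describes, prevents a single bad year from placing a school on this list.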

3. The needle

The most human data points in 8.5 million records: school-years where exactly one student passed in Division I while 80% or more of their classmates received Division 0. These candidates are not merely statistical outliers; they are people who sat in the same classroom and achieved a categorically different result.
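The "needle" definition is precise enough to state as a filter. As before, the mapping shape and example data are assumptions for illustration:

```python
def needle_cases(school_years, div0_floor=0.80):
    """School-years where exactly one candidate reached Division I
    while `div0_floor` (80%) or more of the class got Division 0.

    `school_years` maps (school, year) -> {division: count}.
    """
    hits = []
    for (school, year), divs in school_years.items():
        total = sum(divs.values())
        if (total
                and divs.get("I", 0) == 1
                and divs.get("0", 0) / total >= div0_floor):
            hits.append((school, year, total))
    return hits
```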

4. The perfect score

Division I is subdivided by aggregate score: lower aggregates mean better grades across subjects. An aggregate of 7 (one point per subject across 7 subjects) is the highest performance possible within Division I. In 22 years and 8.5 million candidates, only one school has achieved a class where every single student scored aggregate 7.

What this raises

  • The system grew 9.4× in candidate volume. Did the number of perfect-class school-years grow proportionally, or did scale dilute excellence?
  • The schools at the persistent bottom are real institutions with real students. What resource gap explains 30%+ Division 0 rates sustained over a decade?
  • The needle students passed in schools where almost everyone else failed. What does that suggest about the role of individual factors vs. systemic ones?
Chapter 7

Does class size predict your chances?

NECTA sets a threshold of 40 students for a school to be classified as independently viable. But does being above or below that line correlate with outcomes? Across 22 years, smaller schools (fewer than 40 students sitting) and larger schools track differently on Division I rates. The pattern is not always what you might expect.

Average Division I rate: small vs large schools (2003–2025)

Small = fewer than 40 students sat; large = 40 or more. Average is school-level (each school weighted equally, not by size). * Zanzibar regions and open-centre candidates excluded.
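The equal-weighting choice in this caption is the important design decision, so here is how the comparison might be computed. Field names and sample counts are hypothetical:

```python
def small_vs_large(school_years, year, cutoff=40):
    """Average Division I rate for small vs large schools in one year.

    Each school contributes exactly one rate (equal weighting), the
    school-level average described above, rather than pooling
    candidates, which would let big schools dominate.
    `school_years` maps (school, year) -> {division: count}.
    """
    buckets = {"small": [], "large": []}
    for (school, yr), divs in school_years.items():
        if yr != year:
            continue
        sat = sum(divs.values())
        if sat == 0:
            continue
        rate = divs.get("I", 0) / sat
        buckets["small" if sat < cutoff else "large"].append(rate)
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}
```

A candidate-weighted version would answer a different question ("what rate does the average student in a small school see?"); the school-weighted version used here answers "what rate does the average small school achieve?".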

What this raises

  • Small schools have small denominators, so a single strong or weak cohort can swing their rates sharply. Is the small-school advantage consistent, or driven by a few elite seminaries?
  • If smaller schools outperform, what does that suggest about the optimal class size for Form 4 outcomes in Tanzania?
  • The 40-student NECTA threshold is administrative. Does the data suggest a different natural breakpoint?
Chapter 8

Does school type predict outcomes?

Single-sex schools dominated the top of the Chapter 4 ranking. But is that specific to those elite institutions, or is there a broader pattern? Each school in this dataset is classified by its actual student sex distribution across all 22 years: schools where fewer than 2% of students are the minority sex are classified as boys-only or girls-only; all others are coed. This catches seminaries (all-male by enrollment) alongside formally designated single-sex schools.
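The 2% classification rule described above is mechanical enough to sketch directly. The function name and inputs are hypothetical; the thresholds match the text:

```python
def classify_school(male_total, female_total, threshold=0.02):
    """Classify a school by its pooled sex distribution across all years.

    If the minority sex is under `threshold` (2%) of all candidates,
    the school counts as single-sex; everything else is coed. This is
    what catches seminaries that are all-male by enrollment rather
    than by formal designation.
    """
    total = male_total + female_total
    if total == 0:
        return "unknown"
    if female_total / total < threshold:
        return "boys-only"
    if male_total / total < threshold:
        return "girls-only"
    return "coed"
```

For example, a school with 1,000 male and 5 female candidates over 22 years classifies as boys-only, since 5/1,005 is well under 2%.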

Average Division I rate by school type (2003–2025)

School type derived from historical enrollment sex distribution (2% threshold). Coed line shown for scale; single-sex schools operate at a structurally different level. * Zanzibar regions and open-centre candidates excluded.

What this raises

  • The gap between single-sex and coed schools is large. How much of it is explained by the fact that most top-performing schools happen to be single-sex, versus single-sex structure itself driving outcomes?
  • Boys-only and girls-only schools both outperform coed by a wide margin. Does the gap between boys-only and girls-only change over time?
  • Most single-sex schools in this dataset are faith-based boarding schools. Is the variable school type, or boarding, or religious mission, or small class size?