2003 – 2025 · enrolled secondary schools · 8.5 million candidates
Pass rate: Division I–IV out of candidates who sat. Absent and withheld excluded.
In 2003, roughly 61K students sat Tanzania's Form 4 examinations (the Certificate of Secondary Education Examination, CSEE). By 2025, that number was 570K, a roughly 9× expansion over 22 years. The school count grew in parallel: from 908 schools in 2003 to 5,864 in 2025.
This chapter traces the raw growth. The question that follows every other chapter is: did the quality of outcomes scale with the volume?
Sources: TETEA (2003–2021) and NECTA (2022–2025), two separate sources with slightly different collection methods (see Chapter 2 for the 2022 transition note). 2014 note: the 2014 bar is lower than neighboring years (245K candidates vs 365K in 2013). This is confirmed real data: two independent scrapers arrived at the same count, and no schools are missing student rows. The lower figure reflects a genuine anomaly year (29% Division 0 and lower average school enrollment), not a data gap.
What this raises
The pass rate did not drop in 2013. It bottomed out in 2012, at 34.5%, the worst year in the 22-year record. By 2013, it had already recovered to 60.8%. The real story is a collapse driven by rapid expansion from 2008 to 2012, followed by a long, sustained recovery to 94.9% in 2025.
Stacked bars show the proportion of each division. Division 0 appears from 2013 onwards. NECTA data starts 2022; minor methodology differences may affect comparability.
Pass rate calculated from students who sat the exam (excluding absent and withheld candidates). Pre-2013 values are near 100% because Division 0 was not yet recorded; they are not comparable to post-2013 figures.
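The pass-rate definition above can be sketched in a few lines. This is a minimal illustration, not the report's actual pipeline; the division labels and the sample counts are hypothetical (the counts are chosen to land near the 2012 trough mentioned in this chapter).

```python
def pass_rate(divisions: dict) -> float:
    """Pass rate as defined in this report: Division I-IV over candidates
    who sat. 'absent' and 'withheld' are excluded from the denominator;
    Division 0 counts as sat but failed."""
    sat = sum(n for k, n in divisions.items() if k not in ("absent", "withheld"))
    passed = sum(divisions.get(d, 0) for d in ("I", "II", "III", "IV"))
    return 100.0 * passed / sat if sat else 0.0

# Illustrative year: 391 sat, 135 passed, 9 absent (excluded).
year = {"I": 5, "II": 10, "III": 20, "IV": 100, "0": 256, "absent": 9}
print(round(pass_rate(year), 1))  # 34.5
```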
Inference: see note
What the data shows: Failure rates rose from under 12% in 2003–2007 to 65.5% by 2012 as school enrollment nearly tripled. The system then recovered steadily, with the failure rate falling to 5% by 2025.
What this might mean: Rapid school expansion under the Secondary Education Development Programme (SEDP) (2007–2012) outpaced teacher supply and infrastructure. Recovery since 2013 may reflect policy interventions, but which factor drove how much is not determinable from this data alone.
Alternative explanations: Cohort composition shifts, examination difficulty changes, or grading methodology differences could each partially explain the pattern.
What this raises
For more than a decade, boys were the majority of CSEE candidates. In 2013, 53.8% of secondary school candidates were male. Then, between 2014 and 2015, the numbers crossed. By 2025, 53.3% of all candidates were female, a sustained reversal that has now held for ten years.
Whether this reflects policy changes, social shifts, or both is beyond what the data alone can determine. What the data shows clearly is the crossover and what happened to pass rates on each side of it.
Secondary school candidates with sex recorded. Open-centre candidates excluded. 67 rows with unknown sex omitted (<0.001%).
Pass rate = Division I–IV ÷ students who sat (excluding absent/withheld). Pre-2013 pass rates are not comparable (see Chapter 2 data note).
What this raises
Out of 5,864 secondary schools active in 2025, only 50 have appeared in 20 or more consecutive years of data. Among them, a small group has maintained extraordinary Division I rates, not in one year but consistently, across administrations, across the 2013 inflection, across everything.
These are not just top-performers in a single year. They are institutions that have held their standard for two decades.
Sparklines show Division I rate per year across all recorded years. Gaps indicate years without data.
What this raises
Tanzania has 31 administrative regions, including Zanzibar. Over 22 years, the data shows persistent gaps in Division I rates between regions, gaps that have not closed despite national expansion. This chapter ranks regions by their long-run average Division I rate, then looks at whether the rankings are stable or shifting over time.
Data note: Zanzibar regions
Zanzibar (Pemba and Unguja islands) has its own examination authority (KARSS), and only a small subset of Zanzibar schools appears in this NECTA dataset. Division I rates for Zanzibar regions reflect that limited sample and should not be interpreted as representative of Zanzibar education outcomes overall.
Showing mainland Tanzania regions only. Division I rate = schools' combined Div I students ÷ students who sat.
Top 3 regions by average Division I rate across all years. * Zanzibar regions and open-centre candidates excluded.
What this raises
The previous chapters told the system story: expansion, the 2013 shift, the gender reversal, resilient schools, regional inequality. This chapter asks the harder question: what does the data look like at the very edges?
Three sections. The schools that produced a perfect Division I class. The schools that have struggled persistently. And the individuals who defied their surroundings.
Across 22 years and 8.5 million candidates, … school-years produced a result where every single student passed in Division I (minimum 5 students). These are the data's ceiling moments.
Ranked by class size (largest cohorts first). School years with fewer than 5 students excluded.
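The filter described above (every student in Division I, minimum class size 5, ranked by cohort size) can be expressed directly. A sketch with hypothetical record fields, not the report's actual schema:

```python
def perfect_division_one(records, min_size=5):
    """School-years where everyone who sat earned Division I,
    with at least min_size students; largest cohorts first."""
    hits = [r for r in records
            if r["sat"] >= min_size and r["div_i"] == r["sat"]]
    return sorted(hits, key=lambda r: r["sat"], reverse=True)

rows = [
    {"school": "A", "year": 2019, "sat": 42, "div_i": 42},
    {"school": "B", "year": 2019, "sat": 4,  "div_i": 4},   # below minimum size
    {"school": "C", "year": 2021, "sat": 55, "div_i": 55},
]
print([r["school"] for r in perfect_division_one(rows)])  # ['C', 'A']
```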
At the other end: … schools have maintained an average Division 0 rate above 30% over 10 or more years. The framing here is systemic, not individual. These results reflect the conditions these schools operate in, not the character of their students or teachers.
Average Division 0 rate calculated over all years the school appears in the dataset.
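The persistent-struggle criterion (average Division 0 rate above 30% across 10 or more years of appearances) reduces to a group-and-average. A sketch with hypothetical field names:

```python
from collections import defaultdict

def persistently_struggling(records, min_years=10, cutoff=0.30):
    """Schools whose mean Division 0 rate, over all years they appear,
    exceeds the cutoff, requiring at least min_years of data."""
    rates = defaultdict(list)
    for r in records:
        if r["sat"]:
            rates[r["school"]].append(r["div_0"] / r["sat"])
    return sorted(s for s, rs in rates.items()
                  if len(rs) >= min_years and sum(rs) / len(rs) > cutoff)

rows = ([{"school": "P", "sat": 100, "div_0": 40} for _ in range(12)]    # 40% over 12 years
        + [{"school": "Q", "sat": 100, "div_0": 50} for _ in range(3)])  # too few years
print(persistently_struggling(rows))  # ['P']
```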
The most human data points in 8.5 million records: school-years where exactly one student passed in Division I while 80% or more of their classmates received Division 0. These candidates are not statistical outliers; they are people who sat in the same classroom and achieved a categorically different result.
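The lone-achiever filter above (exactly one Division I while 80% or more of classmates received Division 0) is a simple predicate over school-years. A sketch, again with hypothetical fields:

```python
def lone_division_one(records):
    """School-years with exactly one Division I pass and >= 80% Division 0."""
    return [r for r in records
            if r["div_i"] == 1 and r["sat"] > 0
            and r["div_0"] / r["sat"] >= 0.80]

sample = [
    {"school": "X", "year": 2016, "sat": 50, "div_i": 1, "div_0": 45},  # 90% Div 0
    {"school": "Y", "year": 2016, "sat": 50, "div_i": 1, "div_0": 20},  # only 40% Div 0
    {"school": "Z", "year": 2016, "sat": 50, "div_i": 3, "div_0": 46},  # three Div I
]
print([r["school"] for r in lone_division_one(sample)])  # ['X']
```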
Division I is subdivided by aggregate score: lower aggregates mean better grades across subjects. An aggregate of 7 (one point per subject across 7 subjects) is the highest performance possible within Division I. In 22 years and 8.5 million candidates, only one school has achieved a class where every single student scored aggregate 7.
What this raises
NECTA sets a threshold of 40 students for a school to be classified as independently viable. But does being above or below that line correlate with outcomes? Across 22 years, smaller schools (fewer than 40 students sitting) and larger schools track differently on Division I rates. The pattern is not always what you might expect.
Small = fewer than 40 students sat; large = 40 or more. Average is school-level (each school weighted equally, not by size). * Zanzibar regions and open-centre candidates excluded.
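The small-vs-large comparison can be sketched as a school-level average: each school-year contributes one rate, weighted equally rather than by cohort size, as the caption specifies. Field names and sample figures are hypothetical:

```python
def mean_div_i_by_size(records, threshold=40):
    """Mean Division I rate for small (< threshold sat) vs large schools,
    each school-year weighted equally (not by number of students)."""
    small, large = [], []
    for r in records:
        if r["sat"] == 0:
            continue
        rate = r["div_i"] / r["sat"]
        (small if r["sat"] < threshold else large).append(rate)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(small), mean(large)

rows = [
    {"sat": 20,  "div_i": 4},   # small school, 20% Division I
    {"sat": 200, "div_i": 10},  # large school, 5%
    {"sat": 60,  "div_i": 9},   # large school, 15%
]
small_avg, large_avg = mean_div_i_by_size(rows)
print(round(small_avg, 2), round(large_avg, 2))  # 0.2 0.1
```

Equal weighting matters here: averaging over students instead would let a few very large schools dominate the "large" line.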
What this raises
Single-sex schools dominated the top of the Chapter 4 ranking. But is that specific to those elite institutions, or is there a broader pattern? Each school in this dataset is classified by its actual student sex distribution across all 22 years: schools where fewer than 2% of students are the minority sex are classified as boys-only or girls-only; all others are coed. This catches seminaries (all-male by enrollment) alongside formally designated single-sex schools.
School type derived from historical enrollment sex distribution (2% threshold). Coed line shown for scale; single-sex schools operate at a structurally different level. * Zanzibar regions and open-centre candidates excluded.
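The 2% classification rule described above comes down to pooling each school's enrollment by sex across all years and checking the minority share. A sketch, with hypothetical inputs:

```python
def classify_school(total_male: int, total_female: int, threshold=0.02) -> str:
    """Classify a school from its pooled enrollment across all years:
    minority sex under the threshold -> single-sex, otherwise coed."""
    total = total_male + total_female
    if total == 0:
        return "unknown"
    if total_female / total < threshold:
        return "boys-only"
    if total_male / total < threshold:
        return "girls-only"
    return "coed"

print(classify_school(980, 5))   # seminary-style all-male enrollment -> boys-only
print(classify_school(510, 490)) # coed
```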
What this raises