As you may have read, the 2013 National Assessment of Educational Progress (NAEP) results were released this week, showing that nationwide:

  • For 8th grade: the average math score increased one point and the average reading score increased three points since 2011.
  • For 4th grade: the average math score increased one point, but reading scores remained flat.

The announcement of the 2013 NAEP scores raises two questions:

  1. What is the NAEP and why should we care about NAEP scores?
  2. How do the NAEP scores relate to state test scores?

A 2010 report from the Center on Education Policy (CEP), State Test Score Trends through 2008-09, Part 1: Rising Scores on State Tests and NAEP, answers both questions.

What is the NAEP and why should we care about NAEP scores?

Often called “the nation’s report card,” NAEP is overseen by the U.S. Department of Education and is designed to track the progress of U.S. students in key subjects at the national and state levels.

NAEP encompasses two assessment programs. The main NAEP assessment reports national results at grades 4, 8, and 12 and state-by-state results at grades 4 and 8, including trends from the 1990s. The main NAEP is administered every two years in reading and math and less often in other subjects. The other NAEP assessment program, the long-term trend NAEP, is given every four years in reading and math and reports only national results going back to the 1970s.

NAEP differs from state tests in several important respects:

  • Samples of students versus all students. NAEP assessments are designed to be administered periodically to representative samples of students in selected schools within each state, rather than annually to virtually all students in a state, as state assessments are. Each NAEP participant takes only a portion of the larger assessment instead of the entire test. Consequently, NAEP cannot produce scores for individual students or schools.
  • Different content, format, and administration. NAEP differs from state tests—to varying degrees, depending on the state—in the content assessed, the test question formats, the rigor of the achievement levels, the testing environment, and other features. In addition, state tests are typically administered by students’ own teachers, while NAEP is administered by independent test proctors.
  • Different standards for content. While state tests are designed to measure how well students have learned the knowledge and skills embodied in each state’s academic content standards, NAEP is not deliberately aligned to any state’s standards. Rather, NAEP’s content is based on frameworks developed by the National Assessment Governing Board, whose members are appointed by the U.S. Secretary of Education.
  • Different “proficiency” definitions. The term “proficient” often means fundamentally different things on state tests and NAEP. The NAEP definition of proficient is aspirational, signaling where students should be in a subject area. Because state tests are used for high-stakes accountability purposes, states are under pressure to set realistic definitions of proficiency that take into account students’ current levels of achievement. State definitions of proficiency vary; while some are more aspirational than others, most are less ambitious than the NAEP definition. The report goes on to state that “proficient” on most state tests is not really comparable to the proficient level on NAEP. Rather, it is more appropriate to compare the percentage scoring at or above the proficient level on state tests with the percentage scoring at or above the “basic” level on NAEP.
  • High stakes and low stakes. NAEP scores are not tied to specific consequences for individual students, teachers, schools, or districts, as state test scores are.

In light of these differences, it is not surprising that state tests and NAEP sometimes produce different results. To see your state’s performance on the 2013 NAEP math and reading tests, go to NAEP state profiles.