
Appendix A. Guide to Sources

The indicators in this report present data from a variety of sources. Brief descriptions of these sources and their data collections and data collection methods are presented below, grouped by sponsoring organization. Most of these sources are federal surveys and many are conducted by the National Center for Education Statistics (NCES).

The data were collected using a variety of research methods, including surveys of a universe (such as all colleges), surveys of a sample, and compilations of administrative records.

National Center for Education Statistics (NCES)

Common Core of Data

The Common Core of Data (CCD) is NCES’s primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.

The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, Department of Defense (DoD) dependents schools, the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts, data on students and staff, and fiscal data. The general school and district descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the Department of Education’s Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from state education agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective local education agencies (LEAs) through established administrative records systems within their state or jurisdiction.

Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures. In addition, each year SEAs are given the opportunity to revise their state-level aggregates from the previous survey cycle.

The CCD survey consists of five components: The Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), and the School District Finance Survey (F-33).

Public Elementary/Secondary School Universe Survey

The Public Elementary/Secondary School Universe Survey includes all public schools providing education services to prekindergarten (preK), kindergarten, grades 1–13, and ungraded students. Grade 13 designates high school students who are enrolled in programs where they can earn college credit in an extended high school environment, or career and technical education (CTE) students in a high school program that continues beyond grade 12. For school year (SY) 2015–16, the survey included records for each public elementary and secondary school in the 50 states, the District of Columbia, the DoD dependents schools (overseas and domestic), the Bureau of Indian Education (BIE), Puerto Rico, American Samoa, the Northern Mariana Islands, Guam, and the U.S. Virgin Islands.

The Public Elementary/Secondary School Universe Survey includes data for the following variables: NCES school ID number, state school ID number, name of the school, name of the agency that operates the school, mailing address, physical location address, phone number, school type, operational status, locale code, latitude, longitude, county number, county name, full-time-equivalent (FTE) classroom teacher count, low/high grade span offered, congressional district code, school level, students eligible for free lunch, students eligible for reduced-price lunch, total students eligible for free and reduced-price lunch, and student totals and detail (by grade, by race/ethnicity, and by sex). The survey also contains flags indicating whether a school is Title I eligible, schoolwide Title I eligible, a magnet school, a charter school, a shared-time school, or a BIE school, as well as which grades are offered at the school.
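The variables above can be pictured as fields of a single school-level record. The sketch below is purely illustrative; the field names and values are hypothetical and do not reflect the official CCD file layout.

```python
# Hypothetical sketch of one CCD Public Elementary/Secondary School
# Universe Survey record. Field names and values are illustrative
# only, not the official CCD file layout.
school_record = {
    "nces_school_id": "010000500870",
    "state_school_id": "AL-101-0200",
    "school_name": "Example High School",
    "agency_name": "Example City Schools",   # agency operating the school
    "school_type": "Regular",
    "locale": "City: Midsize",
    "low_grade": "09",                       # low grade span offered
    "high_grade": "12",                      # high grade span offered
    "fte_teachers": 52.5,                    # FTE classroom teacher count
    "free_lunch": 310,                       # students eligible for free lunch
    "reduced_lunch": 45,                     # eligible for reduced-price lunch
    "charter": False,
    "magnet": False,
    "title_i_eligible": True,
}

# The total eligible for free and reduced-price lunch is the sum of
# the two component counts.
total_frl = school_record["free_lunch"] + school_record["reduced_lunch"]
```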

State Nonfiscal Survey of Public Elementary/Secondary Education

The State Nonfiscal Survey of Public Elementary/Secondary Education for the 2015–16 school year provides state-level, aggregate information about students and staff in public elementary and secondary education. It includes data from the 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. The DoD dependents schools (overseas and domestic) and the BIE are also included in the survey universe. This survey covers public school student membership by grade, race/ethnicity, and state or jurisdiction, as well as the number of staff in public schools by category and state or jurisdiction. Beginning with the 2006–07 school year, the numbers of diploma recipients and other high school completers are no longer included in the State Nonfiscal Survey of Public Elementary/Secondary Education File. These data are now published in the public-use CCD State Dropout and Completion Data File.

Further information on the nonfiscal CCD data may be obtained from

Patrick Keaton
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ccd

Further information on the fiscal CCD data may be obtained from

Stephen Cornman
Elementary and Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ccd

EDFacts

EDFacts is a centralized data collection through which state education agencies submit preK–12 education data to the U.S. Department of Education (ED). All data in EDFacts are organized into “data groups” and reported to ED using defined file specifications. Depending on the data group, state education agencies may submit aggregate counts for the state as a whole or detailed counts for individual schools or school districts. EDFacts does not collect student-level records. The entities that are required to report EDFacts data vary by data group but may include the 50 states, the District of Columbia, the Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. More information about EDFacts file specifications and data groups can be found at https://www.ed.gov/EDFacts.

EDFacts is a universe collection and is not subject to sampling error, but nonsampling errors such as nonresponse and inaccurate reporting may occur. The U.S. Department of Education attempts to minimize nonsampling errors by training data submission coordinators and reviewing the quality of state data submissions. However, anomalies may still be present in the data.

Differences in state data collection systems may limit the comparability of EDFacts data across states and across time. To build EDFacts files, state education agencies rely on data that were reported by their schools and school districts. The systems used to collect these data are evolving rapidly and differ from state to state.

In some cases, EDFacts data may not align with data reported on state education agency websites. States may update their websites on schedules different from those they use to report data to ED. Furthermore, ED may use methods for protecting the privacy of individuals represented within the data that could be different from the methods used by an individual state.

EDFacts data on homeless students enrolled in public schools are collected in data group 655 within file 118. EDFacts data on English language learners enrolled in public schools are collected in data group 678 within file 141. EDFacts four-year adjusted cohort graduation rate (ACGR) data are collected in data group 695 within file 150 and in data group 696 within file 151. EDFacts collects these data groups on behalf of the Office of Elementary and Secondary Education.

Further information on EDFacts may be obtained from

EDFacts
Elementary/Secondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://www2.ed.gov/about/inits/ed/edfacts/index.html

High School Longitudinal Study of 2009

The High School Longitudinal Study of 2009 (HSLS:09) is a nationally representative, longitudinal study of approximately 21,000 9th-grade students in 944 schools who are being followed through their secondary and postsecondary years. The study focuses on understanding students’ trajectories from the beginning of high school into postsecondary education, the workforce, and beyond. The HSLS:09 questionnaire is focused on, but not limited to, information on science, technology, engineering, and mathematics (STEM) education and careers. It is designed to provide data on mathematics and science education, the changing high school environment, and postsecondary education. This study features a new student assessment in algebra skills, reasoning, and problem solving and includes surveys of students, their parents, math and science teachers, and school administrators, as well as a new survey of school counselors.

The HSLS:09 base year took place in the 2009–10 school year, with a randomly selected sample of fall-term 9th-graders in more than 900 public and private high schools that had both a 9th and an 11th grade. Students took a mathematics assessment and survey online. Students’ parents, principals, mathematics and science teachers, and the school’s lead counselor completed surveys by phone or online.

The HSLS:09 student questionnaire includes interest and motivation items for measuring key factors predicting choice of postsecondary paths, including majors and eventual careers. This study explores the roles of different factors in the development of a student’s commitment to attend college and then take the steps necessary to succeed in college (the right courses, courses in specific sequences, etc.). Questionnaires in this study have asked questions of students and parents regarding reasons for selecting specific colleges (e.g., academic programs, financial aid and access prices, and campus environment).

The first follow-up of HSLS:09 occurred in the spring of 2012, when most sample members were in the 11th grade. Data files and documentation for the first follow-up were released in fall 2013 and are available on the NCES website.

A between-round postsecondary status update survey took place in the spring of students’ expected graduation year (2013). It asked respondents about college applications, acceptances, and rejections, as well as their actual college choices. In the fall of 2013 and the spring of 2014, high school transcripts were collected and coded.

A full second follow-up was conducted in 2016, when most sample members were 3 years beyond high school graduation. Additional follow-ups, extending to at least age 30, are planned.

Further information on HSLS:09 may be obtained from

Elise Christopher
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
Potomac Center Plaza (PCP)
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/surveys/hsls09

Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) surveys over 7,300 postsecondary institutions, including universities and colleges, as well as institutions offering technical and vocational education beyond the high school level. IPEDS, an annual universe collection that began in 1986, replaced the Higher Education General Information Survey (HEGIS).

IPEDS consists of interrelated survey components that provide information on postsecondary institutions, student enrollment, programs offered, degrees and certificates conferred, and both the human and financial resources involved in the provision of institutionally based postsecondary education. Prior to 2000, the IPEDS survey had the following subject-matter components: Graduation Rates; Fall Enrollment; Institutional Characteristics; Completions; Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; Fall Staff; Finance; and Academic Libraries (in 2000, the Academic Libraries component became a survey separate from IPEDS). Since 2000, IPEDS survey components occurring in a particular collection year have been organized into three seasonal collection periods: fall, winter, and spring. The Institutional Characteristics and Completions components first took place during the fall 2000 collection; the Employees by Assigned Position (EAP), Salaries, and Fall Staff components first took place during the winter 2001–02 collection; and the Enrollment, Student Financial Aid, Finance, and Graduation Rates components first took place during the spring 2001 collection. In the winter 2005–06 data collection, the EAP, Fall Staff, and Salaries components were merged into the Human Resources component. During the 2007–08 collection year, the Enrollment component was broken into two separate components: 12-Month Enrollment (taking place in the fall collection) and Fall Enrollment (taking place in the spring collection). In the 2011–12 IPEDS data collection year, the Student Financial Aid component was moved to the winter data collection to aid in the timing of the net price of attendance calculations displayed on the College Navigator (https://nces.ed.gov/collegenavigator). 
In the 2012–13 IPEDS data collection year, the Human Resources component was moved from the winter data collection to the spring data collection, and in the 2013–14 data collection year, the Graduation Rates and Graduation Rates 200 Percent components were moved from the spring data collection to the winter data collection. In the 2014–15 data collection year, a new component (Admissions) was added to IPEDS and a former IPEDS component (Academic Libraries) was reintegrated into IPEDS. The Admissions component, created out of admissions data contained in the fall collection’s Institutional Characteristics component, was made a part of the winter collection. The Academic Libraries component, after having been conducted as a survey independent of IPEDS between 2000 and 2012, was reintegrated into IPEDS as part of the spring collection.

Beginning in 2008–09, the first-professional degree category was combined with the doctor’s degree category. However, some degrees formerly identified as first-professional that take more than 2 full-time-equivalent academic years to complete, such as those in Theology (M.Div, M.H.L./Rav), are included in the master’s degree category. Doctor’s degrees were broken out into three distinct categories: research/scholarship, professional practice, and other doctor’s degrees.

IPEDS race/ethnicity data collection also changed in 2008–09. The “Asian” race category is now separate from a “Native Hawaiian or Other Pacific Islander” category, and a new category of “Two or more races” has been added.

The degree-granting institutions portion of IPEDS is a census of colleges that award associate’s or higher degrees and are eligible to participate in Title IV financial aid programs. Prior to 1993, data from technical and vocational institutions were collected through a sample survey. Beginning in 1993, all data have been gathered in a census of all postsecondary institutions. Beginning in 1997, the survey was restricted to institutions participating in Title IV programs.

The classification of institutions offering college and university education changed as of 1996. Prior to 1996, institutions that had courses leading to an associate’s or higher degree or that had courses accepted for credit toward those degrees were considered higher education institutions. Higher education institutions were accredited by an agency or association that was recognized by the U.S. Department of Education or were recognized directly by the Secretary of Education. The newer standard includes institutions that award associate’s or higher degrees and that are eligible to participate in Title IV federal financial aid programs. Tables that contain any data according to this standard are titled “degree-granting” institutions. Time-series tables may contain data from both series, and they are noted accordingly. The impact of this change on data collected in 1996 was not large. For example, tables on faculty salaries and benefits were only affected to a very small extent. Also, degrees awarded at the bachelor’s level or higher were not heavily affected. The largest impact was on private 2-year college enrollment. In contrast, most of the data on public 4-year colleges were affected to a minimal extent. The impact on enrollment in public 2-year colleges was noticeable in certain states, such as Arizona, Arkansas, Georgia, Louisiana, and Washington, but was relatively small at the national level. Overall, total enrollment for all institutions was about one-half of 1 percent higher in 1996 for degree-granting institutions than for higher education institutions.

Prior to the establishment of IPEDS in 1986, HEGIS acquired and maintained statistical data on the characteristics and operations of higher education institutions. Implemented in 1966, HEGIS was an annual universe survey of institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. These institutions were listed in NCES’s Education Directory, Colleges and Universities.

HEGIS surveys collected information on institutional characteristics, faculty salaries, finances, enrollment, and degrees. Since these surveys, like IPEDS, were distributed to all higher education institutions, the data presented are not subject to sampling error. However, they are subject to nonsampling error, the sources of which varied with the survey instrument.

The NCES Taskforce for IPEDS Redesign recognized that there were issues related to the consistency of data definitions as well as the accuracy, reliability, and validity of other quality measures within and across surveys. The IPEDS redesign in 2000 provided institution-specific web-based data forms. While the new system shortened data processing time and provided better data consistency, it did not address the accuracy of the data provided by institutions.

Beginning in 2003–04 with the Prior Year Data Revision System, prior-year data have been available to institutions entering current data. This allows institutions to make changes to their prior-year entries either by adjusting the data or by providing missing data. These revisions allow the evaluation of the data’s accuracy by looking at the changes made.

NCES conducted a study (NCES 2005-175) of the 2002–03 data that were revised in 2003–04 to determine the accuracy of the imputations, track the institutions that submitted revised data, and analyze the revised data they submitted. When institutions made changes to their data, it was assumed that the revised data were the “true” data. The data were analyzed for the number and type of institutions making changes, the type of changes, the magnitude of the changes, and the impact on published data.

Because NCES imputes for missing data, imputation procedures were also addressed by the Redesign Taskforce. For the 2003–04 assessment, differences between revised values and values that were imputed in the original files were compared (i.e., revised value minus imputed value). These differences were then used to provide an assessment of the effectiveness of imputation procedures. The size of the differences also provides an indication of the accuracy of imputation procedures. To assess the overall impact of changes on aggregate IPEDS estimates, published tables for each component were reconstructed using the revised 2002–03 data. These reconstructed tables were then compared to the published tables to determine the magnitude of aggregate bias and the direction of this bias.
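The assessment described above (revised value minus imputed value) can be sketched as follows. The institution identifiers and values are invented for illustration; small differences suggest the imputation procedure was accurate.

```python
# Illustrative sketch of the imputation assessment described above:
# for items that were imputed in the original file and later revised
# by the institution, compute (revised value - imputed value).
# Institution names and values here are invented for illustration.
imputed = {"inst_a": 1200, "inst_b": 430, "inst_c": 975}
revised = {"inst_a": 1180, "inst_b": 430, "inst_c": 1000}

# Difference for each institution: revised minus imputed.
diffs = {k: revised[k] - imputed[k] for k in imputed}

# The size and direction of the mean difference give a rough
# indication of the accuracy and bias of the imputation procedure.
mean_diff = sum(diffs.values()) / len(diffs)
```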

Since the 2000–01 data collection year, IPEDS data collections have been web-based. Data have been provided by “keyholders,” institutional representatives appointed by campus chief executives, who are responsible for ensuring that survey data submitted by the institution are correct and complete. Because Title IV institutions are the primary focus of IPEDS and because these institutions are required to respond to IPEDS, response rates for Title IV institutions have been high (data on specific components are cited below). More details on the accuracy and reliability of IPEDS data can be found in the Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175).

Further information on IPEDS may be obtained from

Sam Barbett
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ipeds

Fall (Completions)

This survey was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, 1991–92, 2002–03, and 2009–10. Collection of degree data has been maintained through IPEDS.

The nonresponse rate does not appear to be a significant source of nonsampling error for this survey. The response rate over the years has been high; for the fall 2016 Completions component, it rounded to 100 percent. Because of the high response rate, there was no need to conduct a nonresponse bias analysis. Imputation methods for the fall 2016 IPEDS Completions component are discussed in the 2016–17 Integrated Postsecondary Education Data System (IPEDS) Methodology Report (NCES 2017-078).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) indicated that most Title IV institutions supplying revised data on completions in 2003–04 were able to supply missing data for the prior year. The small differences between imputed data for the prior year and the revised actual data supplied by the institution indicated that the imputed values produced by NCES were acceptable.

Further information on the IPEDS Completions component may be obtained from

Christopher Cody
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ipeds

Fall (Institutional Characteristics)

This survey collects the basic information necessary to classify institutions, including control, level, and types of programs offered, as well as information on tuition, fees, and room and board charges. Beginning in 2000, the survey collected institutional pricing data from institutions with first-time, full-time, degree/certificate-seeking undergraduate students. Unduplicated full-year enrollment counts and instructional activity are now collected in the 12-Month Enrollment survey. Beginning in 2008–09, the student financial aid data collected include greater detail. The overall unweighted response rate was 100.0 percent for Title IV degree-granting institutions for 2009 data.

In the fall 2016 data collection, the response rate for Title IV entities on the Institutional Characteristics component rounded to 100 percent: Of the 6,834 Title IV entities that were expected to respond, only 1 response was missing.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) looked at tuition and price in Title IV institutions. In 2002–03 and 2003–04, only 8 percent of institutions reported the same data, consistently across all selected data items, to both IPEDS and Thomson Peterson (a company that provides information about institutions based on the institutions’ voluntary data submissions). Differences in the wording of survey items may account for some of these inconsistencies.

Further information on the IPEDS Institutional Characteristics component may be obtained from

Moussa Ezzeddine
Christopher Cody
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
[email protected]
https://nces.ed.gov/ipeds

Winter (Graduation Rates and Graduation Rates 200 Percent)

In IPEDS data collection years 2012–13 and earlier, the Graduation Rates and Graduation Rates 200 Percent components were collected during the spring collection. In the IPEDS 2013–14 data collection year, however, the Graduation Rates and Graduation Rates 200 Percent collections were moved to the winter data collection.

The 2016–17 Graduation Rates component collected counts of full-time, first-time degree/certificate-seeking undergraduate students beginning their postsecondary education in the specified cohort year and their completion status as of 150 percent of normal program completion time at the same institution where the students started. If 150 percent of normal program completion time extended beyond August 31, 2016, the counts as of that date were collected. Four-year institutions used 2010 as the cohort year, while less-than-4-year institutions used 2013 as the cohort year. Of the 5,995 institutions that were expected to respond to the Graduation Rates component, responses were missing for 11 institutions, resulting in a response rate that rounded to 100 percent.

The 2016–17 Graduation Rates 200 Percent component was designed to combine information reported in a prior collection via the Graduation Rates component with current information about the same cohort of students. From previously collected data, the following counts were obtained: the number of students entering the institution as full-time, first-time degree/certificate-seeking students in a cohort year; the number of students in this cohort completing within 100 and 150 percent of normal program completion time; and the number of cohort exclusions (such as students who left for military service). Then the number of additional cohort exclusions and additional program completers between 151 and 200 percent of normal program completion time was collected. Four-year institutions reported on bachelor’s or equivalent degree-seeking students and used cohort year 2008 as the reference period, while less-than-4-year institutions reported on all students in the cohort and used cohort year 2012 as the reference period. Of the 5,594 institutions that were expected to respond to the Graduation Rates 200 Percent component, responses were missing for 10 institutions, resulting in a response rate that rounded to 100 percent.
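Under the definitions above, a graduation rate at 150 or 200 percent of normal program completion time is the number of completers divided by the adjusted cohort (the entering cohort minus allowable exclusions). A minimal sketch with invented counts, not actual IPEDS figures:

```python
def grad_rate(completers, cohort, exclusions):
    """Completers as a share of the adjusted cohort (entering cohort
    minus allowable exclusions, such as students who left for
    military service)."""
    return completers / (cohort - exclusions)

# Invented counts for a hypothetical 4-year institution.
cohort = 1000          # full-time, first-time degree-seeking entrants
exclusions_150 = 20    # exclusions reported through 150% of normal time
completers_150 = 539   # completed within 150% of normal time

rate_150 = grad_rate(completers_150, cohort, exclusions_150)

# The 200 percent component adds cohort exclusions and completers
# occurring between 151 and 200 percent of normal time.
extra_exclusions = 5
extra_completers = 39
rate_200 = grad_rate(completers_150 + extra_completers,
                     cohort, exclusions_150 + extra_exclusions)
```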

Further information on the IPEDS Graduation Rates and Graduation Rates 200 Percent components may be obtained from

Andrew Mary
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ipeds/

Spring (Fall Enrollment)

This survey has been part of the HEGIS and IPEDS series since 1966. Response rates have been relatively high, generally exceeding 85 percent. Beginning in 2000, with web-based data collection, higher response rates were attained. In the spring 2017 data collection, the Fall Enrollment component covered fall 2016. Of the 6,742 institutions that were expected to respond, 6,734 provided data, for a response rate that rounded to 100 percent. Data collection procedures for the Fall Enrollment component of the spring 2017 data collection are presented in Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016: First Look (Provisional Data) (NCES 2018-002).
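The rounding in these response rates can be verified directly: 6,734 responses out of 6,742 expected gives a rate just under 100 percent, which rounds to 100.

```python
# Unweighted response rate for the Fall Enrollment component of the
# spring 2017 collection: responding institutions divided by the
# number expected to respond (figures from the text above).
expected = 6742
responded = 6734

rate = 100 * responded / expected   # about 99.88 percent
rounded = round(rate)               # rounds to 100
```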

Beginning with the fall 1986 survey and the introduction of IPEDS (see above), the survey was redesigned. The survey allows (in alternating years) for the collection of age and residence data. Beginning in 2000, the survey collected instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year. As of 2007–08, the timeliness of the instructional activity data has been improved by collecting these data in the fall as part of the 12-Month Enrollment component instead of in the spring as part of the Fall Enrollment component.
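A standardized FTE enrollment statistic of the kind described above can be derived from annual instructional activity. The sketch below is an assumption-laden illustration: the full-time annual loads used (30 undergraduate and 24 graduate credit hours) are stand-ins, not the official IPEDS derivation, which depends on institution level and calendar system.

```python
# Illustrative FTE calculation from annual instructional activity.
# The divisors (30 undergraduate and 24 graduate credit hours per
# year as a full-time load) are assumptions for illustration, not
# the official IPEDS formula.
ug_credit_hours = 150_000   # undergraduate credit hours reported
gr_credit_hours = 24_000    # graduate credit hours reported

fte = ug_credit_hours / 30 + gr_credit_hours / 24
```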

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) showed that public institutions made the majority of changes to enrollment data during the 2004 revision period. The majority of changes were made to unduplicated headcount data, with the net differences between the original data and the revised data being about 1 percent. Part-time students in general and enrollment in private not-for-profit institutions were often underestimated. The fewest changes by institutions were to Classification of Instructional Programs (CIP) code data. (The CIP is a taxonomic coding scheme that contains titles and descriptions of primarily postsecondary instructional programs.)

Further information on the IPEDS Fall Enrollment component may be obtained from

Aida Aliyeva
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ipeds

Spring (Finance)

This survey was part of the HEGIS series and has been continued under IPEDS. Substantial changes were made in the financial survey instruments in fiscal year (FY) 1976, FY 1982, FY 1987, FY 1997, and FY 2002. While these changes were significant, a considerable effort has been made to present only comparable information on trends and to note inconsistencies. The FY 1976 survey instrument contained numerous revisions to earlier survey forms, which made direct comparisons of line items very difficult. Beginning in FY 1982, Pell Grant data were collected in the categories of federal restricted grant and contract revenues and restricted scholarship and fellowship expenditures. The introduction of IPEDS in the FY 1987 survey included several important changes to the survey instrument and data processing procedures. Beginning in FY 1997, data for private institutions were collected using new financial concepts consistent with Financial Accounting Standards Board (FASB) reporting standards, which provide a more comprehensive view of college finance activities. The data for public institutions continued to be collected using the older survey form. The data for public and private institutions were no longer comparable and, as a result, no longer presented together in analysis tables. In FY 2001, public institutions had the option of either continuing to report using Governmental Accounting Standards Board (GASB) standards or using the new FASB reporting standards. Beginning in FY 2002, public institutions could use either the original GASB standards, the FASB standards, or the new GASB Statement 35 standards (GASB35).

Possible sources of nonsampling error in the financial statistics include nonresponse, imputation, and misclassification. The unweighted response rate has been about 85 to 90 percent for most years in which these data have appeared in NCES reports; in more recent years, however, response rates have been much higher because Title IV institutions are required to respond. Since 2002, the IPEDS data collection has been a full-scale web-based collection, which has improved the quality and timeliness of the data. For example, IPEDS can tailor online data entry forms to each institution based on characteristics such as institutional control, level of institution, and calendar system, and institutions can submit their data online; both features have improved response.

The response rate for the FY 2016 Finance component was nearly 100 percent: Of the 6,825 institutions and administrative offices that were expected to respond, 6,816 provided data. Data collection procedures for the FY 2016 component are discussed in Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016: First Look (Provisional Data) (NCES 2018-002).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) found that only a small percentage (2.9 percent, or 168) of postsecondary institutions either revised 2002–03 data or submitted data for items they previously left unreported. Though relatively few institutions made changes, the changes made were relatively large—greater than 10 percent of the original data. With a few exceptions, these changes, large as they were, did not greatly affect the aggregate totals.

Further information on the IPEDS Finance component may be obtained from

Aida Aliyeva
Postsecondary Branch
Administrative Data Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/ipeds

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in that achievement. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. The assessment is based on frameworks developed by the National Assessment Governing Board (NAGB). It includes both multiple-choice items and constructed-response items (those requiring written answers). Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.

Main NAEP Assessments

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample. (School district-level assessments—under the Trial Urban District Assessment [TUDA] program—also began in 2002.)

Results are available for the mathematics assessments administered in 2000, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

Since the 2005 changes to the mathematics framework were minimal for grades 4 and 8, comparisons over time can be made between assessments conducted before and after the framework’s implementation for these grades. The changes that the 2005 framework made to the grade 12 assessment, however, were too drastic to allow grade 12 results from before and after implementation to be directly compared. These changes included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework; merging the measurement and geometry content areas; and changing the reporting scale from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see https://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Results are available for the reading assessments administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, 2013, 2015, and 2017. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend, or bridge, study were conducted to determine if the new reading assessment was comparable to the prior assessment. Overall, the results of the special analyses suggested that the assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see https://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

In spring 2013, NAEP released results from the NAEP 2012 economics assessment in The Nation’s Report Card: Economics 2012 (NCES 2013-453). First administered in 2006, the NAEP economics assessment measures 12th-graders’ understanding of a wide range of topics in three main content areas: market economy, national economy, and international economy. The 2012 assessment is based on a nationally representative sample of nearly 11,000 students in the 12th grade.

In The Nation’s Report Card: A First Look—2013 Mathematics and Reading (NCES 2014-451), NAEP released the results of the 2013 mathematics and reading assessments. Results can also be accessed using the interactive graphics and downloadable data available at the online Nation’s Report Card website (http://nationsreportcard.gov/reading_math_2013/#/).

The Nation’s Report Card: A First Look—2013 Mathematics and Reading Trial Urban District Assessment (NCES 2014-466) provides the results of the 2013 mathematics and reading TUDA, which measured the reading and mathematics progress of 4th- and 8th-graders from 21 urban school districts. Results from the 2013 mathematics and reading TUDA can also be accessed using the interactive graphics and downloadable data available at the online TUDA website (http://nationsreportcard.gov/reading_math_tuda_2013/#/).

The online interactive report The Nation’s Report Card: 2014 U.S. History, Geography, and Civics at Grade 8 (NCES 2015-112) provides grade 8 results for the 2014 NAEP U.S. history, geography, and civics assessments. Trend results for previous assessment years in these three subjects, as well as information on school and student participation rates and sample tasks and student responses, are also presented.

In 2014, the first administration of the NAEP Technology and Engineering Literacy (TEL) Assessment asked 8th-graders to respond to questions aimed at assessing their knowledge and skill in understanding technological principles, solving technology and engineering-related problems, and using technology to communicate and collaborate. The online report The Nation’s Report Card: Technology and Engineering Literacy (NCES 2016-119) presents national results for 8th-graders on the TEL assessment.

The Nation’s Report Card: 2015 Mathematics and Reading Assessments (NCES 2015-136) is an online interactive report that presents national and state results for 4th- and 8th-graders on the NAEP 2015 mathematics and reading assessments. The report also presents TUDA results in mathematics and reading for 4th- and 8th-graders. The online interactive report The Nation’s Report Card: 2015 Mathematics and Reading at Grade 12 (NCES 2016-018) presents grade 12 results from the NAEP 2015 mathematics and reading assessments.

Results from the 2015 NAEP science assessment are presented in the online report The Nation’s Report Card: 2015 Science at Grades 4, 8, and 12 (NCES 2016-162). The assessment measures the knowledge of 4th-, 8th-, and 12th-graders in the content areas of physical science, life science, and Earth and space sciences, as well as their understanding of four science practices (identifying science principles, using science principles, using scientific inquiry, and using technological design). National results are reported for grades 4, 8, and 12, and results from 46 participating states and one jurisdiction are reported for grades 4 and 8. Since a new NAEP science framework was introduced in 2009, results from the 2015 science assessment can be compared to results from the 2009 and 2011 science assessments, but cannot be compared to the science assessments conducted prior to 2009.

NAEP is in the process of transitioning from paper-based assessments to technology-based assessments; consequently, data are needed regarding students’ access to and familiarity with technology, at home and at school. The Computer Access and Familiarity Study (CAFS) is designed to fulfill this need. CAFS was conducted as part of the main administration of the 2015 NAEP. A subset of the grade 4, 8, and 12 students who took the main NAEP were chosen to take the additional CAFS questionnaire. The main 2015 NAEP was administered in a paper-and-pencil format to some students and a digital-based format to others, and CAFS participants were given questionnaires in the same format as their NAEP questionnaires.

The online Highlights report 2017 NAEP Mathematics and Reading Assessments: Highlighted Results at Grades 4 and 8 for the Nation, States, and Districts (NCES 2018-037) presents an overview of results from the NAEP 2017 mathematics and reading reports. Highlighted results include key findings for the nation, states/jurisdictions, and 27 districts that participated in the Trial Urban District Assessment (TUDA) in mathematics and reading at grades 4 and 8.

Further information on NAEP may be obtained from

Daniel McGrath
Reporting and Dissemination Branch
Assessments Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/nationsreportcard

National Household Education Surveys Program

The National Household Education Surveys Program (NHES) is a data collection system that is designed to address a wide range of education-related issues. Surveys have been conducted in 1991, 1993, 1995, 1996, 1999, 2001, 2003, 2005, 2007, 2012, and 2016. NHES targets specific populations for detailed data collection. It is intended to provide more detailed data on the topics and populations of interest than are collected through supplements to other household surveys.

The topics addressed by NHES:1991 were early childhood education and adult education. About 60,000 households were screened for NHES:1991. In the Early Childhood Education Survey, about 14,000 parents/guardians of 3- to 8-year-olds completed interviews about their children’s early educational experiences. Included in this component were participation in nonparental care/education; care arrangements and school; and family, household, and child characteristics. In the NHES:1991 Adult Education Survey, about 9,800 people 16 years of age and over, identified as having participated in an adult education activity in the previous 12 months, were questioned about their activities. Data were collected on programs and up to four courses, including the subject matter, duration, sponsorship, purpose, and cost. Information on the household and the adult’s background and current employment was also collected.

In NHES:1993, nearly 64,000 households were screened. Approximately 11,000 parents of 3- to 7-year-olds completed interviews for the School Readiness Survey. Topics included the developmental characteristics of preschoolers; school adjustment and teacher feedback to parents for kindergartners and primary students; center-based program participation; early school experiences; home activities with family members; and health status.

In the School Safety and Discipline Survey, about 12,700 parents of children in grades 3 to 12 and about 6,500 youth in grades 6 to 12 were interviewed about their school experiences. Topics included the school learning environment, discipline policy, safety at school, victimization, the availability and use of alcohol/drugs, and alcohol/drug education. Peer norms for behavior in school and substance use were also included in this topical component. Extensive family and household background information was collected, as well as characteristics of the school attended by the child.

In NHES:1995, the Early Childhood Program Participation Survey and the Adult Education Survey were similar to those fielded in 1991. In the Early Childhood component, about 14,000 parents of children from birth to 3rd grade were interviewed out of 16,000 sampled, for a completion rate of 90.4 percent. In the Adult Education Survey, about 24,000 adults were sampled and 82.3 percent (20,000) completed the interview.

NHES:1996 covered parent and family involvement in education and civic involvement. Data on homeschooling and school choice also were collected. The 1996 survey screened about 56,000 households. For the Parent and Family Involvement in Education Survey, nearly 21,000 parents of children in grades 3 to 12 were interviewed. For the Civic Involvement Survey, about 8,000 youth in grades 6 to 12, about 9,000 parents, and about 2,000 adults were interviewed. The 1996 survey also addressed public library use. Adults in almost 55,000 households were interviewed to support state-level estimates of household public library use.

NHES:1999 collected end-of-decade estimates of key indicators from the surveys conducted throughout the 1990s. Approximately 60,000 households were screened for a total of about 31,000 interviews with parents of children from birth through grade 12 (including about 6,900 infants, toddlers, and preschoolers) and adults age 16 or older not enrolled in grade 12 or below. Key indicators included participation of children in nonparental care and early childhood programs, school experiences, parent/family involvement in education at home and at school, youth community service activities, plans for future education, and adult participation in educational activities and community service.

NHES:2001 included two surveys that were largely repeats of similar surveys included in earlier NHES collections. The Early Childhood Program Participation Survey was similar in content to the Early Childhood Program Participation Survey fielded as part of NHES:1995, and the Adult Education and Lifelong Learning Survey was similar in content to the Adult Education Survey of NHES:1995. The Before- and After-School Programs and Activities Survey, while containing items fielded in earlier NHES collections, had a number of new items that collected information about what school-age children were doing during the time they spent in child care or in other activities, what parents were looking for in care arrangements and activities, and parent evaluations of care arrangements and activities. Parents of approximately 6,700 children from birth through age 6 who were not yet in kindergarten completed Early Childhood Program Participation Survey interviews. Nearly 10,900 adults completed Adult Education and Lifelong Learning Survey interviews, and parents of nearly 9,600 children in kindergarten through grade 8 completed Before- and After-School Programs and Activities Survey interviews.

NHES:2003 included two surveys: the Parent and Family Involvement in Education Survey and the Adult Education for Work-Related Reasons Survey (the first administration). Whereas previous adult education surveys were more general in scope, this survey had a narrower focus on occupation-related adult education programs. It collected in-depth information about training and education in which adults participated specifically for work-related reasons, either to prepare for work or a career or to maintain or improve work-related skills and knowledge they already had. The Parent and Family Involvement Survey expanded on the first survey fielded on this topic in 1996. In 2003, screeners were completed with 32,050 households. About 12,700 of the 16,000 sampled adults completed the Adult Education for Work-Related Reasons Survey, for a weighted response rate of 76 percent. For the Parent and Family Involvement in Education Survey, interviews were completed by the parents of about 12,400 of the 14,900 sampled children in kindergarten through grade 12, yielding a weighted unit response rate of 83 percent.

NHES:2005 included surveys that covered adult education, early childhood program participation, and after-school programs and activities. Data were collected from about 8,900 adults for the Adult Education Survey, from parents of about 7,200 children for the Early Childhood Program Participation Survey, and from parents of nearly 11,700 children for the After-School Programs and Activities Survey. These surveys were substantially similar to the surveys conducted in 2001, with the exceptions that the Adult Education Survey addressed a new topic—informal learning activities for personal interest—and the Early Childhood Program Participation Survey and After-School Programs and Activities Survey did not collect information about before-school care for school-age children.

NHES:2007 fielded the Parent and Family Involvement in Education Survey and the School Readiness Survey. These surveys were similar in design and content to surveys included in the 2003 and 1993 collections, respectively. New features added to the Parent and Family Involvement Survey were questions about supplemental education services provided by schools and school districts (including use of and satisfaction with such services), as well as questions that would efficiently identify the school attended by the sampled students. New features added to the School Readiness Survey were questions that collected details about TV programs watched by the sampled children. For the Parent and Family Involvement Survey, interviews were completed with parents of 10,680 sampled children in kindergarten through grade 12, including 10,370 students enrolled in public or private schools and 310 homeschooled children. For the School Readiness Survey, interviews were completed with parents of 2,630 sampled children ages 3 to 6 and not yet in kindergarten. Parents who were interviewed about children in kindergarten through 2nd grade for the Parent and Family Involvement Survey were also asked some questions about these children’s school readiness.

The 2007 and earlier administrations of NHES used a random-digit-dial sample of landline phones and computer-assisted telephone interviewing to conduct interviews. However, due to declining response rates for all telephone surveys and the increase in households that only or mostly use a cell phone instead of a landline, the data collection method was changed to an address-based sample survey for NHES:2012. Because of this change in survey mode, readers should use caution when comparing NHES:2012 estimates to those of prior NHES administrations.

NHES:2012 included the Parent and Family Involvement in Education Survey and the Early Childhood Program Participation Survey. The Parent and Family Involvement in Education Survey gathered data on students age 20 or younger who were enrolled in kindergarten through grade 12 or who were homeschooled at equivalent grade levels. Survey questions that pertained to students enrolled in kindergarten through grade 12 requested information on various aspects of parent involvement in education (such as help with homework, family activities, and parent involvement at school) and survey questions pertaining to homeschooled students requested information on the student’s homeschooling experiences, the sources of the curriculum, and the reasons for homeschooling.

The 2012 Parent and Family Involvement in Education Survey questionnaires were completed for 17,563 (397 homeschooled and 17,166 enrolled) children, for a weighted unit response rate of 78.4 percent. The overall estimated unit response rate (the product of the screener unit response rate of 73.8 percent and the Parent and Family Involvement in Education Survey unit response rate) was 57.8 percent.

The 2012 Early Childhood Program Participation Survey collected data on the early care and education arrangements and early learning of children from birth through the age of 5 who were not yet enrolled in kindergarten. Questionnaires were completed for 7,893 children, for a weighted unit response rate of 78.7 percent. The overall estimated weighted unit response rate (the product of the screener weighted unit response rate of 73.8 percent and the Early Childhood Program Participation Survey unit weighted response rate) was 58.1 percent.
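The two-stage arithmetic above (overall rate = screener rate × topical survey rate) can be sketched as follows. The function and variable names are illustrative, not from NCES documentation, and NCES computes the published figures from unrounded rates, so recomputing from the rounded percentages can differ by about 0.1 percentage point.

```python
# Sketch of the two-stage overall unit response rate used above.
# Names are illustrative; NCES computes published figures from unrounded
# rates, so products of the rounded inputs may differ slightly.

def overall_response_rate(screener_pct, topical_pct):
    """Overall rate = product of the stage-level rates, in percent."""
    return screener_pct * topical_pct / 100.0

# NHES:2012 figures from the text (already rounded to one decimal)
pfi = overall_response_rate(73.8, 78.4)   # published as 57.8 percent
ecpp = overall_response_rate(73.8, 78.7)  # published as 58.1 percent

print(f"PFI overall:  {pfi:.1f}%")
print(f"ECPP overall: {ecpp:.1f}%")
```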

NHES:2016 used a nationally representative address-based sample covering the 50 states and the District of Columbia. The 2016 administration of NHES included a screener survey and three topical surveys: the Parent and Family Involvement in Education Survey, the Early Childhood Program Participation Survey, and the Adult Training and Education Survey. The screener survey questionnaire identified households with children or youth under age 20 and adults ages 16 to 65. A total of 206,000 households were sampled for the screener, and the screener response rate was 66.4 percent. All sampled households received initial contact by mail. Although the majority of respondents completed paper questionnaires, a small sample of cases was part of a web experiment with mailed invitations to complete the survey online.

The 2016 Parent and Family Involvement in Education Survey, like its predecessor in 2012, gathered data about students age 20 or under who were enrolled in kindergarten through grade 12 or who were being homeschooled for the equivalent grades. The 2016 survey’s questions also covered aspects of parental involvement in education similar to those in the 2012 survey. The total number of completed questionnaires in the 2016 survey was 14,075 (13,523 enrolled and 552 homeschooled children), representing a population of 53.2 million students either homeschooled or enrolled in a public or private school in 2015–16. The survey’s weighted unit response rate was 74.3 percent, and the overall response rate was 49.3 percent.

The 2016 Early Childhood Program Participation Survey collected data about children from birth through age 6 who were not yet enrolled in kindergarten. The survey asked about children’s participation in relative care, nonrelative care, and center-based care arrangements. It also requested information such as the main reason for choosing care, factors that were important to parents when choosing a care arrangement, the primary barriers to finding satisfactory care, activities the family does with the child, and what the child is learning. Questionnaires were completed for 5,844 children, for a weighted unit response rate of 73.4 percent and an overall estimated weighted unit response rate of 48.7 percent.

The third topical survey of NHES:2016 was a new NHES survey, the Adult Training and Education Survey. The survey collected information from noninstitutionalized adults ages 16 to 65 who were not enrolled in high school, including adults living at residential addresses associated with educational institutions such as colleges (thus, enrolled college students were included). One of the main goals of the Adult Training and Education Survey is to capture the prevalence of nondegree credentials, including estimates of adults with occupational certifications or licenses, as well as the prevalence of postsecondary educational certificates. A further goal is to learn more about work experience programs. The survey’s data, when weighted, were nationally representative of noninstitutionalized adults ages 16 to 65 who were not enrolled in grade 12 or below. The total number of completed questionnaires was 47,744, representing a population of 196.3 million adults. The survey had a weighted response rate of 73.1 percent and an overall response rate of 48.5 percent.

Data for the three topical surveys in the 2016 administration of NHES are available in Parent and Family Involvement in Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-102); Early Childhood Program Participation, Results From the National Household Education Surveys Program of 2016 (NCES 2017-101); and Adult Training and Education: Results From the National Household Education Surveys Program of 2016 (NCES 2017-103rev).

Further information on NHES may be obtained from

Sarah Grady
Andrew Zukerberg
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
[email protected]
https://nces.ed.gov/nhes

National Postsecondary Student Aid Study

The National Postsecondary Student Aid Study (NPSAS) is a comprehensive nationwide study of how students and their families pay for postsecondary education. Data gathered from the study are used to help guide future federal student financial aid policy. The study covers nationally representative samples of undergraduate, graduate, and first-professional students in the 50 states, the District of Columbia, and Puerto Rico, including students attending less-than-2-year institutions, community colleges, and 4-year colleges and universities. Participants include students who do not receive aid and those who do receive financial aid. Since NPSAS identifies nationally representative samples of student subpopulations of interest to policymakers and obtains baseline data for longitudinal study of these subpopulations, data from the study provide the base-year sample for the Beginning Postsecondary Students (BPS) longitudinal study and the Baccalaureate and Beyond (B&B) longitudinal study.

Originally, NPSAS was conducted every 3 years. Beginning with the 1999–2000 study (NPSAS:2000), NPSAS has been conducted every 4 years. NPSAS:08 included a new set of instrument items to obtain baseline measures of the awareness of two new federal grants introduced in 2006: the Academic Competitiveness Grant (ACG) and the National Science and Mathematics Access to Retain Talent (SMART) grant.

The first NPSAS (NPSAS:87) was conducted during the 1986–87 school year. Data were gathered from about 1,100 colleges, universities, and other postsecondary institutions; 60,000 students; and 14,000 parents. These data provided information on the cost of postsecondary education, the distribution of financial aid, and the characteristics of both aided and nonaided students and their families.

For NPSAS:93, information on 77,000 undergraduates and graduate students enrolled during the school year was collected at 1,000 postsecondary institutions. The sample included students who were enrolled at any time between July 1, 1992, and June 30, 1993. About 66,000 students and a subsample of their parents were interviewed by telephone. NPSAS:96 contained information on more than 48,000 undergraduate and graduate students from about 1,000 postsecondary institutions who were enrolled at any time during the 1995–96 school year. NPSAS:2000 included nearly 62,000 students (50,000 undergraduates and almost 12,000 graduate students) from 1,000 postsecondary institutions. NPSAS:04 collected data on about 80,000 undergraduates and 11,000 graduate students from 1,400 postsecondary institutions. For NPSAS:08, about 114,000 undergraduate students and 14,000 graduate students who were enrolled in postsecondary education during the 2007–08 school year were selected from more than 1,730 postsecondary institutions.

NPSAS:12 sampled about 95,000 undergraduates and 16,000 graduate students from approximately 1,500 postsecondary institutions. Public access to the data is available online through PowerStats (https://nces.ed.gov/datalab/).

NPSAS:16 sampled about 89,000 undergraduate and 24,000 graduate students attending approximately 1,800 Title IV eligible postsecondary institutions in the 50 states, the District of Columbia, and Puerto Rico. The sample represents approximately 20 million undergraduate and 4 million graduate students enrolled in postsecondary education at Title IV eligible institutions at any time between July 1, 2015, and June 30, 2016.

Further information on NPSAS may be obtained from

Aurora D’Amico
Tracy Hunt-White
Longitudinal Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
[email protected]
https://nces.ed.gov/npsas

National Teacher and Principal Survey

The National Teacher and Principal Survey (NTPS) is a set of related questionnaires that collect descriptive data on the context of elementary and secondary education. Data reported by schools, principals, and teachers provide a variety of statistics on the condition of education in the United States that may be used by policymakers and the general public. The NTPS system covers a wide range of topics, including teacher demand, teacher and principal characteristics, teachers’ and principals’ perceptions of school climate and problems in their schools, teacher and principal compensation, district hiring and retention practices, general conditions in schools, and basic characteristics of the student population.

The NTPS was first conducted during the 2015–16 school year. The survey is a redesign of the Schools and Staffing Survey (SASS), which was conducted from the 1987–88 school year to the 2011–12 school year. Although the NTPS maintains SASS’s focus on schools, teachers, and administrators, the NTPS has a different structure and sample than SASS. In addition, whereas SASS operated on a 4-year survey cycle, the NTPS operates on a 2-year survey cycle.

The school sample for the 2015–16 NTPS was based on an adjusted public school universe file from the 2013–14 Common Core of Data (CCD), a database of all the nation’s public school districts and public schools. The NTPS definition of a school is the same as the SASS definition of a school—an institution or part of an institution that provides classroom instruction to students, has one or more teachers to provide instruction, serves students in one or more of grades 1–12 or the ungraded equivalent, and is located in one or more buildings apart from a private home.

The 2015–16 NTPS universe of schools is confined to the 50 states plus the District of Columbia. It excludes the Department of Defense dependents schools overseas, schools in U.S. territories overseas, and CCD schools that do not offer teacher-provided classroom instruction in grades 1–12 or the ungraded equivalent. Bureau of Indian Education schools are included in the NTPS universe, but these schools were not oversampled and the data do not support separate BIE estimates.

The NTPS includes three key components: school questionnaires, principal questionnaires, and teacher questionnaires. NTPS data are collected by the U.S. Census Bureau through a mail questionnaire with telephone and in-person field follow-up. The school and principal questionnaires were sent to sampled schools, and the teacher questionnaire was sent to a sample of teachers working at sampled schools. The NTPS school sample consisted of about 8,300 public schools; the principal sample consisted of about 8,300 public school principals; and the teacher sample consisted of about 40,000 public school teachers.

The school questionnaire asks knowledgeable school staff members about grades offered, student attendance and enrollment, staffing patterns, teaching vacancies, programs and services offered, curriculum, and community service requirements. In addition, basic information is collected about the school year, including the beginning time of students’ school days and the length of the school year. The weighted unit response rate for the 2015–16 school survey was 72.5 percent.

The principal questionnaire collects information about principal/school head demographic characteristics, training, experience, salary, goals for the school, and judgments about school working conditions and climate. Information is also obtained on professional development opportunities for teachers and principals, teacher performance, barriers to dismissal of underperforming teachers, school climate and safety, parent/guardian participation in school events, and attitudes about educational goals and school governance. The weighted unit response rate for the 2015–16 principal survey was 71.8 percent.

The teacher questionnaire collects data from teachers about their current teaching assignment, workload, education history, and perceptions and attitudes about teaching. Questions are also asked about teacher preparation, induction, organization of classes, computers, and professional development. The weighted response rate for the 2015–16 teacher survey was 67.8 percent.

Further information about the NTPS is available in User’s Manual for the 2015–16 National Teacher and Principal Survey, Volumes 1–4 (NCES 2017-131 through NCES 2017-134).

For additional information about the NTPS program, please contact

Maura Spiegelman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/surveys/ntps

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989, the PSS has been conducted every 2 years, and data for the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, 2003–04, 2005–06, 2007–08, 2009–10, 2011–12, 2013–14, and 2015–16 school years have been released. The First Look report Characteristics of Private Schools in the United States: Results From the 2015–16 Private School Universe Survey (NCES 2017-073) presents selected findings from the 2015–16 PSS.

The PSS produces data similar to those of the Common Core of Data for public schools and can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home).

The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.

Of the 40,302 schools included in the 2009–10 sample, 10,229 were found ineligible for the survey. A total of 28,217 schools responded and 1,856 did not, resulting in an unweighted response rate of 93.8 percent for the 2009–10 PSS.

Of the 39,325 schools included in the 2011–12 sample, 10,030 cases were considered out of scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.

There were 40,298 schools in the 2013–14 sample; of these, 10,659 were considered out of scope (not eligible for the PSS). A total of 24,566 private schools completed a PSS interview (34.1 percent completed online), while 5,073 schools refused to participate, resulting in an unweighted response rate of 82.9 percent.

The 2015–16 PSS included 42,389 schools, of which 12,754 were considered out of scope (not eligible for the PSS). A total of 22,428 private schools completed a PSS interview and 7,207 schools failed to respond, which resulted in an unweighted response rate of 75.7 percent.
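The unweighted response rates reported above follow directly from these counts: eligible schools are those in the sample that were not found out of scope, and the rate is the share of eligible schools that completed an interview. A minimal sketch of that arithmetic, using the 2015–16 figures given in the text (the function name is illustrative):

```python
def unweighted_response_rate(sampled: int, out_of_scope: int, completed: int) -> float:
    """Unweighted response rate: completions as a percentage of in-scope (eligible) schools."""
    eligible = sampled - out_of_scope
    return 100 * completed / eligible

# 2015-16 PSS figures from the text: 42,389 sampled, 12,754 out of scope, 22,428 completed
rate = unweighted_response_rate(sampled=42_389, out_of_scope=12_754, completed=22_428)
print(round(rate, 1))  # 75.7
```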

Further information on the PSS may be obtained from

Steve Broughman
Cross-Sectional Surveys Branch
Sample Surveys Division
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/surveys/pss

Projections of Education Statistics

Since 1964, NCES has published projections of key statistics for elementary and secondary schools and higher education institutions. The latest report is Projections of Education Statistics to 2026 (NCES 2018-019). The Projections of Education Statistics series uses projection models for elementary and secondary enrollment, high school graduates, elementary and secondary teachers, expenditures for public elementary and secondary education, enrollment in postsecondary degree-granting institutions, and postsecondary degrees conferred to develop national and state projections. These models are described more fully in the report’s appendix on projection methodology.

Differences between the reported and projected values are, of course, almost inevitable. An evaluation of past projections revealed that, at the elementary and secondary level, projections of public school enrollments have been quite accurate: mean absolute percentage differences for enrollment in public schools ranged from 0.3 to 1.2 percent for projections from 1 to 5 years in the future, while those for teachers in public schools were 3.1 percent or less. At the higher education level, projections of enrollment have been fairly accurate: mean absolute percentage differences were 5.9 percent or less for projections from 1 to 5 years into the future.
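The accuracy measure cited here, the mean absolute percentage difference, compares each projected value with the value later reported and averages the absolute percentage differences. A minimal sketch of that calculation (the example enrollment figures are hypothetical, not taken from the report):

```python
def mean_abs_pct_diff(reported, projected):
    """Mean absolute percentage difference between reported and projected values,
    expressed relative to the reported (actual) values."""
    diffs = [abs(p - r) / r * 100 for r, p in zip(reported, projected)]
    return sum(diffs) / len(diffs)

# Hypothetical public school enrollment (millions): reported values vs. values
# projected several years earlier
reported  = [50.0, 50.3, 50.6]
projected = [49.8, 50.5, 50.1]
print(round(mean_abs_pct_diff(reported, projected), 2))
```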

Further information on Projections of Education Statistics may be obtained from

William Hussar
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]
https://nces.ed.gov/pubs2018/2018019.pdf

Other Department of Education Agencies

Office for Civil Rights

Civil Rights Data Collection

The U.S. Department of Education’s Office for Civil Rights (OCR) has surveyed the nation’s public elementary and secondary schools since 1968. The survey was first known as the OCR Elementary and Secondary School (E&S) Survey; in 2004, it was renamed the Civil Rights Data Collection (CRDC). The survey collects data on school discipline, access to and participation in high-level mathematics and science courses, teacher characteristics, school finances, and other school characteristics. These data are reported by race/ethnicity, sex, and disability.

Data in the survey are collected pursuant to 34 C.F.R. Section 100.6(b) of the Department of Education regulation implementing Title VI of the Civil Rights Act of 1964. The requirements are also incorporated by reference in Department regulations implementing Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, and the Age Discrimination Act of 1975. School, district, state, and national data are currently available. Data from individual public schools and districts are used to generate national and state data.

The CRDC has generally been conducted biennially in each of the 50 states plus the District of Columbia. The 2009–10 CRDC was collected from a sample of approximately 7,000 school districts and over 72,000 schools in those districts. It was made up of two parts: part 1 contained beginning-of-year “snapshot” data and part 2 contained cumulative, or end-of-year, data.

The 2011–12 CRDC survey, which collected data from approximately 16,500 school districts and 97,000 schools, was the first CRDC survey since 2000 that included data from every public school district and school in the nation. The 2013–14 CRDC survey also collected information from a universe of every public school district and school in the nation.

Further information on the Civil Rights Data Collection may be obtained from

Office for Civil Rights
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202
[email protected]
https://www.ed.gov/about/offices/list/ocr/data.html

Office of Special Education Programs

Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act

The Individuals with Disabilities Education Act (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education, and related services to more than 6.8 million eligible infants, toddlers, children, and youth with disabilities.

IDEA, formerly the Education of the Handicapped Act (EHA), requires the Secretary of Education to transmit, on an annual basis, a report to Congress describing the progress made in serving the nation’s children with disabilities. This annual report contains information on children served by public schools under the provisions of Part B of IDEA and on children served in state-operated programs for persons with disabilities under Chapter I of the Elementary and Secondary Education Act.

Statistics on children receiving special education and related services in various settings, and school personnel providing such services, are reported in an annual submission of data to the Office of Special Education Programs (OSEP) by the 50 states, the District of Columbia, the Bureau of Indian Education schools, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, the Federated States of Micronesia, the Republic of Palau, and the Republic of the Marshall Islands. The child count information is based on the number of children with disabilities receiving special education and related services on December 1 of each year. Count information is available from http://www.ideadata.org.

Since all participants in programs for persons with disabilities are reported to OSEP, the data are not subject to sampling error. However, nonsampling error can arise from a variety of sources. Some states produce counts of students receiving special education services by disability category only because Part B of the EHA requires it; in states that routinely produce such counts regardless of EHA requirements, definitions and labeling practices vary.

Further information on this annual report to Congress may be obtained from

Office of Special Education Programs
Office of Special Education and Rehabilitative Services
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202-7100
https://www.ed.gov/about/reports/annual/osep/index.html
https://sites.ed.gov/idea/
https://www.ideadata.org

Other Governmental Agencies and Programs

Centers for Disease Control and Prevention

Youth Risk Behavior Surveillance System

The Youth Risk Behavior Surveillance System (YRBSS) is an epidemiological surveillance system developed by the Centers for Disease Control and Prevention (CDC) to monitor the prevalence of youth behaviors that most influence health. The YRBSS focuses on priority health-risk behaviors established during youth that result in the most significant mortality, morbidity, disability, and social problems during both youth and adulthood. The YRBSS includes a national school-based Youth Risk Behavior Survey (YRBS), as well as surveys conducted in states and large urban school districts.

The national YRBS uses a three-stage cluster sampling design to produce a nationally representative sample of students in grades 9–12 in the United States. The target population consists of all public and private school students in grades 9–12 in the 50 states and the District of Columbia. In the first sampling stage, primary sampling units (PSUs) are selected from strata formed on the basis of urbanization and the relative percentage of Black and Hispanic students in the PSU. These PSUs are either counties; subareas of large counties; or groups of smaller, adjacent counties. In the second stage, schools are selected with probability proportional to school enrollment size.

The final stage of sampling consists of randomly selecting, in each chosen school and in each of grades 9–12, one or two classrooms from either a required subject, such as English or social studies, or a required period, such as homeroom or second period. All students in selected classes are eligible to participate. In surveys conducted before 2013, three strategies were used to oversample Black and Hispanic students: (1) larger sampling rates were used to select PSUs that are in high-Black and high-Hispanic strata; (2) a modified measure of size was used that increased the probability of selecting schools with a disproportionately high minority enrollment; and (3) two classes per grade, rather than one, were selected in schools with a high percentage of combined Black, Hispanic, Asian/Pacific Islander, or American Indian/Alaska Native enrollment. In 2013, selecting two classes per grade was by itself sufficient to achieve adequate precision with minimum variance. Approximately 16,300 students participated in the 1993 survey; 10,900 in 1995; 16,300 in 1997; 15,300 in 1999; 13,600 in 2001; 15,200 in 2003; 13,900 in 2005; 14,000 in 2007; 16,400 in 2009; 15,400 in 2011; 13,600 in 2013; and 15,600 in 2015.

The overall response rate was 70 percent for the 1993 survey, 60 percent for the 1995 survey, 69 percent for the 1997 survey, 66 percent in 1999, 63 percent in 2001, 67 percent in 2003, 67 percent in 2005, 68 percent in 2007, 71 percent in 2009, 71 percent in 2011, 68 percent in 2013, and 60 percent in 2015. NCES standards call for response rates of 85 percent or greater for cross-sectional surveys, and bias analyses are required by NCES when that percentage is not achieved. For YRBS data, a full nonresponse bias analysis has not been done because the data necessary to do the analysis are not available. The weights were developed to adjust for nonresponse and the oversampling of Black and Hispanic students in the sample. The final weights were constructed so that only weighted proportions of students (not weighted counts of students) in each grade matched national population projections.

State-level data were downloaded from the Youth Online: Comprehensive Results web page (https://nccd.cdc.gov/Youthonline/App/Default.aspx). Each state and district school-based YRBS employs a two-stage, cluster sample design to produce representative samples of students in grades 9–12 in their jurisdiction. All except a few state samples, and all district samples, include only public schools, and each district sample includes only schools in the funded school district (e.g., San Diego Unified School District) rather than in the entire city (e.g., greater San Diego area).

In the first sampling stage in all except a few states and districts, schools are selected with probability proportional to school enrollment size. In the second sampling stage, intact classes of a required subject or intact classes during a required period (e.g., second period) are selected randomly. All students in sampled classes are eligible to participate. Certain states and districts modify these procedures to meet their individual needs. For example, in a given state or district, all schools, rather than a sample of schools, might be selected to participate. State and local surveys that have a scientifically selected sample, appropriate documentation, and an overall response rate greater than or equal to 60 percent are weighted. The overall response rate reflects the school response rate multiplied by the student response rate. These three criteria are used to ensure that the data from those surveys can be considered representative of students in grades 9–12 in that jurisdiction. A weight is applied to each record to adjust for student nonresponse and the distribution of students by grade, sex, and race/ethnicity in each jurisdiction. Therefore, weighted estimates are representative of all students in grades 9–12 attending schools in each jurisdiction. Surveys that do not have an overall response rate of greater than or equal to 60 percent and that do not have appropriate documentation are not weighted and are not included in this report.
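The weighting rule described above amounts to a simple check: the overall response rate is the product of the school and student response rates, and a state or district survey is weighted only if it also has a scientifically selected sample and appropriate documentation. A sketch of that rule (the function and parameter names are illustrative, not from the YRBSS documentation):

```python
def is_weighted(scientific_sample: bool, documented: bool,
                school_rate: float, student_rate: float) -> bool:
    """Apply the three YRBS weighting criteria: a scientifically selected sample,
    appropriate documentation, and an overall response rate (school rate
    multiplied by student rate) of at least 60 percent."""
    overall = school_rate * student_rate
    return scientific_sample and documented and overall >= 0.60

print(is_weighted(True, True, school_rate=0.90, student_rate=0.80))  # True  (overall 72%)
print(is_weighted(True, True, school_rate=0.70, student_rate=0.80))  # False (overall 56%)
```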

For the 2015 YRBS, data from 37 states and 19 large urban districts were weighted. (For information on the location of the districts, please see https://www.cdc.gov/healthyyouth/data/yrbs/participation.htm.) In 36 states and all large urban school districts, weighted estimates are representative of all students in grades 9–12 attending public schools in each jurisdiction. In one state (South Dakota), weighted estimates are representative of all students in grades 9–12 attending public and private schools. Student sample sizes ranged from 1,313 to 55,596 across the states and from 1,052 to 10,419 across the large urban school districts. Among the states, school response rates ranged from 70 percent to 100 percent, student response rates ranged from 64 percent to 90 percent, and overall response rates ranged from 60 percent to 84 percent. Among the large urban school districts, school response rates ranged from 90 percent to 100 percent, student response rates ranged from 66 percent to 88 percent, and overall response rates ranged from 64 percent to 88 percent.

In 2013, a total of 42 states and 21 districts had weighted data. Not all of the districts were contained in the 42 states. For example, California was not one of the 42 states that obtained weighted data, but it contained several districts that did. In sites with weighted data, the student sample sizes for the state and district YRBS ranged from 1,107 to 53,785. School response rates ranged from 70 to 100 percent, student response rates ranged from 60 to 94 percent, and overall response rates ranged from 60 to 87 percent.

Readers should note that reports of these data published by the CDC and in this report do not include percentages for which the denominator includes fewer than 100 unweighted cases.

In 1999, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the YRBS item on race/ethnicity was modified. The version of the race and ethnicity question used in 1993, 1995, and 1997 was

How do you describe yourself?

  1. White—not Hispanic
  2. Black—not Hispanic
  3. Hispanic or Latino
  4. Asian or Pacific Islander
  5. American Indian or Alaskan Native
  6. Other

The version used in 1999, 2001, and 2003, and in the 2005, 2007, and 2009 state and local district surveys, was

How do you describe yourself? (Select one or more responses.)

  1. American Indian or Alaska Native
  2. Asian
  3. Black or African American
  4. Hispanic or Latino
  5. Native Hawaiian or Other Pacific Islander
  6. White

In the 2005 national survey and in all 2007, 2009, 2011, 2013, and 2015 surveys, race/ethnicity was computed from two questions: (1) “Are you Hispanic or Latino?” (response options were “Yes” and “No”), and (2) “What is your race?” (response options were “American Indian or Alaska Native,” “Asian,” “Black or African American,” “Native Hawaiian or Other Pacific Islander,” or “White”). For the second question, students could select more than one response option. For this report, students were classified as “Hispanic” if they answered “Yes” to the first question, regardless of how they answered the second question. Students who answered “No” to the first question and selected more than one race/ethnicity in the second category were classified as “More than one race.” Students who answered “No” to the first question and selected only one race/ethnicity were classified as that race/ethnicity. Race/ethnicity was classified as missing for students who did not answer the first question and for students who answered “No” to the first question but did not answer the second question.
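The two-question classification described above is essentially a small decision rule. A sketch of that rule (the string labels mirror the text; `None` marks an unanswered question):

```python
def classify_race_ethnicity(hispanic, races):
    """Classify a student from the two YRBS items, following the rules in the text.

    hispanic: "Yes", "No", or None (first question not answered)
    races: list of race categories selected, or None (second question not answered)
    """
    if hispanic == "Yes":
        return "Hispanic"              # regardless of the answer to the race question
    if hispanic == "No" and races:
        if len(races) > 1:
            return "More than one race"
        return races[0]                # the single race/ethnicity selected
    return None                        # missing: first item blank, or "No" with no races

print(classify_race_ethnicity("Yes", ["Asian", "White"]))                     # Hispanic
print(classify_race_ethnicity("No", ["Black or African American", "White"]))  # More than one race
print(classify_race_ethnicity("No", ["Asian"]))                               # Asian
print(classify_race_ethnicity(None, ["White"]))                               # None (missing)
```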

CDC has conducted two studies to understand the effect of changing the race/ethnicity item on the YRBS. Brener, Kann, and McManus (Public Opinion Quarterly, 67:227–236, 2003) found that allowing students to select more than one response to a single race/ethnicity question on the YRBS had only a minimal effect on reported race/ethnicity among high school students. Eaton, Brener, Kann, and Pittman (Journal of Adolescent Health, 41:488–494, 2007) found that self-reported race/ethnicity was similar regardless of whether the single-question or the two-question format was used.

Further information on the YRBSS may be obtained from

Laura Kann
Division of Adolescent and School Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
Centers for Disease Control and Prevention
Mailstop E-75
1600 Clifton Road NE
Atlanta, GA 30329-4027
(404) 718-8132
[email protected]
www.cdc.gov/info
http://www.cdc.gov/yrbs

Census Bureau

American Community Survey

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the long form of the decennial census, which was used through the 2000 census. Aggregated over time, these data serve as a replacement for the decennial census long form. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2011, the survey has been mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. Addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) are sampled at a higher rate. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian, noninstitutionalized population; incarcerated persons; institutionalized persons; and the active duty military who are in the United States. In 2006, the ACS began interviewing residents in group quarters facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers’ and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS produces 1-year estimates for jurisdictions with populations of 65,000 and over and 5-year estimates for jurisdictions with smaller populations. The 1-year estimates for 2016 used data collected between January 1, 2016, and December 31, 2016, and the 5-year estimates for 2012–2016 used data collected between January 1, 2012, and December 31, 2016. The ACS produced 3-year estimates (for jurisdictions with populations of 20,000 or over) for the periods 2005–2007, 2006–2008, 2007–2009, 2008–2010, 2009–2011, 2010–2012, and 2011–2013. Three-year estimates for these periods will continue to be available to data users, but no further 3-year estimates will be produced.
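Because the 3-year product has been discontinued, the population thresholds above reduce to a single cutoff that determines which estimates a jurisdiction receives. A sketch of that rule (the product labels are illustrative):

```python
def acs_products(population: int) -> list:
    """ACS estimates currently produced for a jurisdiction of a given population:
    5-year estimates for all jurisdictions, plus 1-year estimates for
    jurisdictions with populations of 65,000 and over."""
    products = ["5-year"]
    if population >= 65_000:
        products.append("1-year")
    return products

print(acs_products(700_000))  # ['5-year', '1-year']
print(acs_products(40_000))   # ['5-year']
```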

Further information about the ACS is available at http://www.census.gov/acs/www/.

Census of Population—Education in the United States

Some NCES tables are based on a part of the decennial census that consisted of questions asked of a 1 in 6 sample of people and housing units in the United States. This sample was asked more detailed questions about income, occupation, and housing costs, as well as questions about general demographic information. This decennial census “long form” has been discontinued and has been replaced by the American Community Survey (ACS).

School enrollment. People classified as enrolled in school reported attending a “regular” public or private school or college. They were asked whether the institution they attended was public or private and what level of school they were enrolled in.

Educational attainment. Data for educational attainment were tabulated for people ages 15 and over and classified according to the highest grade completed or the highest degree received. Instructions were also given to include the level of the previous grade attended or the highest degree received for people currently enrolled in school.

Poverty status. To determine poverty status, answers to income questions were used to make comparisons to the appropriate poverty threshold. All people except those who were institutionalized, people in military group quarters and college dormitories, and unrelated people under age 15 were considered. If the total income of each family or unrelated individual in the sample was below the corresponding cutoff, that family or individual was classified as “below the poverty level.”
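In outline, the poverty classification compares each family's (or unrelated individual's) total income with the poverty threshold for a unit of its size and composition. A minimal sketch of that comparison (the threshold table here is hypothetical; actual thresholds vary by family size, number of children, and year):

```python
# Hypothetical poverty thresholds by family size (real values differ and vary by
# year and family composition)
THRESHOLDS = {1: 12_000, 2: 15_000, 3: 19_000, 4: 24_000}

def below_poverty(family_income: float, family_size: int) -> bool:
    """Classify a family as below the poverty level if its total income falls
    below the threshold (cutoff) for a family of its size."""
    return family_income < THRESHOLDS[family_size]

print(below_poverty(18_000, family_size=3))  # True  (18,000 < 19,000)
print(below_poverty(30_000, family_size=4))  # False (30,000 >= 24,000)
```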

Further information on the 1990 and 2000 Census of Population may be obtained from

Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/main/www/cen1990.html
http://www.census.gov/main/www/cen2000.html

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 54,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics on the U.S. population. In addition, supplemental questionnaires are used to provide further information about the U.S. population. The March supplement (also known as the Annual Social and Economic [ASEC] supplement) contains detailed questions on topics such as income, employment, and educational attainment; additional questions, such as items on disabilities, have also been included. In the July supplement, items on computer and internet use are the principal focus. The October supplement also contains some questions about computer and internet use, but most of its questions relate to school enrollment and school characteristics.

CPS samples are initially selected based on results from the decennial census and are periodically updated to reflect new housing construction. The current sample design for the main CPS, last revised in July 2015, includes about 74,000 households. Each month, about 54,000 of the 74,000 households are interviewed. Information is obtained each month from those in the household who are 15 years of age and over, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members age 3 and over in the October CPS supplement.

In January 1992, the CPS educational attainment variable was changed. The “Highest grade attended” and “Year completed” questions were replaced by the question “What is the highest level of school . . . has completed or the highest degree . . . has received?” Thus, for example, while the old questions elicited data for those who completed more than 4 years of high school, the new question elicited data for those who were high school completers, i.e., those who graduated from high school with a diploma as well as those who completed high school through equivalency programs, such as a GED program.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995: (School Enrollment Supplement) Technical Documentation at http://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Caution should be used when comparing data from 1994 through 2001 with data from 1993 and earlier. Data from 1994 through 2001 reflect 1990 census-based population controls, while data from 1993 and earlier reflect 1980 or earlier census-based population controls. Changes in population controls generally have relatively little impact on summary measures such as means, medians, and percentage distributions; they can, however, have a significant impact on population counts. For example, use of the 1990 census-based population controls resulted in about a 1 percent increase in the civilian noninstitutional population and in the number of families and households. Thus, estimates of levels for data collected in 1994 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Beginning in 2003, the race/ethnicity questions were expanded. Information on people of two or more races was included, and the Asian and Pacific Islander race category was split into two categories—Asian and Native Hawaiian or Other Pacific Islander. In addition, questions were reworded to make clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the respondent identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports; methods for deriving standard errors can be found within the CPS technical documentation at http://www.census.gov/programs-surveys/cps/technical-documentation/complete.html. The CPS data are subject to both nonsampling and sampling errors.
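The inflation step described above can be read as a ratio adjustment: within each age, sex, and race cell, sample weights are scaled so that they sum to the independent population estimate for that cell. A minimal sketch of that idea (the data, cell labels, and function name are hypothetical; the actual CPS procedure is more elaborate):

```python
from collections import defaultdict

def ratio_adjust(records, controls):
    """Scale each record's weight so that weighted totals match the independent
    population controls within each cell. records: list of (cell, weight) pairs;
    controls: dict mapping cell -> independent population estimate."""
    cell_totals = defaultdict(float)
    for cell, weight in records:
        cell_totals[cell] += weight
    return [(cell, weight * controls[cell] / cell_totals[cell])
            for cell, weight in records]

# Hypothetical sample: one cell's weighted total undershoots its control, the
# other's overshoots
sample = [("men 25-34", 100.0), ("men 25-34", 100.0), ("women 25-34", 150.0)]
controls = {"men 25-34": 300.0, "women 25-34": 120.0}
adjusted = ratio_adjust(sample, controls)
print(adjusted)  # weights become 150.0, 150.0, and 120.0
```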

Standard errors were estimated using the generalized variance function prior to 2005 for March CPS data and prior to 2010 for October CPS data. The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate. Standard errors were estimated using replicate weight methodology beginning in 2005 for March CPS data and beginning in 2010 for October CPS data. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at http://thedataweb.rm.census.gov/pub/cps/supps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.

Further information on the CPS may be obtained from

Education and Social Stratification Branch
Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/cps

Dropouts

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over as part of the monthly basic survey on labor force participation. In addition to gathering information on school enrollment, with the limitations on accuracy as noted below under “School Enrollment,” the survey data permit calculations of dropout rates. Both status and event dropout rates are tabulated from the October CPS. Event rates describe the proportion of students who leave school each year without completing a high school program. Status rates provide cumulative data on dropouts among all young adults within a specified age range. Status rates are higher than event rates because they include all dropouts ages 16 through 24, regardless of when they last attended school.

In addition to other survey limitations, dropout rates may be affected by survey coverage and exclusion of the institutionalized population. The incarcerated population has grown rapidly and has a high dropout rate. Dropout rates for the total population might be higher than those for the noninstitutionalized population if the prison and jail populations were included in the dropout rate calculations. On the other hand, if military personnel, who tend to be high school graduates, were included, it might offset some or all of the impact from the theoretical inclusion of the jail and prison populations.

Another area of concern with tabulations involving young people in household surveys is their relatively low coverage ratio compared to older age groups. CPS undercoverage results from missed housing units and missed people within sample households. Overall CPS undercoverage for October 2016 is estimated to be about 11 percent. CPS coverage varies with age, sex, and race. Generally, coverage is higher for females than for males and higher for non-Blacks than for Blacks. This differential coverage is a general problem for most household-based surveys. Further information on CPS methodology may be found in the technical documentation at http://www.census.gov/cps.

Further information on the calculation of dropouts and dropout rates may be obtained from the Trends in High School Dropout and Completion Rates in the United States report at https://nces.ed.gov/programs/dropout/index.asp or by contacting

Joel McFarland
Annual Reports and Information Staff
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
[email protected]

Educational Attainment

Reports documenting educational attainment are produced by the Census Bureau using the March Current Population Survey (CPS) supplement (Annual Social and Economic supplement [ASEC]).

Currently, the ASEC supplement consists of approximately 70,000 interviewed households. Both recent and earlier editions of Educational Attainment in the United States may be downloaded at https://www.census.gov/topics/education/educational-attainment/data/tables.All.html.

In addition to the general constraints of the CPS, some data indicate that respondents tend to overestimate the educational attainment of members of their household. Some of this inaccuracy is due to respondents not knowing the exact educational attainment of each household member, and some to a hesitancy to report anything less than a high school education.

Further information on educational attainment data from CPS may be obtained from

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/topics/education/educational-attainment/data.html

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population age 3 years and over. Currently, the October supplement consists of approximately 54,000 interviewed households, the same households interviewed in the basic Current Population Survey. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents’ interpretations of “educational experiences” vary.

For the October 2016 basic CPS, the household-level nonresponse rate was 12.7 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 8.0 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have fewer persons than interviewed ones, so combining these rates may lead to an overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.
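To illustrate the caution above numerically, the naive calculation below combines the two October 2016 rates quoted in the text as if they applied to the same units; this is exactly the step the paragraph warns against, and it tends to overstate the true person-level nonresponse rate.

```python
# Naive combination of the October 2016 rates quoted in the text.
# Because the 12.7 percent figure is household level and the 8.0
# percent figure is person level, and nonresponding households may
# contain fewer persons than interviewed ones, this product-based
# figure is only an upper-bound illustration, not a valid overall rate.
household_response = 1 - 0.127  # basic CPS, household level
person_response = 1 - 0.080     # enrollment supplement, person level
naive_nonresponse = 1 - household_response * person_response
print(f"naive overall nonresponse: {naive_nonresponse:.1%}")  # 19.7%
```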

Although the principal focus of the October supplement is school enrollment, in some years the supplement has included additional questions on other topics. In 2010 and 2012, for example, the October supplement included additional questions on computer and internet use.

Further information on CPS methodology may be obtained from http://www.census.gov/cps.

Further information on the CPS School Enrollment Supplement may be obtained from

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
https://www.census.gov/topics/education/school-enrollment.html

Decennial Census, Population Estimates, and Population Projections

The decennial census is a universe survey mandated by the U.S. Constitution. Its questionnaire is sent to every household in the country and is composed of seven questions about the household and its members (name, sex, age, relationship, Hispanic origin, race, and whether the housing unit is owned or rented). The Census Bureau also produces annual estimates of the resident population by demographic characteristics (age, sex, race, and Hispanic origin) for the nation, states, and counties, as well as national and state projections for the resident population. The reference date for population estimates is July 1 of the given year. With each new issue of July 1 estimates, the Census Bureau revises estimates for each year back to the last census. Previously published estimates are superseded and archived.

Census respondents self-report race and ethnicity. The race questions on the 1990 and 2000 censuses differed in some significant ways. In 1990, the respondent was instructed to select the one race “that the respondent considers himself/herself to be,” whereas in 2000, the respondent could select one or more races that the person considered himself or herself to be. American Indian, Eskimo, and Aleut were three separate race categories in 1990; in 2000, the American Indian and Alaska Native categories were combined, with an option to write in a tribal affiliation. This write-in option was provided only for the American Indian category in 1990. There was a combined Asian and Pacific Islander race category in 1990, but the groups were separated into two categories in 2000.

The census question on ethnicity asks whether the respondent is of Hispanic origin, regardless of the race option(s) selected; thus, persons of Hispanic origin may be of any race. In the 2000 census, respondents were first asked, “Is this person Spanish/Hispanic/Latino?” and then given the following options: No, not Spanish/Hispanic/Latino; Yes, Puerto Rican; Yes, Mexican, Mexican American, Chicano; Yes, Cuban; and Yes, other Spanish/Hispanic/Latino (with space to print the specific group). In the 2010 census, respondents were asked “Is this person of Hispanic, Latino, or Spanish origin?” The options given were No, not of Hispanic, Latino, or Spanish origin; Yes, Mexican, Mexican Am., Chicano; Yes, Puerto Rican; Yes, Cuban; and Yes, another Hispanic, Latino, or Spanish origin—along with instructions to print “Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on” in a specific box.

The 2000 and 2010 censuses each asked the respondent “What is this person’s race?” and allowed the respondent to select one or more options. The options provided were largely the same in both the 2000 and 2010 censuses: White; Black, African American, or Negro; American Indian or Alaska Native (with space to print the name of enrolled or principal tribe); Asian Indian; Japanese; Native Hawaiian; Chinese; Korean; Guamanian or Chamorro; Filipino; Vietnamese; Samoan; Other Asian; Other Pacific Islander; and Some other race. The last three options included space to print the specific race. Two significant differences between the 2000 and 2010 census questions on race were that no race examples were provided for the “Other Asian” and “Other Pacific Islander” responses in 2000, whereas the race examples of “Hmong, Laotian, Thai, Pakistani, Cambodian, and so on” and “Fijian, Tongan, and so on,” were provided for the “Other Asian” and “Other Pacific Islander” responses, respectively, in 2010.

The census population estimates program modified the enumerated population from the 2010 census to produce the population estimates base for 2010 and onward. As part of the modification, the Census Bureau recoded the “Some other race” responses from the 2010 census to one or more of the five OMB race categories used in the estimates program (for more information, see http://www.census.gov/programs-surveys/popest/technical-documentation/methodology.html). Further information on the decennial census may be obtained from http://www.census.gov.

Department of Justice

Bureau of Justice Statistics

A division of the U.S. Department of Justice Office of Justice Programs, the Bureau of Justice Statistics (BJS) collects, analyzes, publishes, and disseminates statistical information on crime, criminal offenders, victims of crime, and the operations of the justice system at all levels of government and internationally. It also provides technical and financial support to state governments for development of criminal justice statistics and information systems on crime and justice.

For information on the BJS, see https://www.bjs.gov/.

National Crime Victimization Survey

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics (BJS) by the U.S. Census Bureau, is the nation’s primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992 and 2016, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and American households each year. The survey measures both crimes reported to the police and crimes not reported to the police.

NCVS estimates presented may differ from those in previous published reports. This is because a small number of victimizations, referred to as series victimizations, are included using a new counting strategy. High-frequency repeat victimizations, or series victimizations, are six or more similar but separate victimizations that occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts associated with the redesign of the NCVS, BJS investigated ways to include high-frequency repeat victimizations, or series victimizations, in estimates of criminal victimization. Including series victimizations results in more accurate estimates of victimization. BJS has decided to include series victimizations using the victim’s estimates of the number of times the victimizations occurred over the past 6 months, capping the number of victimizations within each series at a maximum of 10. This strategy for counting series victimizations balances the desire to estimate national rates and account for the experiences of persons who have been subjected to repeat victimizations against the desire to minimize the estimation errors that can occur when repeat victimizations are reported. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey at https://www.bjs.gov/content/pub/pdf/mchfrv.pdf.
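The counting rule described above can be sketched in a few lines. This is a minimal illustration of the capping logic only, not BJS production code:

```python
def count_series_victimizations(victim_estimate, cap=10):
    """Count a series victimization using the victim's estimate of how
    many times it occurred over the past 6 months, capping the count
    within each series at a maximum of 10, per the BJS counting
    strategy described above."""
    return min(victim_estimate, cap)

print(count_series_victimizations(7))   # within the cap: counted as 7
print(count_series_victimizations(25))  # capped at the maximum of 10
```

The cap bounds the estimation error that a single high-frequency victim could otherwise introduce into national rates.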

Readers should note that in 2003, in accordance with changes to the Office of Management and Budget’s standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is now followed by a new question on race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. An analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new race question had very little impact on the aggregate racial distribution of the NCVS respondents, with one exception: There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

There were changes in the sample design and survey methodology in the 2006 NCVS that may have affected survey estimates. Caution should be used when comparing the 2006 estimates to estimates of other years. Data from 2007 onward are comparable to earlier years. Analyses of the 2007 estimates indicate that the program changes made in 2006 had relatively small effects on NCVS estimates. For more information on the 2006 NCVS data, see Criminal Victimization, 2006, at https://www.bjs.gov/content/pub/pdf/cv06.pdf; the NCVS 2006 technical notes, at https://www.bjs.gov/content/pub/pdf/cv06tn.pdf; and Criminal Victimization, 2007, at https://bjs.gov/content/pub/pdf/cv07.pdf.

The NCVS sample was redesigned in 2016 in order to account for changes in the U.S. population identified through the 2010 Decennial Census and to make it possible to produce state- and local-level victimization estimates for the largest 22 states and specific metropolitan areas within those states. Because of this redesign, 2016 victimization data are not comparable to data from 2015 and prior years. For more information on the 2016 NCVS data, see Criminal Victimization, 2016, at https://www.bjs.gov/content/pub/pdf/cv16.pdf, and the technical notes, at https://www.bjs.gov/content/pub/pdf/ncvstd16.pdf.

The number of NCVS-eligible households in the sample in 2016 was about 134,690. Households were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interview. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial census. Within each sampled household, the U.S. Census Bureau interviewer attempts to interview all household members age 12 and over to determine whether they have been victimized by the measured crimes during the 6 months preceding the interview.

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. Since the survey’s inception, the initial interview at each sample unit had been used only to bound later interviews, establishing a time frame that avoids duplicate counting of crimes reported in those subsequent interviews. Beginning in 2006, however, data from the initial interview have been adjusted to account for the effects of bounding and have been included in the survey estimates. After a household has been interviewed its seventh time, it is replaced by a new sample household. In 2016, the household response rate was about 78 percent and the completion rate for persons within households was about 84 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older.

Further information on the NCVS may be obtained from

Rachel E. Morgan
Victimization Statistics Branch
Bureau of Justice Statistics
[email protected]
http://www.bjs.gov/

School Crime Supplement

Created as a supplement to the NCVS and co-designed by the National Center for Education Statistics and Bureau of Justice Statistics, the School Crime Supplement (SCS) survey was conducted in 1989 and 1995 and has been conducted biennially since 1999 to collect additional information about school-related victimizations on a national level. The SCS was designed to assist policymakers, as well as academic researchers and practitioners at federal, state, and local levels, in making informed decisions concerning crime in schools. The survey asks students a number of key questions about their experiences with and perceptions of crime and violence that occurred inside their school, on school grounds, on the school bus, or on the way to or from school. Students are asked additional questions about security measures used by their school, students’ participation in after-school activities, students’ perceptions of school rules, the presence of weapons and gangs in school, the presence of hate-related words and graffiti in school, student reports of bullying and reports of rejection at school, and the availability of drugs and alcohol in school. Students are also asked attitudinal questions relating to fear of victimization and avoidance behavior at school.

The SCS survey was conducted for a 6-month period from January through June in all households selected for the NCVS (see discussion above for information about the NCVS sampling design and changes to the race/ethnicity variable beginning in 2003). Within these households, the eligible respondents for the SCS were those household members who had attended school at any time during the 6 months preceding the interview, were enrolled in grades 6–12, and were not home schooled. In 2007, the questionnaire was changed and household members who attended school sometime during the school year of the interview were included. The age range of students covered in this report is 12–18 years of age. Eligible respondents were asked the supplemental questions in the SCS only after completing their entire NCVS interview. It should be noted that the first or unbounded NCVS interview has always been included in analysis of the SCS data and may result in the reporting of events outside of the requested reference period.

The prevalence of victimization for 1995, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, and 2015 was calculated by using NCVS incident variables appended to the SCS data files of the same year. The NCVS type of crime variable was used to classify victimizations of students in the SCS as serious violent, violent, or theft. The NCVS variables asking where the incident happened (at school) and what the victim was doing when it happened (attending school or on the way to or from school) were used to ascertain whether the incident happened at school. Only incidents that occurred inside the United States are included.

In 2001, the SCS survey instrument was modified from previous collections. First, in 1995 and 1999, “at school” was defined for respondents as in the school building, on the school grounds, or on a school bus. In 2001, the definition for “at school” was changed to mean in the school building, on school property, on a school bus, or going to and from school. This change was made to the 2001 questionnaire in order to be consistent with the definition of “at school” as it is constructed in the NCVS and was also used as the definition in subsequent SCS collections. Cognitive interviews conducted by the U.S. Census Bureau on the 1999 SCS suggested that modifications to the definition of “at school” would not have a substantial impact on the estimates.

The numbers of students participating in the SCS were approximately 6,300 in 2005; 6,500 in 2007; 5,000 in 2009; 6,500 in 2011; 5,700 in 2013; and 4,700 in 2015.

In the 2005, 2007, 2009, 2011, 2013, and 2015 SCS, the household completion rates were 91 percent, 90 percent, 92 percent, 91 percent, 86 percent, and 83 percent, respectively, and the student completion rates were 62 percent, 58 percent, 56 percent, 63 percent, 60 percent, and 58 percent, respectively. The overall SCS unit response rates (calculated by multiplying the household completion rate by the student completion rate) were about 56 percent in 2005, 53 percent in 2007, 51 percent in 2009, 57 percent in 2011, 51 percent in 2013, and 48 percent in 2015. (Starting in 2011, overall SCS unit response rates are weighted.)
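The overall unit response rate described above is a simple product of the two completion rates. As a minimal sketch using the 2005 figures from the text (note that, as stated above, rates starting in 2011 are weighted, so the plain product applies to the earlier years):

```python
def overall_unit_response_rate(household_rate, student_rate):
    """Overall SCS unit response rate: the product of the household
    completion rate and the student completion rate (both expressed
    as proportions), as described in the text."""
    return household_rate * student_rate

# 2005: 91 percent household completion, 62 percent student completion
print(f"{overall_unit_response_rate(0.91, 0.62):.0%}")  # about 56 percent
```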

There are two types of nonresponse: unit and item nonresponse. NCES requires that any stage of data collection within a survey that has a unit base-weighted response rate of less than 85 percent be evaluated for the potential magnitude of unit nonresponse bias before the data or any analysis using the data may be released (NCES Statistical Standards, 2002, at https://nces.ed.gov/statprog/2002/std4_4.asp). Due to the low unit response rate in 2005, 2007, 2009, 2011, 2013, and 2015, a unit nonresponse bias analysis was done. Unit response rates indicate how many sampled units have completed interviews. Because interviews with students could only be completed after households had responded to the NCVS, the unit completion rate for the SCS reflects both the household interview completion rate and the student interview completion rate. Nonresponse can greatly affect the strength and application of survey data by leading to an increase in variance as a result of a reduction in the actual size of the sample and can produce bias if the nonrespondents have characteristics of interest that are different from the respondents.

In order for response bias to occur, respondents must have different response rates and responses to particular survey variables. The magnitude of unit nonresponse bias is determined by the response rate and the differences between respondents and nonrespondents on key survey variables. Although the bias analysis cannot measure response bias since the SCS is a sample survey and it is not known how the population would have responded, the SCS sampling frame has four key student or school characteristic variables for which data are known for respondents and nonrespondents—sex, race/ethnicity, household income, and urbanicity—all of which are associated with student victimization. To the extent that there are differential responses by respondents in these groups, nonresponse bias is a concern.

In 2005, the analysis of unit nonresponse bias found evidence of bias for the race, household income, and urbanicity variables. White (non-Hispanic) and Other (non-Hispanic) respondents had higher response rates than Black (non-Hispanic) and Hispanic respondents. Respondents from households with an income of $35,000–$49,999 and $50,000 or more had higher response rates than those from households with incomes of less than $7,500, $7,500–$14,999, $15,000–$24,999, and $25,000–$34,999. Respondents who live in urban areas had lower response rates than those who live in rural or suburban areas. Although the extent of nonresponse bias cannot be determined, weighting adjustments, which corrected for differential response rates, should have reduced the problem.

In 2007, the analysis of unit nonresponse bias found evidence of bias by the race/ethnicity and household income variables. Hispanic respondents had lower response rates than other races/ethnicities. Respondents from households with an income of $25,000 or more had higher response rates than those from households with incomes of less than $25,000. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2009, the analysis of unit nonresponse bias found evidence of potential bias for the race/ethnicity and urbanicity variables. White students and students of other races/ethnicities had higher response rates than did Black and Hispanic respondents. Respondents from households located in rural areas had higher response rates than those from households located in urban areas. However, when responding students are compared to the eligible NCVS sample, there were no measurable differences between the responding students and the eligible students, suggesting that the nonresponse bias has little impact on the overall estimates.

In 2011, the analysis of unit nonresponse bias found evidence of potential bias for the age variable. Respondents 12 to 17 years old had higher response rates than did 18-year-old respondents in the NCVS and SCS interviews. Weighting the data adjusts for unequal selection probabilities and for the effects of nonresponse. The weighting adjustments that correct for differential response rates are created by region, age, race, and sex, and should have reduced the effect of nonresponse.

In 2013, the analysis of unit nonresponse bias found evidence of potential bias for the age variable in the SCS respondent sample. Students age 14 and those from the western region showed percentage bias exceeding 5 percent; however, both subgroups had the highest response rates in their respective categories. All other subgroups evaluated showed less than 1 percent nonresponse bias, with differences between the responding and eligible populations ranging from 0.3 to 2.6 percent.

In the 2015 SCS, evidence of potential nonresponse bias was found in the race, urbanicity, region, and age subgroups. In addition, respondents in the age 14 and rural subgroups had significantly higher nonresponse bias estimates than other age and urbanicity subgroups, while Asian respondents and respondents from the Northeast had significantly lower nonresponse bias estimates than other race and region subgroups. Thus, the analysis indicates that there are significant nonresponse biases in the 2015 SCS data and that caution should be used when comparing responses among subgroups in the SCS.

For most survey items in most years of the SCS survey, however, response rates have been high—typically over 97 percent of all eligible respondents, meaning there is little potential for item nonresponse bias for most items in the survey. Weights have been developed to compensate for differential probabilities of selection and nonresponse. The weighted data permit inferences about the eligible student population who were enrolled in schools in all SCS data years.

Further information about the SCS may be obtained from

Rachel Hansen
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
550 12th Street SW
Washington, DC 20202
(202) 245-7082
[email protected]
https://nces.ed.gov/programs/crime