Assessment Instrument Table: STAR Reading™

Element: Instrument Name
Description: Name of specific instrument (more than vendor name).
Assessment Instrument Information: STAR Reading™

Element: Vendor
Description: Name of the company or organization that produces the instrument.
Assessment Instrument Information: Renaissance Learning™, Inc.

Element: Purpose (Intended Use)
Description: The described purpose and appropriate uses of the instrument.
Assessment Instrument Information: STAR Reading is a student-based, computer-adaptive assessment for measuring student achievement in reading. STAR fulfills a variety of assessment purposes, including interim assessment, screening, standards benchmarking, skills-based reporting and instructional planning, and progress monitoring. STAR Reading is by far the most widely used reading assessment in K–12 schools. Educators get valid, reliable, actionable data in about 15 minutes.

Element: Population
Description: Who (which students) could be assessed using the instrument.
Assessment Instrument Information: Independent readers in grades 1 through 12.

Element: When? How frequently?
Description: How frequently the instrument can be administered in a school year, and recommended or required administration windows.
Assessment Instrument Information: As an interim assessment, STAR was designed for frequent administration. Educators can administer STAR three times per year, in fall, winter, and spring. Educators may also administer STAR as a progress monitoring assessment as often as weekly. If a school wants to see a trend line that estimates proficiency on state tests, it administers an additional STAR Reading and/or STAR Math test in late fall.

Element: Content Area(s)
Description: Content area or areas being assessed.
Assessment Instrument Information: STAR Reading is a K–12 assessment that focuses on measuring student performance with skills in five domains:
• Word Knowledge and Skills
• Comprehension Strategies and Constructing Meaning
• Understanding Author’s Craft
• Analyzing Literary Text
• Analyzing Argument and Evaluating Text

Element: Learning Objectives
Description: Specific learning objectives being assessed, at as detailed a level as is provided. This may be "topics" or categories or may be actual learning objective statements.
Assessment Instrument Information: STAR Reading is a K–12 assessment that focuses on measuring student performance with skills in five domains:
• Word Knowledge and Skills
• Comprehension Strategies and Constructing Meaning
• Understanding Author’s Craft
• Analyzing Literary Text
• Analyzing Argument and Evaluating Text
STAR Reading’s items test 475 grade-specific skills (multiple items are available to measure each skill). Closely related skills are organized into skill sets, shown in the following table.

Domain                                               Skill Sets
Word Knowledge and Skills                            Vocabulary Strategies; Vocabulary Knowledge
Comprehension Strategies and Constructing Meaning    Reading Process Skills; Constructing Meaning; Organizational Structure
Analyzing Literary Text                              Literary Elements; Genre Characteristics
Understanding Author’s Craft                         Author’s Choices
Analyzing Argument and Evaluating Text               Analysis; Evaluation

Element: Individual Metrics
Description: The scores provided at the individual (student) level.
Assessment Instrument Information: Scaled score (SS): All results of STAR Reading tests across grade levels are converted to a common scale using an IRT model; these scores range from 0 to 1400. Scaled Scores are useful for comparing student performance over time and for interpreting performance against the criteria and norms associated with that scale.
The following scores, which include comparison points as part of the score, are also provided.

Grade equivalent (GE) score indicates the grade placement of students for whom a particular score is typical. If a student receives a GE of 10.7, this means that the student scored as well on STAR Reading as did the typical student in the seventh month of grade 10. Ranging from 0.0 to 12.9+, GE scores represent how a student’s test performance compares with that of other students nationally; they are norm referenced.

Estimated oral reading fluency (Est. ORF) is reported as an estimated number of words a student can read correctly within a one-minute time span on grade-level-appropriate text. It is an estimate of a student’s ability to read words quickly and accurately in order to comprehend text efficiently. Students with oral reading fluency demonstrate accurate decoding, automatic word recognition, and appropriate use of the rhythmic aspects of language (e.g., intonation, phrasing, pitch, and emphasis).

Instructional reading level (IRL) is a criterion-referenced score that indicates the highest reading level at which a student is at least 80 percent proficient at recognizing words and understanding material with instructional assistance.

Percentile rank (PR), ranging from 1–99, is a norm-referenced score that indicates the percentage of a student’s peers whose scores were equal to or lower than the score of that student. Percentile Ranks show how an individual student’s performance compares to that of his or her same-grade peers on the national level.

Normal curve equivalent (NCE) score, ranging from 1–99, is norm referenced. It is similar to the percentile rank score but is based on an equal-interval scale; NCEs are scaled in such a way that they have a normal distribution, with a mean of 50 and a standard deviation of 21.06 in the normative sample for a given test.

Student growth percentile (SGP) is a measure of growth between a pre-test and a post-test relative to the growth made by other students in the same grade with the same pre-test score. It is a simple and effective way for educators to interpret a student’s growth rate relative to that of his or her academic peers nationwide. SGPs for STAR Reading are calculated using an approach similar to the Colorado Growth Model.

Lexile® Measure: The Lexile scale is a common scale for both text measure (readability or text difficulty) and reader measure (reading achievement scores); in the Lexile Framework, both text difficulty and person reading ability are measured on the same scale. The Lexile Framework expresses a book’s reading difficulty level (and students’ reading ability levels) on a continuous scale ranging from below 0 (BR400L) to 1825L or more.

Zone of Proximal Development (ZPD) is an individualized range of readability levels based on a student’s results from a STAR Reading assessment. Books students choose to read within their ZPD range should be neither too difficult nor too easy and should allow students to experience optimal growth. A Lexile ZPD range is also available, which is the student’s ZPD range converted to the MetaMetrics® Lexile scale of text readability.
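To make the relationships among these norm-referenced metrics concrete, the short Python sketch below converts a percentile rank to a normal curve equivalent using the mean of 50 and standard deviation of 21.06 given above, and computes an illustrative growth percentile for one student among a small set of academic peers. This is a simplified sketch rather than the vendor’s scoring code: the peer data are hypothetical, and the plain empirical percentile stands in for the quantile-regression approach (similar to the Colorado Growth Model) used for the operational SGP.

from statistics import NormalDist

def pr_to_nce(percentile_rank: float) -> float:
    """Convert a percentile rank (1-99) to a normal curve equivalent (NCE).
    NCEs are normally distributed with mean 50 and standard deviation 21.06,
    so the conversion maps the percentile through the normal quantile function."""
    z = NormalDist().inv_cdf(percentile_rank / 100.0)
    return 50.0 + 21.06 * z

def simple_growth_percentile(student_growth: float, peer_growths: list) -> float:
    """Illustrative only: percent of academic peers (same grade, same pre-test
    score) whose scaled-score growth was at or below this student's growth."""
    at_or_below = sum(1 for g in peer_growths if g <= student_growth)
    return 100.0 * at_or_below / len(peer_growths)

print(round(pr_to_nce(75), 1))    # a PR of 75 corresponds to an NCE of about 64.2
print(round(simple_growth_percentile(90, [40, 55, 70, 90, 110, 130]), 1))  # 66.7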
Element: Individual Comparison Points (cut scores)
Description: Information provided regarding how good is good enough performance on the instrument.
Assessment Instrument Information: STAR Reading provides maps of Scale Score Ranges to:
• Grade Level Equivalent Scores (from 0.0 to 12.9+)
• Percentile Ranks (associated with Grade Placements)
• Instructional Reading Level Conversions
These maps provide comparison points for Scaled Scores by grade level. Districts can set performance categories based on their own cut scores for the STAR Scaled Score to color-code individual and group performance by category, such as At Benchmark, On Watch, Intervention, and Urgent Intervention. Once cut scores have been set, STAR reports categorize individual students’ scaled scores according to these color-coded performance categories.

Element: Aggregate Metrics
Description: Scores provided at the group level, and the groups for which scores are reported. The group could be a grade level, school, district, or disaggregated groups (e.g., race/ethnicity, gender, IEP status, FRL status). Specify the group(s) and the score(s) provided.
Assessment Instrument Information: All but the STAR Reading Scaled Score include comparison points as part of the metric definition. When districts set cut scores for individual student scaled scores to establish performance categories, these categories are used to provide aggregate metrics, including the percent and number of students by district benchmark category (available by grade at the district and school levels) across years of available data. These metrics can be calculated using cross-sectional data (same grade year to year) or for the same students over multiple years. [Longitudinal Report]
The following additional aggregate metrics are also provided:
• Median Student Growth Percentile -- the middle student growth percentile within the included group. This metric is reported for different time periods (fall to spring, spring to spring) by grade level within the school, by grade level within the district, and by class. [Growth Report, Student Growth Percentile Report]
• Average scores, by grade level within the school and at the classroom level, of the following individual metrics: Scale Score, Grade Equivalent, Percentile Rank, Norm Curve Equivalent, Instructional Reading Level, and Estimated Oral Reading Fluency. [Growth Report]
• Percent of students in or above the estimated mastery range for reading standards (CCSS or CAS), by school and by class within the school -- STAR Reading provides an estimate of students’ mastery of standards by aligning the standards to the same 1400-point difficulty scale used to report STAR scores. The Estimated Mastery Range identifies a band of scores where the student is just below or above mastery. The percentage of students who score in or above this range indicates overall progress toward standards mastery. [State Standards Report]
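A minimal sketch of how district-set cut scores and the aggregate metrics described above fit together is shown below. The cut scores and student records are hypothetical (districts choose their own cut scores), and the operational color-coded reports are produced by the STAR software; the sketch only illustrates the categorization and the two simplest aggregate calculations.

from statistics import median

# Hypothetical district cut scores for one grade: (category, minimum scaled score).
CUTS = [
    ("At Benchmark", 900),
    ("On Watch", 800),
    ("Intervention", 700),
    ("Urgent Intervention", 0),
]

def category(scaled_score: int) -> str:
    """Assign a district performance category to a STAR Reading scaled score."""
    for name, minimum in CUTS:
        if scaled_score >= minimum:
            return name
    return CUTS[-1][0]

# Hypothetical student records: (scaled score, student growth percentile).
students = [(980, 62), (860, 48), (915, 71), (640, 35), (755, 55)]

# Percent and number of students by district benchmark category.
counts = {}
for score, _ in students:
    name = category(score)
    counts[name] = counts.get(name, 0) + 1
for name, _ in CUTS:
    n = counts.get(name, 0)
    print(f"{name}: {n} students ({100 * n / len(students):.0f}%)")

# Median student growth percentile for the group.
print("Median SGP:", median(sgp for _, sgp in students))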
Element: Aggregate Comparison Points (vendor)
Description: Information provided regarding how good is good enough performance at the group level.
Assessment Instrument Information: Because most of the individual metrics provided for STAR Reading are norm-referenced scores, almost all of the aggregate metrics provided by this vendor also include a comparison point within the metric definition. This includes the following metrics (described above):
• Percent/Number Scoring at District-Determined Performance Levels -- note that districts determine the comparison points used in these metrics when they set their own cut scores for different performance levels.
• Median Growth Percentiles
• Average Grade Equivalent
• Average Percentile Rank
• Average Norm Curve Equivalent
• Average Instructional Reading Level
• Average Estimated Oral Reading Fluency
• Percent of Students in or Above Estimated Mastery Range for Reading Standards

Element: Aggregate Comparison Points (CDE)
Description: Cut scores identified by CDE for requests to reconsider.
Assessment Instrument Information: See the table below for the aggregate metrics for which CDE established comparison points as part of the 2014-15 Request-to-Reconsider process. The CDE-provided comparison points include Fall and Spring Mean Scale Scores and Median Growth Percentiles for each grade level. Note: The CDE comparison points for STAR Reading for the 2016-17 Request-to-Reconsider process are currently being revised.

READING – Scale Scores by Grade Level

         Fall Scale Scores:        Spring Scale Scores:       Scale Score Growth (Fall to Spring):
Grade    50th Percentile           50th Percentile            Median Growth Percentile
         Mean Scale Score          Mean Scale Score           (“Meets” Level)
1        78                        181                        50
2        197                       334                        50
3        344                       436                        50
4        445                       515                        50
5        524                       619                        50
6        631                       757                        50
7        773                       861                        50
8        876                       959                        50
9        967                       1068                       50
10       1075                      1126                       50
11       1132                      1173                       50
12       1178                      1220                       50
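To show how the CDE comparison points in the table above would be used, the sketch below stores the 2014-15 values and checks one grade’s observed results against them (grade 4 as the example). The observed values are hypothetical and the comparison is only illustrative; the actual Request-to-Reconsider determinations are made by CDE.

# CDE 2014-15 reading comparison points by grade, taken from the table above:
# (fall 50th percentile mean scale score, spring 50th percentile mean scale
#  score, median growth percentile at the "Meets" level).
CDE_POINTS = {
    1: (78, 181, 50),     2: (197, 334, 50),    3: (344, 436, 50),
    4: (445, 515, 50),    5: (524, 619, 50),    6: (631, 757, 50),
    7: (773, 861, 50),    8: (876, 959, 50),    9: (967, 1068, 50),
    10: (1075, 1126, 50), 11: (1132, 1173, 50), 12: (1178, 1220, 50),
}

def meets_comparison_points(grade, fall_mean, spring_mean, median_sgp):
    """Compare observed grade-level results to the CDE comparison points."""
    fall_cut, spring_cut, mgp_cut = CDE_POINTS[grade]
    return {
        "fall mean scale score": fall_mean >= fall_cut,
        "spring mean scale score": spring_mean >= spring_cut,
        "median growth percentile": median_sgp >= mgp_cut,
    }

# Hypothetical grade 4 results: fall mean 470, spring mean 508, median SGP 53.
print(meets_comparison_points(4, 470, 508, 53))
# {'fall mean scale score': True, 'spring mean scale score': False,
#  'median growth percentile': True}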
Element: Data Reports
Description: Description of data reports that are provided/available at the individual and aggregate level(s).
Assessment Instrument Information: Scores are displayed on a variety of reports that educators can choose to run at the classroom, grade, school, or district level. In addition, administrators can customize many of the STAR reports to view information about participation and performance across the district and by various demographic subgroups (for example, students receiving free and reduced lunch, English language learners, etc.). Below we describe some key STAR Reading reports, including the levels at which each report is available.

Growth Proficiency Report (student, class, grade, district): The Growth Proficiency Chart plots SGP and proficiency on a quadrant graph so that educators can easily see whether students are challenged and growing every year, regardless of their academic starting point.

Growth Report (student, class, grade, school): Shows educators whether students are reaching their growth expectations. The Growth Report includes median student growth percentiles (SGP) and averages for the following metrics: SS, GE, PR, NCE, IRL, and Est. ORF. STAR Reading Scaled Scores range from 0–1400. STAR Reading Grade Equivalents range from 0.0–12.9+. IRL letter codes include PP (Pre-Primer), P (Primer, grades .1–.9), and PHS (Post-High School, grades 13.0+). The range of Percentile Ranks is 1–99. Normal Curve Equivalents range from 1–99 and appear similar to Percentile Ranks, but they have the advantage of being based on an equal-interval scale.

Longitudinal Reports (grade, school, district): Show educators whether students in each grade are growing from year to year. Educators can compare the same grade year to year or the same students over multiple years. Metrics on this report include Percent of Students by District Benchmark Category and PR.

State Performance Reports (student, class, grade, school, district): Predict student performance on high-stakes tests. Predictions account for the amount of growth that typically occurs between the date of the last STAR test taken and the date of the state test. At the school, grade, and district levels, this report lists the percentage and number of students projected to be at each performance level assessed by the state test when the test is administered. At the class level, the report shows the average scores for the class.

State Standards Reports (student, class, grade, district): Gauge students’ current and projected mastery of the Colorado Academic Standards as well as the Common Core State Standards. Users may choose which of the standards to view on these reports. At the student level, these reports measure an individual student’s performance on the state standards via SS; at the class level, the report shows the percentage of students demonstrating mastery of the standards; and at the district level, the report shows how each grade level within a school, or the district as a whole, is performing.

Samples of additional data reports and a detailed explanation of each report’s purpose(s) can be viewed at: http://doc.renlearn.com/KMNet/R0053249615EE616.pdf.

Element: Alignment
Description: Information provided by the vendor about alignment of this instrument to other instruments, standards, etc.
Assessment Instrument Information: Renaissance Learning provides an alignment report that illustrates how the specific skills assessed by STAR Reading map to the Colorado Academic Standards, including Grade Level Expectations (GLEs) and Evidence Outcomes (EOs). See http://doc.renlearn.com/KMNet/R005599262950854.pdf for the alignment report for grades pre-K–12. Note that STAR Reading is also aligned to Standard 2, Reading for All Purposes, of the Colorado Academic Reading, Writing, and Communicating Standards. STAR Reading skills include alignments under each grade-level expectation of this standard for grades pre-K–12.

Renaissance Learning provides a similar alignment report illustrating how the skills assessed by STAR Reading map to the Common Core State Standards (CCSS). See http://doc.renlearn.com/KMNet/R0054754DF237B33.pdf for this alignment report. During the analysis of the alignment between the skills assessed by STAR Reading and the CCSS, two types of alignment emerged and serve to clarify the way in which product skills meet standards: direct and foundational. The CCSS alignment report includes the following types of alignments:
• Direct alignment – the product skill directly and specifically meets one or more of the skills in the standard, either fully or partially.
• Foundational alignment – the product skill is necessary to meet the skill in the standard, through either a prerequisite or a related relationship.

Element: Technical Quality
Description: Information about the technical quality of the instrument. Reference to technical analysis if available electronically.
Assessment Instrument Information: The STAR Reading Technical Manual (http://doc.renlearn.com/KMNet/R004384310GJD780.pdf) presents details about the assessment’s technical quality. Key information is summarized below.

Validity
Validity refers to the extent to which a tool accurately measures the underlying construct that it is intended to measure. Validity evidence is frequently presented as correlations between an instrument and other instruments that measure the same construct. A meta-analysis shows the average uncorrected correlation between STAR Reading and all other reading tests to be 0.78 (the average correlation would have exceeded 0.85 if adjusted for range restriction and for attenuation due to less-than-perfect reliability).
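For reference, the adjustment mentioned above for attenuation due to less-than-perfect reliability is usually made with the standard psychometric correction shown below; whether the meta-analysis applied exactly this form, or which range-restriction correction it used, is not stated in this summary, so the formula is offered only as an illustration of the kind of adjustment involved.

\[
  r_{\text{corrected}} = \frac{r_{XY}}{\sqrt{r_{XX'}\, r_{YY'}}}
\]

Here r_XY is the observed correlation between the two tests and r_XX' and r_YY' are their reliabilities. For example, with the observed average correlation of 0.78 and hypothetical reliabilities of 0.90 and 0.85, the corrected correlation would be 0.78 / sqrt(0.90 × 0.85) ≈ 0.89, consistent with the statement that the adjusted average would exceed 0.85.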
The National Center on Intensive Intervention (NCII) and Progress Monitoring also evaluated the technical quality of STAR Reading and determined that the vendor provided “Convincing Evidence” of the validity of the performance level score.

Reliability
Reliability refers to the degree of measurement precision of an instrument and the extent to which a test yields consistent results from one administration to another and from one test form to another. The STAR Reading tests provide two ways to evaluate the reliability of scores: reliability coefficients, which indicate the overall precision of a set of test scores, and conditional standard errors of measurement (CSEM), which provide an index of the degree of error in an individual test score. See the Technical Manual for details regarding these reliability estimates. Both types of reliability estimates demonstrated high levels of consistency.

The National Center on Intensive Intervention (NCII) and Progress Monitoring also evaluated the technical quality of STAR Reading and determined that the vendor provided “Convincing Evidence” of the reliability of the performance level score.

Norms
National norms for STAR Reading were updated during the 2014-15 school year to represent more current estimates of the population of US school children. Introduced in June of 2011, STAR Reading was the first standards-based version of the test, assessing a wide variety of skills and instructional standards as well as reading comprehension. A nationally representative sample of students who had tested in both the fall and spring (of one of two school years) was drawn from routine administrations. A post-stratification procedure was used to approximate the national proportions on key characteristics. Post-stratification weights for the regional, district socio-economic status, and school size strata were computed and applied to each student’s Rasch ability estimate. Norms were developed based on the weighted Rasch ability estimates and then transformed to STAR Reading scaled scores.
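The norming procedure described above (post-stratification weights computed from the sampling strata and applied to each student’s Rasch ability estimate, with norms then derived from the weighted distribution) can be sketched roughly as follows. The strata, national proportions, and sample values are invented for illustration, and a simple weighted empirical percentile stands in for the vendor’s full norming computations, which use region, district socio-economic status, and school size strata and then transform the results to the scaled-score metric.

# Rough sketch of post-stratification weighting for norm development.
# Each sampled student carries a stratum label and a Rasch ability estimate.
sample = [
    ("stratum_a", -0.4), ("stratum_a", 0.1),
    ("stratum_b", 0.6),  ("stratum_b", 0.9), ("stratum_b", 1.3),
]

# Hypothetical national proportions for each stratum (population targets).
population_share = {"stratum_a": 0.5, "stratum_b": 0.5}

# Post-stratification weight = population share / sample share, so that over-
# or under-sampled strata are re-weighted toward the national proportions.
sample_share = {
    s: sum(1 for st, _ in sample if st == s) / len(sample) for s in population_share
}
weights = {s: population_share[s] / sample_share[s] for s in population_share}

# Weighted distribution of Rasch ability estimates; norms (percentiles) are
# read from this weighted distribution before conversion to scaled scores.
weighted = sorted((theta, weights[st]) for st, theta in sample)

def weighted_percentile(theta: float) -> float:
    total = sum(w for _, w in weighted)
    at_or_below = sum(w for t, w in weighted if t <= theta)
    return 100.0 * at_or_below / total

print(round(weighted_percentile(0.6), 1))  # weighted percentile of a Rasch estimate of 0.6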