1 Overview

The Dynamic Learning Maps® (DLM®) Alternate Assessment System assesses student achievement in English language arts (ELA), mathematics, and science for students with the most significant cognitive disabilities in Grades 3–8 and high school. The purpose of the system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high and actionable academic expectations and providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations of what students know and can do, as well as inferences about student achievement in the given subject. Results provide information that can guide instructional decisions and be used for state accountability programs.

The DLM system is developed and administered by Accessible Teaching, Learning, and Assessment Systems (ATLAS), a research center within the University of Kansas’s Achievement and Assessment Institute. The DLM system is based on the core belief that all students should have access to challenging, grade-level or grade-band content. DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know and can do in ways that traditional paper-and-pencil assessments cannot.

ATLAS created a complete technical manual after the first operational administration in 2015–2016. After each annual administration, a technical manual update is published to supplement the full technical manual by providing information specific to that administration year. This technical manual update provides information for the 2023–2024 administration of science assessments. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Due to differences in the development timeline for science, separate technical manuals are prepared for ELA and mathematics (see Dynamic Learning Maps Consortium, 2024a, 2024b).

1.1 Current DLM Collaborators for Development and Implementation

During the 2023–2024 academic year, DLM assessments were available to students in the District of Columbia and 21 states: Alaska, Arkansas, Colorado, Delaware, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Pennsylvania, Rhode Island, Tennessee, Utah, West Virginia, and Wisconsin. Colorado and Tennessee administered assessments in ELA and mathematics only, and the District of Columbia administered assessments in science only. The DLM Governance Board is composed of representatives from each member state or district education agency; these representatives have expertise in special education and state assessment administration. The Governance Board advises on the administration, maintenance, and enhancement of the DLM system.

In addition to ATLAS and the DLM Governance Board, other key partners include the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and Assessment and Technology Solutions (previously known as Agile Technology Solutions) at the University of Kansas.

The DLM system is also supported by a Technical Advisory Committee, whose members possess decades of expertise in large-scale assessments, accessibility for alternate assessments, diagnostic classification modeling, and assessment validation. The DLM Technical Advisory Committee provides advice and guidance on the technical adequacy of the DLM assessments.

1.2 Theory of Action and Interpretive Argument

The theory of action that guided the design of the DLM system for science was similar to the theory of action for the ELA and mathematics assessments, which was formulated in 2011, revised in December 2013, and revised again in 2019. It expresses the belief that high expectations for students with the most significant cognitive disabilities, combined with appropriate educational supports and diagnostic tools for educators, result in improved academic experiences and outcomes for students and educators.

The DLM Theory of Action expresses a commitment to provide students with the most significant cognitive disabilities access to an assessment system that is capable of validly and reliably evaluating their achievement. Ultimately, when adhering to the DLM Theory of Action, students will make progress toward higher expectations, educators will make instructional decisions based on data, educators will hold higher expectations of students, and state and district education agencies will use results for monitoring and resource allocation.

Assessment validation for the DLM assessments uses a three-tiered approach, which specifies (1) the DLM Theory of Action, which defines the statements in the validity argument that must be evident to achieve the goals of the system; (2) an interpretive argument, which defines the propositions that must be evaluated to support each statement in the DLM Theory of Action; and (3) validity studies, which evaluate each proposition in the interpretive argument.

After identifying these overall guiding principles and anticipated outcomes, specific elements of the DLM Theory of Action were articulated to inform assessment design and to highlight the associated validity argument. The DLM Theory of Action includes the assessment’s intended effects (long-term outcomes); statements related to design, delivery, and scoring; and action mechanisms (i.e., connections between the statements). In Figure 1.1, the chain of reasoning in the DLM Theory of Action is shown broadly by the order of the four sections from left to right. Dashed lines represent connections that are present when the optional instructionally embedded assessments are used. Design statements serve as inputs to delivery, which informs scoring and reporting, which collectively lead to the long-term outcomes for various stakeholders. The chain of reasoning is made explicit by the numbered arrows between the statements.

Figure 1.1: Dynamic Learning Maps Theory of Action

The figure shows each statement in the DLM Theory of Action and how it connects to other statements through the chain of reasoning.

1.3 Technical Manual Overview

This manual provides evidence collected during the 2023–2024 administration of DLM science assessments.

Chapter 1 provides a brief overview of the DLM system, including collaborators for development and implementation, the DLM Theory of Action and interpretive argument, and a summary of contents of the remaining chapters. While subsequent chapters describe the individual components of the assessment system separately, validity evidence is presented throughout this manual.

Chapter 2 was not updated for 2023–2024. For a full description of the development of the Essential Elements (EEs), including their intended alignment with the Framework (National Research Council, 2012) and the Next Generation Science Standards (NGSS; NGSS Lead States, 2013), see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Chapter 3 was not updated for 2023–2024, as development activities have focused on content for a new assessment blueprint planned for operational administration in spring 2027. For a full description of the design and development of the current science assessment, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017). A description of the operational item pool that will be used until the new blueprint is administered operationally, including evidence of item quality, operational item data, and differential item functioning, can be found in the 2022–2023 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2023).

Chapter 4 describes assessment delivery, including updated procedures and data collected in 2023–2024. The chapter provides information on adaptive routing and accessibility support selections. The chapter also provides evidence from test administrators, including user experience with the DLM system and student opportunity to learn the tested content during instruction.

Chapter 5 provides a brief summary of the psychometric model used to score DLM assessments. The chapter includes a summary of the 2023–2024 calibrated parameters. For a complete description of the modeling method, see the 2021–2022 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2022).
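As context for that summary, the sketch below illustrates how a mastery probability can be computed in a simple two-class latent class model, the general family to which diagnostic classification approaches belong. This is a minimal illustration under assumed values, not the operational DLM scoring model; the function name, item probabilities, prior, and response vector are all hypothetical.

    import numpy as np

    def posterior_mastery(responses, p_master, p_nonmaster, prior=0.5):
        """Posterior probability of the mastery class, given dichotomous
        item responses and class-conditional correct-response
        probabilities, assuming independent items within each class."""
        responses = np.asarray(responses)
        lik_master = np.prod(p_master**responses * (1 - p_master)**(1 - responses))
        lik_nonmaster = np.prod(p_nonmaster**responses * (1 - p_nonmaster)**(1 - responses))
        # Bayes' theorem over the two latent classes
        return prior * lik_master / (prior * lik_master + (1 - prior) * lik_nonmaster)

    # A student answers four of five items correctly; masters answer each
    # item correctly with probability .8, non-masters with probability .2.
    print(posterior_mastery([1, 1, 0, 1, 1],
                            np.array([0.8] * 5),
                            np.array([0.2] * 5)))

Under these assumed parameters, the four-of-five response pattern yields a posterior mastery probability of about .98.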

Chapter 6 was not updated for 2023–2024; no changes were made to the cut points used for determining performance levels on DLM assessments. See the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) for a description of the methods, preparations, procedures, and results of the original standard-setting and evaluation of the impact data. For a description of the changes made to the cut points used in scoring DLM assessments for grade 3 and grade 7 during the 2018–2019 administration, see the 2018–2019 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2019).

Chapter 7 reports the 2023–2024 operational results, including student participation data. The chapter details the percentage of students achieving at each performance level; subgroup performance by gender, race, ethnicity, and English-learner status; the percentage of students who showed mastery at each linkage level; and a study related to educators’ ratings of student mastery in science. Finally, the chapter describes changes to score reports and data files during the 2023–2024 administration.

Chapter 8 summarizes reliability evidence for the 2023–2024 administration, including a brief overview of the methods used to evaluate reliability by linkage level, Essential Element, domain and topic, and subject (overall performance). For a complete description of the reliability background and methods, see the 2021–2022 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2022).
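As an illustration of the general logic of such methods, the sketch below estimates classification consistency by simulated retest: students with known mastery status are generated from assumed class-conditional item probabilities, their simulated responses are rescored, and the proportion of matching classifications is tabulated. The two-class model, item probabilities, base rate, and .80 posterior cut used here are assumptions for illustration, not the operational DLM reliability procedure.

    import numpy as np

    rng = np.random.default_rng(2024)

    def posterior_mastery(responses, p_master, p_nonmaster, prior=0.5):
        # Two-class latent class posterior, as in the earlier sketch
        lik_m = np.prod(p_master**responses * (1 - p_master)**(1 - responses))
        lik_n = np.prod(p_nonmaster**responses * (1 - p_nonmaster)**(1 - responses))
        return prior * lik_m / (prior * lik_m + (1 - prior) * lik_n)

    def classification_consistency(p_master, p_nonmaster,
                                   n_students=5000, base_rate=0.5, cut=0.8):
        """Proportion of simulated students whose scored classification
        matches the mastery status used to generate their responses."""
        correct = 0
        for _ in range(n_students):
            is_master = rng.random() < base_rate
            probs = p_master if is_master else p_nonmaster
            responses = (rng.random(probs.size) < probs).astype(int)
            scored_master = posterior_mastery(responses, p_master, p_nonmaster) >= cut
            correct += scored_master == is_master
        return correct / n_students

    # Five well-discriminating items, as might be seen at one linkage level
    print(classification_consistency(np.array([0.85] * 5), np.array([0.25] * 5)))

Higher agreement under such a simulation indicates that the assumed items separate the two classes well; the operational evidence in Chapter 8 is reported at the linkage level, Essential Element, domain and topic, and subject levels.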

Chapter 9 describes updates to the professional development available in 2023–2024, including participation rates and evaluation results.

Chapter 10 synthesizes the validity evidence provided in the previous chapters. It evaluates the extent to which evidence supports the claims in the DLM Theory of Action. The chapter ends with a description of future research and ongoing initiatives for continuous improvement.