
Academic Assessment Committee: Basic Terms

Basic Terms

A program director (PD) setting up assessment for a new program should consult the following list of terms. For additional resources on setting up assessment, a PD will want to consult the Curriculum Lifecycle and AAC Knowledge Pack.

Assessment Plan A document that outlines the program (student learning) outcomes; the direct and indirect assessment methods used to demonstrate attainment of each outcome, with a brief explanation of each method and an indication of which outcome(s) it addresses; the intervals at which evidence is collected and reviewed; and the individual(s) responsible for collecting and reviewing that evidence.
Program Description Please see separate section under Starting a Program in this LibGuide.
Program Outcomes Please see separate section under Starting a Program in this LibGuide.
Direct Measures (DM) Direct measures assess program learning directly: they require students to display their knowledge, behaviors, and thought processes in ways that indicate a proficiency level in the program's knowledge and/or skills. Also called "direct evidence," this is evidence of the learning that enables students to persist, graduate, transfer, and/or obtain work. Examples of direct measures include scores or rates from capstone experiences, presentations, performances, portfolios, papers or other written work, licensure exams, and field experiences (Suskie, 2018).
Indirect Measures (IM) Indirect measures are those from which program learning is inferred: they ask subjects to reflect upon their knowledge, behaviors, or thought processes, and responses may indicate an opinion or level of satisfaction about students' knowledge, understanding, or performance. Examples of indirect measures are retention rates, graduation rates, scores on tests for further study (e.g., the GRE), placement rates, alumni perceptions, students' ratings of the program, and students' awards earned (Suskie, 2018).
Targets A target states a standard or minimum threshold of demonstrated performance for a measure (DM or IM). A good target is specific, ambitious yet achievable, and easily interpreted by a reader. An example: at least 80% of artifacts (assignment submissions) will be scored at 3 or higher on a 4-level rubric for a DM on portfolios.
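To illustrate the arithmetic behind a target like the one above, here is a minimal sketch in Python; the rubric scores are invented for illustration and do not come from any actual program:

```python
# Hypothetical rubric scores (1-4) for ten portfolio artifacts; illustration only.
scores = [4, 3, 2, 4, 3, 3, 4, 2, 3, 4]

# Count artifacts scored at 3 or higher on the 4-level rubric.
meeting = sum(1 for s in scores if s >= 3)
proportion = meeting / len(scores)

target = 0.80  # target: at least 80% of artifacts at 3 or higher
print(f"{meeting} of {len(scores)} artifacts ({proportion:.0%}) scored 3 or higher")
print("Target met" if proportion >= target else "Target not met")
```

In this invented sample, 8 of 10 artifacts (80%) meet the threshold, so the target is just met.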
Assessment Results or Findings Submitted at the end of the academic year, these compare actual achievement levels on the measures against expected (target) levels. A program director writes a substantial interpretation and analysis of results that readers can easily follow, including sample sizes and the actual numbers of artifacts scored as meeting targets. A new program may not have measurable data on some measures (typically DMs, e.g., a capstone experience) for up to several years.
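A findings report boils down to a comparison of actual versus target levels for each measure, with sample sizes shown. A minimal sketch of that tabulation, using invented measure names and numbers purely for illustration:

```python
# Hypothetical findings; measure names, targets, and counts are invented.
findings = [
    # (measure, target %, artifacts meeting target, sample size)
    ("Portfolio rubric (DM)", 80, 17, 20),
    ("Graduation rate (IM)", 70, 14, 18),
]

for measure, target, met, n in findings:
    actual = 100 * met / n  # actual achievement level as a percentage
    status = "met" if actual >= target else "not met"
    print(f"{measure}: {met}/{n} = {actual:.0f}% vs. target {target}% -> {status}")
```

Reporting the raw counts (17/20, 14/18) alongside the percentages lets readers judge the results against the sample sizes, as the definition above requires.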
Action Plans Action plans lay out a sequence of activities designed to help the program better accomplish its intended outcomes/objectives. A program director writes a clear description of what will change, how and when the changes will be made, and who is responsible for implementing them. Typical changes relate to proposed improvements in instruction or course content, alterations of measures and targets, curricular changes, and staffing changes.

A final note: assessment information (i.e., outcomes, measures, targets, results, and action plans) is typically entered into, stored in, and printed from assessment software.

Reference

Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). San Francisco, CA: Jossey-Bass. ISBN 1119426936.