A
Active follow-up - reminding sponsors or stakeholders of their planned uses for the study results to help ensure that evidence is not misinterpreted and is not applied to questions other than those that were the central focus of the assessment or evaluation.
Analysis of variance (ANOVA) - a procedure for determining whether significant differences exist between two or more sample means.
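As a minimal sketch of how such a comparison is commonly computed (the exam scores for three course sections below are hypothetical), SciPy's one-way ANOVA returns an F statistic and a p-value:

```python
from scipy import stats

# Hypothetical exam scores for three sections of the same course.
section_a = [78, 85, 90, 72, 88]
section_b = [80, 79, 85, 91, 76]
section_c = [92, 95, 89, 84, 90]

# One-way ANOVA: tests whether the three sample means differ significantly.
f_stat, p_value = stats.f_oneway(section_a, section_b, section_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```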
Anonymity - research participants cannot be identified on the basis of their responses.
Assessment - a process or tool integrated into the instructional activity, innovation or program designed to improve the quality of instruction and the resulting learning outcomes (see also instructional assessment).
Assessment testing – usability testing used midway in product development or as an overall usability test for technology evaluation. Evaluates real-time trials of the technology to determine satisfaction, effectiveness, and overall usability.
Assessment plan - see evaluation plan
Asynchronous learning - interaction between an instructor and students that occurs during unscheduled time periods and is usually mediated through an electronic discussion board that allows participants to post and respond to ideas, comments, and/or opinions at different times. (see also synchronous learning)
Attrition - research participants who withdraw or are removed from a study prior to its completion.
Audience - consumers of the evaluation. Includes those who will use the evaluation and all stakeholders.
B
Baseline - the condition or situation prior to an intervention
Benchmark - to collect data on the performance of similar innovations or programs to use for comparison.
Bias - 1) a systematic distortion of research results due to the lack of objectivity, fairness, or impartiality on the part of the evaluator or assessor; 2) disparities in research or test results due to using improper assessment tools or instruments across groups.
Blackboard - an electronic course management tool that enables faculty and students to communicate and collaborate online through real-time chat forums, asynchronous discussion boards, Email, and online file exchanges. The software also features an online grade book and survey/quizzing tool.
Blended learning - learning that combines face-to-face instruction with on-line instructional resources
Bloom’s taxonomy – a classification scheme of intellectual behavior developed by Benjamin Bloom who identified six levels of cognitive learning, from the simple recall of facts (Knowledge), as the lowest level, through the increasingly more complex levels of Comprehension, Application, Analysis, Synthesis, and Evaluation.
Bottom-up planning – a planning approach that begins with the data collected and systematically combines data into broader and common categories and themes. Also called inductive reasoning.
C
Case study research – a research approach that focuses on a detailed account of one or more individual cases (i.e., specific students or a specific class)
Central questions - research questions that state what specific aspects of the instructional activity, innovation or program will be examined. Questions are determined by the intended uses of the assessment or evaluation. See also research process.
Ceiling effect - the effect of an intervention is underestimated because the dependent measure (e.g., scores on an exam) cannot distinguish between participants who have somewhat high and very high levels of the construct.
Chi-square - a statistical procedure used with data that fall into mutually exclusive categories (e.g., gender) that tests whether one variable is independent of another.
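A minimal sketch of a chi-square test of independence in SciPy, using a hypothetical 2x2 contingency table of pass/fail counts by gender:

```python
from scipy import stats

# Hypothetical counts arranged as a contingency table.
observed = [[30, 10],   # e.g., female: pass, fail
            [25, 15]]   # e.g., male: pass, fail

# Tests whether the row variable is independent of the column variable.
chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, df = {dof}")
```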
Classroom Performance System (CPS) - a software/hardware system that allows instructors to ask students multiple-choice or numeric questions and receive immediate, in-class feedback using a portable receiver, student remote control response pads, computer projection equipment or response pads with LCD screens and response analysis software. Responses are anonymous unless the instructor knows the specific response pad number for each student.
Cluster sample - when the population is divided into groups (clusters) with a subset of the groups chosen as a sample. After groups are chosen, all or a sample of individuals in each group are chosen for inclusion in the study. Also called a multistage or hierarchical sample.
Coding - the process of translating raw data into meaningful categories for the purpose of data analysis. Coding qualitative data may also involve identifying recurring themes and ideas.
Comparative testing – usability testing that compares two or more instructional technology products or designs and distinguishes the strengths and weaknesses of each.
Conceptual analysis – an approach to content analysis in which implicit or explicit concepts are chosen for examination and the analysis involves quantifying and tallying the concepts' presence within content.
Conclusion - the interpretation of study findings based upon the information gathered. Conclusions may be based on judgments made by comparing the findings and interpretations regarding the instructional measures against one or more standards.
Confidence interval - an estimated range of values calculated from a sample that is likely to include an unknown population value.
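A minimal sketch of computing a 95% confidence interval for a mean from a hypothetical sample of exam scores, using the t distribution:

```python
import numpy as np
from scipy import stats

scores = np.array([72, 85, 90, 78, 88, 95, 81, 79])  # hypothetical sample

mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(len(scores))      # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(scores) - 1)      # two-tailed 95% critical value

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI: ({lower:.1f}, {upper:.1f})")
```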
Confidentiality - the identities of research participants are known to the researcher but are not revealed.
Confounding variable - a variable that may affect the behavior or outcome you want to examine but is not of interest for the present study.
Content analysis - the process of organizing written, audio, or visual information into categories and themes related to the central questions of the study. This approach is especially useful in product analysis and document analysis.
Context sensitivity - being aware when doing research that the persons and organizations under study have cultural preferences that dictate acceptable ways of asking questions and collecting information. Also called 'cultural sensitivity.'
Continuous variable - a variable that can take on any value within the limits of its range. For example, age and temperature are continuous variables.
Control group - a group that is not subjected to an instructional activity, innovation or program so that it may be compared with the experimental group who receive the instructional intervention. Also called a comparison group.
Controlled experiment - a type of experiment in which students are randomly assigned to either an experimental group (the group that experiences the instructional stimulus) or a control group (the group that does not experience the instructional stimulus) and environmental factors are controlled in some manner.
Convenience sample - a sample of the population chosen based on factors such as cost, time, participant accessibility, or other logistical concerns. At least some consideration is typically given to how representative the sample is of the population. See also random sample
Correlation - a statistical relation between two or more variables such that systematic changes in the value of one variable are accompanied by systematic changes in the other. The relation is represented by a statistic that can vary from -1 (perfect negative correlation) through 0 (no correlation) to +1 (perfect positive correlation).
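A minimal sketch of computing a Pearson correlation in SciPy, with hypothetical data on hours studied and exam scores:

```python
from scipy import stats

hours = [2, 4, 5, 1, 7, 3, 6, 8]
scores = [65, 72, 78, 60, 88, 70, 85, 92]

# Pearson correlation: varies from -1 through 0 to +1.
r, p_value = stats.pearsonr(hours, scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```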
Correlational research – a procedure in which subjects’ scores on two variables are simply measured, without manipulation of any variables, to determine whether there is a relationship
Course documents - see instructional documents.
Cross-tabulation - a table that illustrates relationships between responses to two different survey questions by using response choices to one variable as column labels and response choices to a second variable as row labels.
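A minimal sketch of building such a table with the pandas library; the survey questions ("class_standing" and "satisfaction") and responses are hypothetical:

```python
import pandas as pd

# Hypothetical responses to two survey questions.
responses = pd.DataFrame({
    "class_standing": ["Freshman", "Senior", "Senior", "Freshman", "Junior", "Senior"],
    "satisfaction":   ["High", "Low", "High", "High", "Low", "High"],
})

# One question's response choices become row labels, the other's become columns.
table = pd.crosstab(responses["class_standing"], responses["satisfaction"])
print(table)
```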
D
Data - information gathered for the purpose of research, assessment, or evaluation.
Data analysis - systematically identifying patterns in the information gathered and deciding how to organize, classify, interrelate, compare, and display it. These decisions are guided by the central questions, the types of data available, and by input from stakeholders.
Data quality - the appropriateness and integrity of information collected and used in an assessment or evaluation.
Data quantity - the amount of information gathered for an assessment or evaluation
Data sources - documents, people and observations that provide information for the assessment or evaluation.
Deductive reasoning – a logic model in which assumptions or hypotheses are made on the basis of general principles.
Dependent variable - an observed variable in an experiment or study whose changes are determined by the presence or degree of one or more independent variables.
Dissemination - process of communicating the procedures and findings from an assessment or evaluation to relevant audiences in a timely, impartial, and consistent fashion
Document analysis - the systematic examination of instructional documents such as syllabi, assignments, lecture notes and course evaluation results. The focus of the analysis is the critical examination of the documents rather than simple description. See also content analysis.
E
Educational research - a rigorous, systematic investigation of any aspect of education including student learning, teaching methods, teacher training, and classroom dynamics.
Educational Technology - See instructional technology.
Effectiveness - the degree to which an instructional activity, innovation or program yields the desired instructional outcome. See also expected effects.
Effect size - a measure of the strength of the relationship between two variables
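One common effect size statistic (not named in the entry above) is Cohen's d, the standardized difference between two group means. A minimal sketch with hypothetical scores:

```python
import numpy as np

treatment = np.array([85, 90, 88, 92, 87])  # hypothetical treatment-group scores
control   = np.array([78, 82, 80, 84, 79])  # hypothetical control-group scores

# Pooled standard deviation, then standardized mean difference.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```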
EGradebook - Created by CTL specifically for UT Austin faculty use, eGradebook is a Web-accessible tracking system that allows instructors and designees to electronically assign, post, and upload grades to the Registrar's office. Using UT EIDs and passwords, the system assures confidentiality in accordance with UT policy.
Evaluation - See instructional evaluation
Evaluation methods - See research methods
Evaluation plan - detailed description of how the evaluation will be implemented that includes the resources available for implementing the plan, what data will be gathered, the research methods to be used to gather the data, a description of the roles and responsibilities of sponsors and evaluators, and a timeline for accomplishing tasks.
Evaluator responsibilities - typically include management of the overall project, deciding what data is necessary, deciding which research methods to use in gathering data, gathering data in a manner that complies with standard research ethics and human subjects protection, data analysis, and report writing; the evaluator may also develop an appropriate timeline and keep sponsors apprised of the project's progress and any changes to the evaluation plan.
Expected effects - refers to what the instructional activity, innovation or program is supposed to accomplish to be considered successful. See also effectiveness.
Exam blueprint - a chart listing each question in an exam and the learning objective, difficulty level, and content topic for each.
Experiment - refers to a variety of research designs that use before-after and/or group comparisons to measure the effect of an instructional activity, innovation or program. See also controlled experiments, field experiments, and single-group experiments.
Experimental group - a group that receives a treatment, stimulus or intervention in an experiment. See also control group.
Explorative testing – usability testing performed early in product development to assess the effectiveness and usability of a preliminary design or prototype, as well as users’ thought processes and conceptual understanding.
F
Factor analysis - a statistical technique that uses correlations between variables to determine the underlying dimensions (factors) represented by the variables.
Feedback devices - include a variety of formative assessment techniques based upon a learner-centered, context-specific approach to instruction, focusing primarily on qualitative responses from students. Also referred to as Classroom Assessment Techniques (CATs), examples include minute papers, one-sentence summaries, journals, student self-assessments, and narrative reactions to assignments, activities, and exams.
Field experiment - an experimental research design where students are assigned to experimental and control groups in a non-random fashion and instruction occurs in a non-laboratory setting. See also experiment, controlled experiment, and single-group experiment.
Floor effect - the effect of an intervention is underestimated because the dependent measure artificially restricts how low scores can be.
Focused coding - the second stage of classifying and assigning meaning to pieces of information for data analysis. Coding categories are eliminated, combined, or subdivided, and the researcher identifies repeating ideas and larger underlying themes that connect codes.
Focus group - a small number (8-12) of relatively similar individuals who provide information during a directed and moderated interactive group discussion. Participants are generally chosen based on their ability to provide specialized knowledge or insight into the issue under study.
Formative evaluation - study conducted during the operation of an instructional program to provide information useful in improving implementation with a focus on instruction.
G
Guided interview - a one-on-one directed conversation with an individual that uses a pre-determined, consistent set of questions but allows for follow-up questions and variation in question wording and order.
H
Human subject - a living individual about whom an investigator conducting research obtains information through intervention or interaction with the individual, or obtains identifiable private information, or who is the focus of data collection for an assessment, evaluation or research study.
Hypothesis – a predictive statement about what one would expect to find or occur if a theory is correct.
I
Impact - the consequence or effect of an instructional activity, innovation or program.
Independent variable - a manipulated variable in an experiment or study whose presence or degree determines the change in the dependent variable.
Indicators - translate general concepts regarding the instruction, its context, and its expected effects into specific measures or variables that can be interpreted. Measurable indicators provide a basis for collecting evidence that is valid and reliable for the intended uses.
Inductive reasoning – a logic model in which general principles are developed from the information gathered.
Informal interview - a one-on-one directed conversation with an individual using a series of improvised questions adapted to the interviewee's personality and priorities and designed to elicit extended responses.
Informed consent - when the researcher provides information to participants as to the general purpose of the study, how their responses will be used, and any possible consequences of participating prior to their involvement in the study. Participants typically sign a form stating that they have been provided with this information and agree to participate in the study.
Initial coding - the first stage in classifying and assigning meaning to pieces of information for data analysis. Numerous codes are generated while reading through responses without concern for the variety of categories.
Institutional Review Board (IRB) - The IRB reviews UT Austin human subject research projects according to three principles: first, minimize the risk to human subjects (beneficence); second, ensure all subjects consent and are fully informed about the research and any risks (autonomy); third, promote equity in human subjects research (justice).
Instruction - any activity or program that supports the interaction between students, faculty and content with the aim of learning.
Instructional activity - the specific steps, strategies and/or actions used in instruction.
Instructional assessment - the systematic examination of a particular aspect of instruction (e.g., content delivery, method, testing approach, technological innovation) to determine its effect and/or how that aspect of instruction can be improved. See also Teaching assessment.
Instructional best practices - general principles, guidelines, and suggestions for good and effective teaching based upon the systematic study of instruction and learning.
Instructional context - refers to the instructional setting and environment (e.g., student demographics, social milieu, fiscal conditions, and organizational relationships) within which the instruction occurs.
Instructional design - the process of analyzing students' needs and learning goals, then designing and developing instructional materials to address them.
Instructional documents - any printed or electronic materials used in instruction including syllabi, assignments, lecture notes and course evaluation results. Also called course documents.
Instructional evaluation - a holistic examination of an instructional program including the program's environment, client needs, procedures, and instructional outcomes. See also Program evaluation.
Instructional innovation - the transformation of curriculum through the integration of sound pedagogy with new technologies to improve learning.
Instructional objectives - a detailed description that states how an instructor will use an instructional activity, innovation or program to reach the desired learning objective(s).
Instructional program - a set of policies, procedures, materials and people organized around specific instructional objectives.
Instructional technology - the process of using technology (e.g., multimedia, computers, audiovisual aids) as a tool to improve learning. The application of technology to instruction is optimized when instructors have a basic understanding of various technologies and instructional best practices. Also referred to as "educational technology."
Instructional technology assessment – the systematic examination of how technology impacts teaching and learning.
Instrument - a tool or device (e.g., survey, interview protocol) used for the purpose of assessment or evaluation. See also measure.
Instrumentation effect - a possible limitation within controlled (i.e., pre-test/post-test) experiments. Changes in the test or how the test was administered from pre-test to post-test could affect the results.
Intended uses - ways in which the information generated from an assessment or evaluation will be applied.
Iterative testing - usability testing that is repeated multiple times during different attempts and phases of the product development process.
Internal consistency - a method of establishing the reliability of a questionnaire with a single administration by examining how strongly its questions are related to one another.
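One widely used internal-consistency statistic (not named in the entry above) is Cronbach's alpha. A minimal sketch of its calculation, using hypothetical survey responses (rows are respondents, columns are questions):

```python
import numpy as np

# Hypothetical responses on a 1-5 scale: 5 respondents x 4 questions.
items = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = items.shape[1]                         # number of questions
item_vars = items.var(axis=0, ddof=1)      # variance of each question
total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```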
Interobserver reliability - similar to interrater reliability. The level of agreement between two or more observers viewing the same activity or setting.
Interpretation - the process of determining what the findings mean and making sense of the evidence gathered.
Interrater reliability - the level of agreement (correlation of scores) between two or more raters who rate the same question, content, survey, etc. The formula for computing interrater reliability is: (number of agreements / number of opportunities to agree) x 100.
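A worked example of the formula above, using hypothetical grades from two raters scoring the same ten essays:

```python
# Hypothetical ratings from two raters on the same ten essays.
rater_1 = ["A", "B", "B", "A", "C", "A", "B", "C", "A", "B"]
rater_2 = ["A", "B", "A", "A", "C", "A", "B", "C", "B", "B"]

agreements = sum(1 for r1, r2 in zip(rater_1, rater_2) if r1 == r2)
opportunities = len(rater_1)

# (number of agreements / number of opportunities to agree) x 100
percent_agreement = agreements / opportunities * 100
print(f"Interrater reliability = {percent_agreement:.0f}%")  # 80%
```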
Interventions - a product, practice, treatment or variable that can create change, commonly tested by an experiment.
Interview - a one-on-one directed conversation with an individual using a series of questions designed to elicit extended responses.
J
Judgments - statements concerning the merit, worth, or significance of the instructional activity, innovation or program that are formed by comparing the findings and interpretations regarding the instructional measures against one or more standards. Judgments are part of the conclusions step of the research process.
K
L
Learning objective - a detailed description that states the expected change in student/participant learning, how the change will be demonstrated, and the expected level of the change.
Learning outcomes - refers to the knowledge, skill or behavior that is gained by a learner after instruction is completed and may include the acquisition, retention, application, transfer, or adaptability of knowledge and skills.
Linear regression - statistical technique that defines a line that best fits a set of data points and predicts the value of an outcome variable from the values of one or more continuous variables.
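A minimal sketch of a simple one-predictor linear regression in SciPy, predicting exam score from hours studied (hypothetical data):

```python
from scipy import stats

hours  = [2, 4, 5, 1, 7, 3, 6, 8]
scores = [65, 72, 78, 60, 88, 70, 85, 92]

# Fits the line that best predicts scores from hours studied.
result = stats.linregress(hours, scores)
print(f"score = {result.slope:.1f} * hours + {result.intercept:.1f}"
      f"  (r^2 = {result.rvalue ** 2:.2f})")
```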
M
Maturation - any mental, physical, or emotional change that occurs throughout development (or maturity). This change could affect participants’ performance on the dependent variable of interest.
Measure - 1) a definition of quality or capacity used to assess or evaluate; 2) an instrument used to collect information; 3) the act of collecting information.
Measures - instruments, devices, or methods that provide data on the quantity or quality of the independent or dependent variables.
Meta-analysis – the process of synthesizing and describing results from a large number of similar studies.
Method - See research methods.
Mixed method research - research using both qualitative and quantitative data gathering techniques.
Mixed research - combines or mixes quantitative and qualitative research techniques in a single study. Two sub-types of mixed research are mixed method research (using qualitative and quantitative approaches for different phases of the study) and mixed model research (using quantitative and qualitative approaches within or across phases of the study).
Multiple measures - See triangulation.
N
Narrative description - a written description of an instructional product, instructional document, or classroom observation. While they may follow a common format or style within a particular project, the focus is on creating an accurate, written detail of the phenomena under study.
Narrative descriptions for usability testing - test participants in a usability study are asked to describe their experience with a certain task or with the overall product in writing or speaking.
Non-probability sample - a sample where there is no way to estimate the probability that each member of the population will be included in the sample and there is no guarantee that each member of the population will have the same chance of being included. See also sample.
Non-response bias - bias introduced when a low response rate means the results describe only those who responded and ignore those who did not.
Normal distribution - when a group of scores is symmetric with more scores concentrated in the middle than at the ends. Also called a bell curve.
O
Observation - refers to the systematic surveillance of classroom instruction with the goal of identifying instructional needs/challenges, describing the instructional activity, innovation or program, or evaluating a change in instructional practice. Also referred to as "classroom observation."
Ongoing Course Assessment (OCA) - web-based survey tool that allows UT-Austin instructors to create instructional assessment instruments to collect anonymous feedback from their students at any time during the semester within a secure environment.
Operational definition - a specific statement about how an event or behavior will be measured to represent the concept under study. See also indicator and research methods.
Outcome - the effect or change resulting from an instructional activity, innovation or program. See also performance-based outcome.
Outcome measure - an instrument, device, or method that provides data on the quantity or quality of the result or product of the experiment; an outcome is the dependent variable of the experiment.
P
Participants - 1) individuals from whom information is being gathered in an assessment or evaluation; 2) the individuals or group under study. Sometimes referred to as "study subjects." See also human subjects.
Pareto chart - a specialized chart useful for non-numeric data that ranks categories from most frequent to least frequent. Bars are arranged in descending order of height from left to right.
Peer review - 1) the process of one student providing feedback to another student as part of the instruction; 2) the process of one instructor providing feedback to another instructor about their teaching, usually through observation.
Performance-based outcome - learner outcomes based on standards that are measurable; often demonstrated through products or behaviors.
Population - the largest group under study that includes all individuals meeting the defined characteristics.
Portfolio - collection of documents or products for the purpose of representing capabilities, skill improvement or change over time
Post-task surveys - a survey used in usability testing. It is given immediately after a certain task is completed in order to gain specific feedback on the task or on how participants’ perceptions change over time.
Post-test - a means to measure knowledge or ability after an instructional activity, innovation or program is implemented, using one or more research methods. Also sometimes referred to as a "post-assessment."
Post-test surveys - a survey used in usability testing that addresses participants’ overall perception of the product such as satisfaction or ease of use
Practical significance - a conclusion determined by an effect size statistic that indicates a research finding is practically important or useful in real life.
Pre-test - a means to measure existing knowledge or ability prior to the implementation of an instructional activity, innovation or program.
Probability sample - when each member of the population has a specified likelihood of being chosen. See also sample.
Product - any student work designed to demonstrate learning.
Product analysis - assessment of student learning through the examination of student products such as student portfolios, assignments, or writings. The type of analysis used may range from a comparison of student products with pre-determined quality standards, to narrative descriptions, to subjective assessments of quality.
Program - See instructional program.
Program description - a description of the mission and objectives of the instructional program that includes the instructional activity or innovation being evaluated or assessed, a statement of need, the expected effects, available resources, the program's stage of development, and the instructional context.
Program evaluation - a holistic examination of an instructional program including the program's environment, client needs, procedures, and instructional outcomes.
Q
Qualitative data - nonnumeric information such as conversation, text, audio, or video.
Qualitative research - research that follows an inductive research process and involves the collection and analysis of qualitative (i.e., non-numerical) data to search for patterns, themes, and holistic features.
Qualitative research methods - research methods that focus on gathering nonnumeric information using focus groups, interviews, document analysis, and product analysis.
Quantitative data - numeric information including quantities, percentages, and statistics.
Quantitative research - research that follows a deductive research process and involves the collection and analysis of quantitative (i.e., numerical) data to identify statistical relations among variables.
Quantitative research methods - research methods that focus on gathering numeric information, or nonnumeric information that is easily coded into a numeric form, such as a survey.
Quasi-experiment - See field experiment.
Quota sample - a sample created by gathering a predefined number of participants from each of several predetermined categories. The selection process within each category may be random. E.g., dividing a class into groups of males and females and randomly selecting twenty-five participants from each category. See also random sample and stratified sample.
R
Random assignment - an experimental technique for randomly assigning participants to different treatments or groups.
Random sample - a subset of the population in which every member of the population has an equal likelihood of being selected. See also sample, quota sample, and stratified sample.
Range - measure of dispersion reflecting the difference between the largest and smallest scores in a set of data
Ranking - ordering or sequencing a level of performance compared to a set criterion or how others perform
Rating - a systematic estimation of the degree of some attribute based on a numerical or descriptive continuum
Recommendations - actions for consideration resulting from the assessment or evaluation that go beyond simply forming judgments about the efficacy of an instructional activity, innovation or program. Recommendations may include suggestions for changing existing instructional activities and a plan for ongoing assessment.
Relational analysis – an approach for content analysis where implicit or explicit concepts are identified and the analysis involves exploring the relations between/among concepts within the content.
Reliability - the consistency of a measure, instrument, or observer. An instrument is said to have high test-retest reliability if it yields similar results when given to the same sample at two different times.
Reliable - see Reliability.
Research context - environmental factors that may influence the research process and/or the instructional outcomes under study including geographic location, the physical environment, time of day, social factors, and demographic factors (e.g., age, sex, income).
Research design - a plan outlining how information is to be gathered for an assessment or evaluation that includes identifying the data gathering method(s), the instruments to be used/created, how the instruments will be administered, and how the information will be organized and analyzed.
Research methods - systematic approaches to gathering information that rely on established processes and procedures drawn from scientific research techniques, particularly those developed in the social and behavioral sciences. Examples include surveys, focus groups, interviews, and observation. Sometimes referred to as "evaluation methods" or "assessment methods."
Research process - the ordered set of activities focused on the systematic collection of information using accepted methods of analysis as a basis for drawing conclusions and making recommendations.
Research question - a question that specifically states what the researcher will attempt to answer.
Resources - refers to the time, human skills and knowledge, technology, data, money, and other assets available to conduct an assessment or evaluation.
Respondent - an individual who participates in the assessment or evaluation process by providing information using an instrument provided by the evaluator. See also participant and human subject.
Response rate - the number of individuals who respond to an instrument divided by the total number of individuals contacted.
Response scale - a set of ordered choices that are provided to answer a survey question or rate an observed behavior.
Rubric - a systematic scoring guideline to evaluate behaviors, documents or performance through the use of detailed performance standards.
S
Sample - a defined subset of the population that is chosen based on its ability to provide information, its representativeness of the population under study, and/or factors related to the feasibility of data gathering such as cost, time, participant accessibility, or other logistical concerns. See also cluster sample, non-probability sample, probability sample, random sample, quota sample, stratified sample and convenience sample.
Self-assessment - the process of judging one's own performance for the purpose of self-improvement.
Self-report instrument - an instrument through which individuals record their own recollections, feelings, judgments and attitudes about an event or phenomenon.
Single-group experiment - a type of experiment where a pre-test and a post-test are used to measure the effect of an instructional activity, innovation, or program on a single sample with no control group.
Sponsor - the person or organization that has hired or directed the evaluator to undertake a project. The sponsor is always a stakeholder.
Sponsor responsibilities - the activities in a program evaluation that the sponsor is expected to assist with or perform. These responsibilities include assisting directly in formulating the research design and creating the evaluation plan, providing access to relevant information such as documents, personnel, or students, and providing information and/or insight about instructional objectives and program operations.
Stakeholders - the individual(s) and organization(s) that will be affected by the results of the assessment or evaluation. Stakeholders may include individuals involved in program operations, those served or affected by the program, and the intended users of the assessment or evaluation. The project sponsor is always a stakeholder.
Stakeholder needs - generally reflect the central questions the stakeholders have about the instructional activity, innovation or program. Determining stakeholder needs helps the researcher to focus the project so that the results are of the greatest utility.
Standard deviation - a measure of the dispersion of individual scores around the mean of a variable.
Standards - measurable criteria that provide the basis for forming judgments concerning the performance of an instructional activity, innovation or program.
Standardized interview - a one-on-one directed conversation with an individual in which the interviewer asks the same questions with the same wording in the same order for all interviewees.
Statement of need - description of the problem or opportunity addressed by the instructional activity, innovation or program.
Stratified sample - when the population is divided into categorical subgroups (strata) with the sample chosen from each subgroup in proportion to its size in the population. E.g., the population is 60% female and 40% male so a stratified sample would have 60% of its members chosen from the female group and 40% of its members chosen from the male group. The selection process within each subgroup may be random. See also random sample and quota sample.
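A minimal sketch of the 60%/40% example above, drawing a proportional stratified sample of 50 from a hypothetical roster of 600 females and 400 males:

```python
import random

# Hypothetical population roster, divided into strata.
population = {
    "female": [f"F{i}" for i in range(600)],
    "male":   [f"M{i}" for i in range(400)],
}
total = sum(len(group) for group in population.values())
sample_size = 50

sample = []
for stratum, members in population.items():
    n = round(sample_size * len(members) / total)  # in proportion to group size
    sample.extend(random.sample(members, n))       # random selection within the stratum

print(len(sample))  # 30 female + 20 male = 50
```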
Statistical power - the ability of an experimental design or inferential statistic to detect an effect of a variable.
Statistical significance - the demonstration that the probability of obtaining a finding by chance alone is relatively low.
Student assessment - the evaluation of student learning through assignments, exams, and portfolios.
Subjective assessment - an assessment of quality made without a pre-established measure or standard and thus based solely on the opinion of the evaluator.
Subjects - See participants, respondents, or human subjects.
Summated scale - a measure that consists of a collection of questions intended to reveal the level of a theoretical variable that is not readily measurable by a single question.
Summative evaluation - a study conducted at the end of an instructional program or program cycle to provide decision makers or potential consumers with judgments about the program's merit with a focus on making decisions about program continuation, termination, expansion or adoption
Survey - an ordered series of questions about attitudes, behaviors or personal characteristics administered to individuals in a systematic manner.
Synchronous learning - on-line interaction, instructor-to-student or student-to-student, that occurs at the same time but not necessarily in the same place; similar to electronic "chat." (see also asynchronous learning)
Synthesis - combining data and information from multiple sources, or of ratings and judgments on separate scoring dimensions in order to arrive at a result or conclusion.
Systematic sampling – when the sample is selected from the population at a regular/systematic interval (e.g., every 5th participant from a subject pool is selected).
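A minimal sketch of the "every 5th participant" example, using a hypothetical pool of 100 participants:

```python
# Hypothetical subject pool of 100 participants.
subject_pool = [f"participant_{i}" for i in range(1, 101)]

k = 5
sample = subject_pool[k - 1::k]  # every 5th participant, starting with the 5th
print(len(sample))               # 20 participants selected
```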
T
t-test – a data analysis procedure that assesses whether the means of two groups are statistically different from each other
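A minimal sketch of an independent-samples t-test in SciPy, with hypothetical scores for two groups:

```python
from scipy import stats

# Hypothetical exam scores for two independent groups.
group_1 = [82, 90, 78, 85, 88, 91]
group_2 = [75, 80, 72, 79, 83, 77]

# Tests whether the two group means are statistically different.
t_stat, p_value = stats.ttest_ind(group_1, group_2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```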
Teaching assessment - the systematic examination of a particular aspect of instruction (e.g., content delivery, method, testing approach, technological innovation) to determine its effect and/or how that aspect of instruction can be improved.
Technology Delivered Instruction (TDI) - instruction presented or facilitated through the use of instructional technology
Test monitor – an individual who directly observes and records data from a usability test.
Test-retest reliability - a method of assessing the reliability of a questionnaire by administering the same or parallel form of a test repeatedly.
Testing effect - a possible limitation within controlled (i.e., pre-test/post-test) experiments. Participants may perform better the second time they are measured using the same or a similar test (i.e., the post-test measure) because of practice, familiarity, or awareness.
Theory Into Practice (TIP) - when theory is applied to a real-world instructional context.
Top-down planning - a planning approach that begins with a broad theme and systematically reduces the theme level by level into categories. Also called deductive reasoning.
Triangulation - using multiple research methods to gather information or multiple sources of information on one topic or research question usually with the intent of improving reliability and/or validity. Sometimes referred to as using "multiple measures."
U
Usability - the ease of use, learnability, efficiency, and error tolerability of a particular product.
V
Validated scale - a collection of questions intended to identify and quantify constructs based on educational theory that are not readily observable such as knowledge, abilities, attitudes, or personality traits.
Validation - the process of gathering evidence to provide a scientific basis for proposed score interpretations from a measure or an instrument.
Validity - the degree to which the theory and information gathered support the proposed uses and interpretations of a measure or an instrument.