Tuesday, June 28, 2011


English for Young Learners:
Lesson Planning and Classroom Management

by Ignasia Yuyun

INTRODUCTION

Management skills are very important for teachers in supporting successful teaching and learning activities in the classroom. Providing a structured environment in which children consistently know what to expect is therefore the key to classroom management. Although specific techniques vary from teacher to teacher, the common denominator is the understanding that when students are well managed, there is a better opportunity for everyone to learn.
Teachers create lesson plans to communicate their instructional activities for specific subject matter. Almost all lesson plans developed by teachers contain student learning objectives, instructional procedures, the required materials, and some written description of how the students will be evaluated. Many experienced teachers reduce lesson plans to a mental map or short outline; new teachers, however, usually find detailed lesson plans indispensable. Learn to write good lesson plans: it is a skill that will serve you well as a teacher. If you are really serious, become proficient in writing effective learning objectives, since all lesson plans begin, or should begin, with an objective. Toward that end, a self-instructional, interactive program has been developed that teaches this important skill within the context of lesson planning.

A. CLASSROOM MANAGEMENT

Classroom management skills relate to three main areas: creating and maintaining motivation, maintaining classroom control and discipline, and organizing learning activities. Classroom management will also be influenced by teaching styles, the amount of pupil independence that is acceptable in the context, the amount of competition and cooperation the teacher establishes in the class, and the role and use of the L1 in the class.
1. Motivation
                Motivation refers to feelings, a goal, a mental process, a certain type of behavior, or a personal characteristic. More recently, motivation has been seen as a set of beliefs, thoughts and feelings that are turned into action. According to Dörnyei (1998:117), “motivation has been widely accepted by both teachers and researchers as one of the key factors that influence the rate and success of second/foreign language (L2) learning”. In line with this, Cajkler and Addelman (2000) suggest that in order to keep levels of motivation high, language teachers should adopt a ‘critical attitude’ to the activities and tasks they use and the expectations they create, so as to develop a healthy questioning of the work they prepare for their pupils and the schemes of work they follow.
                We have already referred to the need to provide a classroom atmosphere which promotes pupils’ confidence and self-esteem so that they can learn more effectively and enjoyably. This echoes two key factors in motivating learners that Dörnyei describes: first, how far a learner expects to be successful in doing the task, and second, how important the learner thinks success in the task is.

2. Classroom control and discipline
Here we shall consider five main areas that help to create an effective learning environment.
a.    Establishing routines
                When children enter school they are faced with a new set of social routines and relationships which even a kindergarten may not have prepared them for. According to Nelson (1977) children develop scripts, or mental maps, to understand routines in their lives in the same way that adults do.
Young children gradually become familiar with established classroom routines that help to make them feel confident. Anxious or immature learners will tend to react negatively to changes in the normal classroom pattern, so it is a good idea to develop familiar patterns with young learners in their first year of schooling.
The children may become very bewildered and uncomfortable if the teacher talks to them all the time in a strange foreign language. Gradually introducing English for short periods of time through songs or rhymes will help to ease pupils in slowly.
b. Finding a balance
                Finding the right balance between order and flexibility is very important. The most effective environment for learning is often found in a classroom where the teacher is firm but kind and encouraging so that pupils, especially very young children, feel confident and happy.
                One way of establishing this is quickly getting to know pupils’ names, as this will help to create a secure and friendly atmosphere. It will also enable the teacher to control and discipline the class much more effectively.
c. Getting the pupils’ attention
                With young pupils, teachers may need to establish a signal for getting their attention. To gain the attention of the whole class, teachers can follow these steps:
·         Firmly name the children still talking.
·         Start a well-known activity or routine or give instructions for a new activity to keep the pupils’ attention.
·         Wait for quiet before beginning a new activity.
·         Once these routines have become established, teachers should be able to cut down on the amount of time they spend disciplining pupils.
d. Finding an acceptable noise level
                Pair and group work inevitably produce some noise, and most language teachers would find this acceptable as long as the talk is ‘on task’. If the noise level rises too much, pick out the noisiest group, name one of the children in the group and gesture to them to quieten down. Remember that the noisier the teacher is, the noisier the children will become.
e. Giving praise
                Teachers can quickly establish a good relationship with pupils by praising good behavior, commenting on good work, making helpful suggestions and encouraging pupils’ efforts. This is important in setting the right atmosphere, providing a good model for children to follow and boosting pupils’ confidence and self-esteem.
                When giving praise, teachers should pinpoint what they like by being specific, and give praise with sincerity and enthusiasm in a variety of ways. They should use praise consistently and frequently, especially when pupils are first learning something, and can praise groups or the whole class as well as individuals. Finally, teachers should vary to whom they give praise, looking for and naming at least two children who are doing what they want, so as to avoid ‘favoritism’.

3. Organizing learning activities
                When children endlessly repeat activities on the same topics, or when language activities are pitched at the wrong level or are too mechanical, they are liable to become frustrated and noisy. In some contexts pupils’ main motivation is to pass English tests, and they may be less willing to engage in activities which they think do not prepare them for these. Therefore, teachers must strike an appropriate balance between teaching to the tests and other language learning.
a. Dealing with bilingual pupils
                If these pupils’ learning needs are not catered for, they may become bored or disruptive, which is a pity as their skills can be seen as a bonus. Use strategies that encourage these children to:
·         ‘show and tell’ some of their experiences in the country of the target language;
·         explain the instructions for games to groups, or even act as the teacher in demonstrating a game;
·         help others in groups;
·         make recordings of stories or other listening activities;
·         write stories, instructions or descriptions for other pupils to read and act upon;
·         make games which require sentence cards or board games and ‘Chance’ cards;
·         test other pupils, e.g. on spelling;
·         complete individualized work at a higher level.
b. Managing pair and group work
                Berman (1998) suggests that very young learners prefer working alone and can be reluctant to share. For some activities it is often easier and more fruitful to organize work in pairs than in groups, since pupils can easily work with the person next to or behind them.
                There are several ways of organizing groups to work together. The easiest is to ask pupils who sit near one another to form a pair or group. Another method is to use the children’s own choice: such friendship groups are probably the most popular with pupils, and they may work well. Other ways of organizing pupils into groups include choosing group members according to features of a project the pupils may be doing, or language they have just learned.
c. The effect of different kinds of classroom activities
                Activities which usually engage and stir pupils are those where the learners are physically or mentally active and thus more involved in their learning. These include critical thinking activities, physical activities, and calm activities.
                Here are some general principles for using stir and settle activities:
·         Start a lesson with a settling activity to calm pupils down if they seem very lively or restless.
·         Make sure lively, stirring work returns to something calmer and more settling.
·         Make sure everyone has something to do, especially in group work.
·         Avoid activities which are emotionally or intellectually ‘empty’ or meaningless.
·         Try not to have a sequence of only settling or stirring activities throughout the whole class.
d. The mixed ability class
                Many textbooks assume that all pupils are at the same language level, whereas the average classroom normally contains a very wide range of abilities. The following checklist may help teachers to pinpoint difficulties which may have arisen from the organization of learning activities:
·         Was the task given to pupils too difficult?
·         Was the task rather boring and mechanical with too little contextualization or focus on meaning?
·         Was the task too easy?
·         Was there too much ‘dead time’?
In each lesson there should be a core of the most important concepts, skills and language that should be straightforward enough for everyone to do. Teachers may then need extension activities to challenge the more able pupils and more support activities for the less able. Making activities that cater for different levels is called mixed ability teaching or ‘differentiation’. To do this successfully, teachers can organize differentiated learning activities by considering the following seven key factors: the text used, the task used, the support provided, the outcome demanded, the ability group used, the range of activities used, and the choice of activity.
When providing more support, teachers can choose a selection from the following kinds of scaffolding:
·         Breaking down the learning sequence into smaller steps
·         Simplifying the language, narrowing the range of possibilities
·         Using more spoken language before moving onto written language
·         Translating abstract concepts into more concrete ones
·         Using physical movement
·         Using more audio-visual support
·         Providing a greater variety of activities
e. Time management
                It is very useful to plot realistic timings for the completion of certain activities; this avoids having to rush, which may lead to inattention or ineffective learning.
When ending a lesson, here are several points to bear in mind:
·         Plan
·         Finish work on the main teaching point a little early rather than late
·         Take time to explain homework beforehand and give an example
·         Plan a teacher-led review session at the end of each class
f. Classroom organization and layout
                Careful planning of the classroom layout is very important, as it helps to create an organized and secure atmosphere. There are six points to consider:
·         A grid plan made to scale is especially useful if you have a large class squeezed into a small area
·         Think carefully about whether you want the children to sit in rows or groups
·         If you decide to have a ‘teaching base’, make sure you have a clear view of the whole room
·         A story corner for younger children is also a good idea.
·         You may also like to include a listening or computer corner, screened off by cupboards or screens, to provide a quiet area for listening to cassettes of stories or for a computer activity
·         Make sure you include some areas to display children’s work, using notice-boards, screens or a table
g. Keeping teaching records
                Teaching records are a kind of teaching log, memory aid, or reminder of the language points or the stories and topics which have been covered in a term.

B. LESSON PLANNING

Rivers (1981: 484) reminds us that ‘A lesson is not a haphazard collection of more or less interesting items, but a progression of interrelated activities which reinforce and consolidate each other in establishing the learning towards which the teacher is directing his or her efforts’.
Ur (1996: 213) describes a lesson as follows: ‘A lesson is a type of organized social event that occurs in virtually all cultures. Lessons in different places may vary in topic, time, place, atmosphere, methodology and materials, but they all, essentially, are concerned with learning as their main objective, involve the participation of learner(s) and teacher(s), and are limited and pre-scheduled as regards time, place, and membership’.
1.    What is a good lesson?
A good lesson is adaptable and flexible; is a back-up system; has clear objectives; has a variety of activities, skills, interaction, materials; caters for individual learning styles; has interesting, enjoyable content; has an appropriate level of challenge and is well prepared, well planned and well timed.
In line with this, Kizlik stated that good lesson plans do not ensure students will learn what is intended, but they certainly contribute to it. Think of a lesson plan as a way of communicating, and without doubt, effective communication skills are fundamental to all teaching. Lesson plans also help new or inexperienced teachers organize content, materials, and methods. When you are learning the craft of teaching, organizing your subject-matter content via lesson plans is fundamental. Like most skills, you'll get better at it the more you do it and think of ways of improving your planning and teaching based on feedback from your students, their parents, and other teachers. Developing your own lesson plans also helps you "own" the subject matter content you are teaching, and that is central to everything good teachers do. (http://www.adprima.com/wlo5.htm)

2. Why plan a lesson?
Children learn more easily when they know what to expect in a lesson and what the teacher expects of them. It makes them feel more secure and more confident. It also enables them to predict situations and the language and behavior likely to be used in them.
A well-planned lesson makes a teacher feel more confident and professional. A lesson planned in advance in all respects, with clear aims, clear statements of how the aims are to be achieved, how time will be managed at each stage, how the class is to be arranged, and which visual and technical aids will be used, with materials prepared in advance, means that a teacher can give full attention to the pupils before, during and after the lesson, and to parents should they have contact with them.
The process of reflection helps teachers monitor their teaching and identify their strong and weak points, as well as evaluate their pupils’ learning and form the basis for future planning.
Finally, lesson planning provides accountability by providing a record of work which can be shown to school authorities, inspectors and parents, or used by another teacher who may have to substitute for the class.
3.    What is involved in the lesson planning process?
a. Syllabus
                A syllabus provides a list of the language items that are to be taught, the order in which they are to be taught, and how long it should take to teach them. The syllabus is usually presented through the contents page, the course map or a scope and sequence chart. In addition, the accompanying teacher’s guide to a course will usually provide more detailed guidelines on how to teach each lesson.
                Initially, less experienced teachers are likely to follow a plan closely but, with more experience, will learn to adapt course books and lessons in a much more flexible way according to the pupils’ needs and interests.
b. Learners’ needs
                The needs of the children and how they learn must be considered first, so that teachers achieve a balance between the language aims of the syllabus and the needs of the children, which involve their all-round general education.
A major consideration when planning a lesson is how to provide optimal conditions for learning so children are motivated and interested in learning, understand what they are being asked to do and why, get plenty of meaningful exposure to the language, get plenty of variety and are allowed to work at their own pace, experience success, feel confident and secure to try out language, have plenty of opportunities to use language, and opportunities to review and reflect on what they have done and why.
Other aspects to be considered are the linguistic and cognitive demands of language activities. We need to ensure that the tasks we ask our pupils to carry out in the language classroom are ones which, with a reasonable degree of effort or challenge, can be completed successfully. This means that we need to be able to evaluate tasks and materials in terms of the linguistic and cognitive demands they make on our learners, and to be aware of the kinds of tasks pupils can cope with at specific stages of their development.
c. Content areas, materials, and methodology
                Content areas provide the material as well as suggest the way things should be taught. Teachers need to evaluate whether the subject matter or content, material and the methodology is entirely appropriate for pupils.
                Methodological preparation can help pupils understand the reasons for choosing a certain methodology; teachers may also decide to modify the methodology slightly, adapting and supplementing materials to keep it more in line with pupils’ expectations.
4.    How can I structure a lesson, select, sequence, and time activities?
The typical structure of most lessons consists of three main stages: a beginning, a middle and an end.
The selection and sequencing of activities throughout a lesson needs careful consideration.
·         Some activities settle children, either positively, in the sense of calming them, or negatively, by boring them into some kind of unresponsive stupor.
·         Other activities stir pupils, in the sense of either stimulating or unsettling them.
·         For any activity, teachers need to know more than what language learning it will encourage.
·         Teachers must also be aware of what general behavior an activity is likely to encourage; this will help them judge whether the activity or sequence of activities is a good choice for a particular lesson or group of pupils.
·         Teachers need to consider the involvement factor when selecting and sequencing activities.
·         Teachers need to think in terms of variety: first consider how to offer variety, and then how best to combine different activities (types of activities, types of interaction, language skill, tempo/pace, stir/settle, involve/occupy, difficulty, level of pupil responsibility, classroom arrangement, materials).
Some general guidelines teachers may like to consider are:
·         Begin and end the lesson so that children perceive their English lesson as an ‘event’ which has a specific structure: a beginning, a middle and an end.
·         Depending on how long the lesson is, consider putting harder activities earlier, as pupils will probably be fresher and more energetic.
·         Decide at which point it is best for the class to be lively.
·         Think carefully about transitions from one stage or activity cycle to the next.
·         End on a positive note.
Good time management skills facilitate the smooth running of a lesson. Knowing about the linguistic and cognitive demands that certain lessons make on your pupils will help you judge how long an activity is likely to take.

5.       How can I write a lesson plan?
There is no ‘correct’ way to write a lesson plan, but it should give a clear picture of what you intend to do (aims) and how you intend to achieve them (procedures).
a.       Use a lesson planning/record sheet


Date ________________ Class _________________ Length of lesson ___________________
Materials ____________________________________________________________________

For each of the following aims, note the Plan (what I intend to do) and, after the lesson, the Record (what I actually did):
·         Grammatical structures
·         Functions
·         Vocabulary
·         Pronunciation
·         Skills
·         Learning to learn
·         Other: social, psychological, cultural, educational/cross-curricular, citizenship education
·         Classroom management
·         Assumptions
·         Anticipated difficulties
Evaluation: Did I achieve my aims? What worked well? Why? Why not? What would I do differently next lesson? Why?


b. Decide how to achieve aims
The procedure below applies the Plan-Do-Review model, which provides clearly defined stages and combines the development of meta-cognitive and cognitive strategies. The different stages of the plan include:
·         Plan: Beginning the lesson
-      Warm-up
-      Reviewing of work covered in previous lesson
-      Informing pupils of the lesson aims
·         Do: Activity cycle
-      Plan: Activity cycle(s)
-      Do: Activity cycle(s)
-      Review: Activity cycle(s)
·         Review: Ending the lesson


6.       How can I evaluate a lesson?
To help in evaluating lessons teachers may like to tape or video record the lessons, ask pupils to comment on the lessons or invite a colleague to sit in on a lesson and observe. Afterwards, answer the following questions individually:
  • Did I achieve the aims stated on my lesson plan? If not, why not?
  • Was my lesson different from my plan in any way? How and why?
  • How did I move from one stage of the lesson to the next? What did I say to the class?
  • Did I keep to my timing? If not, why not?
  • Were my pupils active and involved in the lesson? Why? Why not?
  • Did my pupils learn what I set out to teach? How do I know?
  • Did my pupils respond positively to the materials and in English?
  • Were there any problems? If yes, why?
  • What would I do differently next time? Why?
  • What did I do better this time than ever before?
Then, come together and compare the comments. Finally, lesson planning differs from teacher to teacher and each teacher has their own preferred way of planning a lesson.

III. CONCLUSION

            Teachers are expected to manage the classroom effectively in order to run teaching and learning activities successfully, and so they need to know some effective classroom management techniques. Teachers are also expected to create their own lesson plans; when they do, they have taken a giant step toward "owning" the content they teach and the methods they use, and that is a good thing. Acquiring this skill is far more valuable than being able to use lesson plans developed by others. It takes thinking and practice to hone this skill, and it won't happen overnight, but it is a skill that will help to define someone as a teacher. Knowing "how to" is far more important than knowing "about" when it comes to lesson plans, and is one of the important markers along the way to becoming a professional teacher. The corollary is, of course, that there is no one "best way" to plan lessons. Regardless of the form or template, there are fundamental components of all lesson plans that teachers should learn to write, revise, and improve. The old adage, "Practice doesn't make perfect; perfect practice makes perfect" is at the core of learning this skill.


Brewster, J and Ellis, G. 2002. The Primary English Teacher’s Guide. England: Pearson Education Limited.
Laslett, R and Smith, C. 2002. Four rules of class management, in Pollard, A(ed). 2002. Readings for Reflective Teaching, pp218–21 (Reading 11.4). London
Linse, C. T. 2005. Practical English Language Teaching: Young Learners. New York: McGraw-Hill Companies, Inc.
Paul, D. 2003. Teaching English to Children in Asia. Hong Kong: Pearson Education Asia Limited.



Friday, June 24, 2011

Research Glossary


A
Active follow-up - reminding sponsors or stakeholders of their planned uses for the study results to help ensure that evidence is not misinterpreted and is not applied to questions other than those that were the central focus of the assessment or evaluation.
Analysis of variance (ANOVA) - a procedure for determining whether significant differences exist between two or more sample means.
Anonymity - research participants cannot be identified on the basis of their responses.
Assessment - a process or tool integrated into the instructional activity, innovation or program, designed to improve the quality of instruction and the resulting learning outcomes (see also instructional assessment).
Assessment testing – usability testing used midway in product development or as an overall usability test for technology evaluation. Evaluates real-time trials of the technology to determine the satisfaction, effectiveness, and overall usability.
Assessment plan - see evaluation plan
Asynchronous learning - interaction between an instructor and students that occurs during unscheduled time periods and is usually mediated through an electronic discussion board that allows participants to post and respond to ideas, comments, and/or opinions at different times (see also synchronous learning).
Attrition - research participants who withdraw or are removed from a study prior to its completion.
Audience - consumers of the evaluation. Includes those who will use the evaluation and all stakeholders.
B
Baseline - the condition or situation prior to an intervention
Benchmark - to collect data on the performance of similar innovations or programs to use for comparison.
Bias - 1) a systematic distortion of research results due to the lack of objectivity, fairness, or impartiality on the part of the evaluator or assessor; 2) disparities in research or test results due to using improper assessment tools or instruments across groups.
Blackboard - an electronic course management tool that enables faculty and students to communicate and collaborate online through real-time chat forums, asynchronous discussion boards, Email, and online file exchanges. The software also features an online grade book and survey/quizzing tool.
Blended learning - learning that combines face-to-face instruction with on-line instructional resources
Bloom’s taxonomy – a classification scheme of intellectual behavior developed by Benjamin Bloom, who identified six levels of cognitive learning, from the simple recall of facts (Knowledge), as the lowest level, through the increasingly more complex levels of Comprehension, Application, Analysis, Synthesis, and Evaluation.
Bottom-up planning – a planning approach that begins with the data collected and systematically combines data into broader and common categories and themes. Also called inductive reasoning.
C
Case study research – a research approach that focuses on a detailed account of one or more individual cases (i.e., specific students or a specific class)
Central questions - research questions that state what specific aspects of the instructional activity, innovation or program will be examined. Questions are determined by the intended uses of the assessment or evaluation. See also research process.
Ceiling effect - The effect of an intervention is underestimated because the dependent measure (e.g., scores on an exam) cannot distinguish between participants who have somewhat high and very high levels of the construct.
Chi-square - a statistical procedure used with data that fall into mutually exclusive categories (e.g., gender) that tests whether one variable is independent of another.
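As a small illustration of the idea behind this entry, the sketch below computes the chi-square statistic for a two-way frequency table by comparing the observed counts with the counts expected if the two variables were independent. The function name and the data are invented for the example; in practice a library routine would also report the significance level.

```python
# Illustrative sketch: chi-square statistic for a table of counts in
# mutually exclusive categories (e.g. gender x pass/fail).
# The counts below are made up for demonstration.

def chi_square(table):
    """Return the chi-square statistic for a 2-D table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count if the row and column variables are independent.
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# 20 of 50 women and 30 of 50 men passed: is outcome independent of gender?
observed = [[20, 30],
            [30, 20]]
print(chi_square(observed))  # 4.0 for this table
```

A larger statistic means a bigger departure from what independence would predict; whether that departure is "significant" is then judged against a chi-square distribution.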
Classroom Performance System (CPS) - a software/hardware system that allows instructors to ask students multiple-choice or numeric questions and receive immediate, in-class feedback using a portable receiver, student remote control response pads, computer projection equipment or response pads with LCD screens and response analysis software. Responses are anonymous unless the instructor knows the specific response pad number for each student.
Cluster sample - when the population is divided into groups (clusters) with a subset of the groups chosen as a sample. After groups are chosen, all or a sample of individuals in each group are chosen for inclusion in the study. Also called a multistage or hierarchical sample.
Coding - the process of translating raw data into meaningful categories for the purpose of data analysis. Coding qualitative data may also involve identifying recurring themes and ideas.
Comparative testing – usability testing that compares two or more instructional technology products or designs and distinguishes the strengths and weaknesses of each. 
Conceptual analysis – an approach to content analysis in which implicit or explicit concepts are chosen for examination, and the analysis involves quantifying and tallying the concepts’ presence within the content.
Conclusion - the interpretation of study findings based upon the information gathered. Conclusions may be based on judgments made by comparing the findings and interpretations regarding the instructional measures against one or more standards.
Confidence interval - an estimated range of values calculated from a sample that is likely to include an unknown population value.
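To make the definition concrete, here is a small sketch (function name and sample values invented) that computes an approximate 95% confidence interval for a population mean from a sample. It uses the normal approximation with z = 1.96; for small samples a t-value would be more accurate.

```python
import math

# Illustrative sketch: approximate 95% confidence interval for a mean,
# using the normal approximation (z = 1.96). The sample is made up.

def mean_ci_95(sample):
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    half_width = 1.96 * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

scores = [8, 10, 12, 10]
low, high = mean_ci_95(scores)
print(round(low, 1), round(high, 1))  # roughly 8.4 to 11.6
```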
Confidentiality - the identities of research participants are known to the researcher but are not revealed.
Confounding variable - a variable that may affect the behavior or outcome you want to examine but is not of interest for the present study.
Content analysis - the process of organizing written, audio, or visual information into categories and themes related to the central questions of the study. This approach is especially useful in product analysis and document analysis.
Context sensitivity - being aware when doing research that the persons and organizations under study have cultural preferences that dictate acceptable ways of asking questions and collecting information. Also called 'cultural sensitivity.'
Continuous variable - a variable that can take on any value within the limits of its range. For example, age and temperature are continuous variables.
Control group - a group that is not subjected to an instructional activity, innovation or program so that it may be compared with the experimental group, which receives the instructional intervention. Also called a comparison group.
Controlled experiment - a type of experiment in which students are randomly assigned to either an experimental group (the group that experiences the instructional stimulus) or a control group (the group that does not experience the instructional stimulus) and environmental factors are controlled in some manner.
Convenience sample - a sample of the population chosen based on factors such as cost, time, participant accessibility, or other logistical concerns. At least some consideration is typically given to how representative the sample is of the population. See also random sample.
Correlation - a statistical relation between two or more variables such that systematic changes in the value of one variable are accompanied by systematic changes in the other. The relation is represented by a statistic that can vary from -1 (perfect negative correlation) through 0 (no correlation) to +1 (perfect positive correlation).
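The statistic described in this entry (Pearson's correlation coefficient) can be sketched in a few lines; the function name and data here are invented for illustration only.

```python
import math

# Illustrative sketch: Pearson correlation coefficient, which ranges
# from -1 through 0 to +1 as the glossary entry describes.
# The data below are made up.

def pearson(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4]
test_scores = [55, 65, 75, 85]
print(pearson(hours_studied, test_scores))  # 1.0: a perfect positive correlation
```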
Correlational research – a procedure in which subjects’ scores on two variables are simply measured, without manipulation of any variables, to determine whether there is a relationship.
Course documents - see instructional documents.
Criterion - a measure or standard by which a judgment is made.
Cross-tabulation - a table that illustrates relationships between responses to two different survey questions by using response choices to one variable as column labels and response choices to a second variable as row labels.
D
Data - information gathered for the purpose of research, assessment, or evaluation.
Data analysis - systematically identifying patterns in the information gathered and deciding how to organize, classify, interrelate, compare, and display it. These decisions are guided by the central questions , the types of data available, and by input from stakeholders .
Data quality - the appropriateness and integrity of information collected and used in an assessment or evaluation.
Data quantity - the amount of information gathered for an assessment or evaluation.
Data sources - documents, people and observations that provide information for the assessment or evaluation.
Deductive reasoning – a logic model in which assumptions or hypotheses are made on the basis of general principles.
Dependent variable - an observed variable in an experiment or study whose changes are determined by the presence or degree of one or more independent variables.
Dissemination - process of communicating the procedures and findings from an assessment or evaluation to relevant audiences in a timely, impartial, and consistent fashion.
Document analysis - the systematic examination of instructional documents such as syllabi, assignments, lecture notes and course evaluation results. The focus of the analysis is the critical examination of the documents rather than simple description. See also content analysis.
E
Educational research - a rigorous, systematic investigation of any aspect of education including student learning, teaching methods, teacher training, and classroom dynamics.
Educational Technology - See instructional technology.
Effectiveness - the degree to which an instructional activity, innovation or program yields the desired instructional outcome. See also expected effects.
Effect size - a measure of the strength of the relationship between two variables.
EGradebook - Created by CTL specifically for UT Austin faculty use, eGradebook is a Web-accessible tracking system that allows instructors and designees to electronically assign, post, and upload grades to the Registrar's office. Using UT EIDs and passwords, the system assures confidentiality in accordance with UT policy.
Evaluation - See instructional evaluation.
Evaluation methods - See research methods.
Evaluation plan - detailed description of how the evaluation will be implemented that includes the resources available for implementing the plan, what data will be gathered, the research methods to be used to gather the data, a description of the roles and responsibilities of sponsors and evaluators, and a timeline for accomplishing tasks.
Evaluator responsibilities - typically include management of the overall project, deciding what data is necessary, deciding which research methods to use in gathering data, gathering data in a manner that complies with standard research ethics and human subjects protection, data analysis, and report writing; the evaluator may also develop an appropriate timeline and keep sponsors apprised of the project's progress and any changes to the evaluation plan.
Expected effects - refers to what the instructional activity, innovation or program is supposed to accomplish to be considered successful. See also effectiveness.
Exam blueprint - a chart listing each question in an exam and the learning objective, difficulty level, and content topic for each.
Experiment - refers to a variety of research designs that use before-after and/or group comparisons to measure the effect of an instructional activity, innovation or program. See also controlled experiments, field experiments, and single-group experiments.
Experimental group - a group that receives a treatment, stimulus or intervention in an experiment. See also control group.
Explorative testing – usability testing performed early in product development to assess the effectiveness and usability of a preliminary design or prototype, as well as users’ thought processes and conceptual understanding.
F
Factor analysis - a statistical technique that uses correlations between variables to determine the underlying dimensions (factors) represented by the variables.
Feedback devices - include a variety of formative assessment techniques based upon a learner-centered, context-specific approach to instruction, focusing primarily on qualitative responses from students. Also referred to as Classroom Assessment Techniques (CATs), examples include minute papers, one-sentence summaries, journals, student self-assessments, and narrative reactions to assignments, activities, and exams.
Field experiment - an experimental research design where students are assigned to experimental and control groups in a non-random fashion and instruction occurs in a non-laboratory setting. See also experiment, controlled experiment, and single-group experiment.
Floor effect - the effect of an intervention is underestimated because the dependent measure artificially restricts how low scores can be.
Focused coding - the second stage of classifying and assigning meaning to pieces of information for data analysis. Coding categories are eliminated, combined, or subdivided, and the researcher identifies repeating ideas and larger underlying themes that connect codes.
Focus group - A focus group consists of a small number (8-12) of relatively similar individuals who provide information during a directed and moderated interactive group discussion. Participants are generally chosen based on their ability to provide specialized knowledge or insight into the issue under study.
Formative evaluation - study conducted during the operation of an instructional program to provide information useful in improving implementation with a focus on instruction.
G
Guided interview - a one-on-one directed conversation with an individual that uses a pre-determined, consistent set of questions but allows for follow-up questions and variation in question wording and order.
H
Human subject - a living individual about whom the investigator conducting research obtains information through intervention or interaction with the individual or identifiable private information or who is the focus of data collection for an assessment, evaluation or research study.
Hypothesis – a predictive statement about what one would expect to find or occur if a theory is correct.
I
Impact - the consequence or effect of an instructional activity, innovation or program.
Independent variable - a manipulated variable in an experiment or study whose presence or degree determines the change in the dependent variable.
Indicators - translate general concepts regarding the instruction, its context, and its expected effects into specific measures or variables that can be interpreted. Measurable indicators provide a basis for collecting evidence that is valid and reliable for the intended uses.
Inductive reasoning – a logic model in which general principles are developed from the information gathered.
Informal interview - a one-on-one directed conversation with an individual using a series of improvised questions adapted to the interviewee's personality and priorities and designed to elicit extended responses.
Informed consent - when the researcher provides information to participants as to the general purpose of the study, how their responses will be used, and any possible consequences of participating prior to their involvement in the study. Participants typically sign a form stating that they have been provided with this information and agree to participate in the study.
Initial coding - the first stage in classifying and assigning meaning to pieces of information for data analysis. Numerous codes are generated while reading through responses without concern for the variety of categories.
Institutional Review Board (IRB) - The IRB reviews UT Austin human subject research projects according to three principles: first, minimize the risk to human subjects (beneficence); second, ensure all subjects consent and are fully informed about the research and any risks (autonomy); third, promote equity in human subjects research (justice).
Instruction - any activity or program that supports the interaction between students, faculty and content with the aim of learning.
Instructional activity - the specific steps, strategies and/or actions used in instruction.
Instructional assessment - the systematic examination of a particular aspect of instruction (e.g., content delivery, method, testing approach, technological innovation) to determine its effect and/or how that aspect of instruction can be improved. See also Teaching assessment.
Instructional best practices - general principles, guidelines, and suggestions for good and effective teaching based upon the systematic study of instruction and learning.
Instructional context - refers to the instructional setting and environment (e.g., student demographics, social milieu, fiscal conditions, and organizational relationships) within which the instruction occurs.
Instructional design - the process of analyzing students' needs and learning goals, then designing and developing instructional materials to meet them.
Instructional documents - any printed or electronic materials used in instruction including syllabi, assignments, lecture notes and course evaluation results. Also called course documents.
Instructional evaluation - a holistic examination of an instructional program including the program's environment, client needs, procedures, and instructional outcomes. See also Program evaluation.
Instructional innovation - the transformation of curriculum through the integration of sound pedagogy with new technologies to improve learning.
Instructional objectives - a detailed description that states how an instructor will use an instructional activity, innovation or program to reach the desired learning objective(s).
Instructional program - a set of policies, procedures, materials and people organized around specific instructional objectives.
Instructional technology - the process of using technology (e.g., multimedia, computers, audiovisual aids) as a tool to improve learning. The application of technology to instruction is optimized when instructors have a basic understanding of various technologies and instructional best practices . Also referred to as "educational technology."
Instructional technology assessment – the systematic examination of how technology impacts teaching and learning.
Instrument - a tool or device (e.g., survey, interview protocol) used for the purpose of assessment or evaluation. See also measure.
Instrumentation effect - a possible limitation within controlled (i.e., pre-test/post-test) experiments. Changes in the test or how the test was administered from pre-test to post-test could affect the results.
Intended uses - ways in which the information generated from an assessment or evaluation will be applied.
Iterative testing - usability testing that is repeated multiple times during different attempts and phases of the product development process.
Internal consistency - a method of establishing the reliability of a questionnaire with a single administration by examining how strongly its questions are related to one another.
Interobserver reliability - similar to interrater reliability. The level of agreement between two or more observers viewing the same activity or setting.
Interpretation - the process of determining what the findings mean and making sense of the evidence gathered.
Interrater reliability - the level of agreement (correlation of scores) between two or more raters who rate the same question, content, survey, etc. The formula for computing interrater reliability is: (number of agreements / number of opportunities to agree) x 100.
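The percent-agreement formula above can be sketched in a few lines of Python; the ratings below are hypothetical:

```python
def interrater_reliability(rater_a, rater_b):
    # Percent agreement: (number of agreements /
    # number of opportunities to agree) x 100.
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * agreements / len(rater_a)

# Two raters scoring the same five essays; they agree on 4 of 5 items,
# so percent agreement is 80.0.
print(interrater_reliability([3, 4, 2, 5, 1], [3, 4, 2, 5, 2]))
```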
Interventions - a product, practice, treatment or variable that can create change, commonly tested by an experiment.
Interview - a one-on-one directed conversation with an individual using a series of questions designed to elicit extended responses.
J
Judgments - statements concerning the merit, worth, or significance of the instructional activity, innovation or program that are formed by comparing the findings and interpretations regarding the instructional measures against one or more standards. Judgments are part of the conclusions step of the research process.
K
L
Learning objective - a detailed description that states the expected change in student/participant learning, how the change will be demonstrated, and the expected level of the change.
Learning outcomes - refers to the knowledge, skill or behavior that is gained by a learner after instruction is completed and may include the acquisition, retention, application, transfer, or adaptability of knowledge and skills.
Linear regression - statistical technique that defines a line that best fits a set of data points and predicts the value of an outcome variable from the values of one or more continuous variables.
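For one continuous predictor, the best-fitting line can be computed directly from the least-squares formulas; this Python sketch uses hypothetical hours-studied vs. exam-score data:

```python
def fit_line(xs, ys):
    # Ordinary least squares for a single predictor:
    # slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical data: hours studied vs. exam score.
slope, intercept = fit_line([1, 2, 3, 4], [52, 54, 56, 58])
print(slope, intercept)          # 2.0 50.0
print(slope * 5 + intercept)     # predicted score for 5 hours: 60.0
```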
M
Maturation - any mental, physical, or emotional change that occurs throughout development (or maturity). This change could affect participants’ performance on the dependent variable of interest.
Measure - 1) a definition of quality or capacity used to assess or evaluate; 2) an instrument used to collect information; 3) the act of collecting information.
Measures - instruments, devices, or methods that provide data on the quantity or quality of the independent or dependent variables.
Meta-analysis – the process of synthesizing and describing results from a large number of similar studies. 
Method - See research methods.
Mixed method research - research using both qualitative and quantitative data gathering techniques.
Mixed research combines or mixes quantitative and qualitative research techniques in a single study. Two sub-types of mixed research include mixed method research (using qualitative and quantitative approaches for different phases of the study) and mixed model research (using quantitative and qualitative approaches within or across phases of the study).
Multiple measures - See triangulation.
N
Narrative description - a written description of an instructional product, instructional document, or classroom observation. While they may follow a common format or style within a particular project, the focus is on creating an accurate, written detail of the phenomena under study.
Narrative descriptions for usability testing - test participants in a usability study are asked to describe their experience with a certain task or the overall product via writing or speaking.
Non-probability sample - a sample where there is no way to estimate the probability that each member of the population will be included in the sample and there is no guarantee that each member of the population will have the same chance of being included. See also sample.
Non-response bias - bias introduced when those who do not respond differ systematically from those who do; a low response rate means the findings describe only the respondents, not the full sample.
Normal distribution - when a group of scores is symmetric with more scores concentrated in the middle than at the ends. Also called a bell curve.
O
Observation - refers to the systematic surveillance of classroom instruction with the goal of identifying instructional needs/challenges, describing the instructional activity, innovation or program, or evaluating a change in instructional practice. Also referred to as "classroom observation."
Ongoing Course Assessment (OCA) - web-based survey tool that allows UT-Austin instructors to create instructional assessment instruments to collect anonymous feedback from their students at any time during the semester within a secure environment.
Operational definition - a specific statement about how an event or behavior will be measured to represent the concept under study. See also indicator and research methods.
Outcome - the effect or change resulting from an instructional activity, innovation or program. See also performance-based outcome.
Outcome measure - an instrument, device, or method that provides data on the quantity or quality of the result or product of the experiment; an outcome is the dependent variable of the experiment.
P
Participants - 1) individuals from whom information is being gathered in an assessment or evaluation; 2) the individuals or group under study. Sometimes referred to as "study subjects." See also human subjects.
Pareto chart - a specialized chart useful for non-numeric data that ranks categories from most frequent to least frequent. Bars are arranged in descending order of height from left to right.
Peer review - 1) the process of one student providing feedback to another student as part of the instruction; 2) the process of one instructor providing feedback to another instructor about their teaching, usually through observation.
Performance-based outcome - learner outcomes based on standards that are measurable; often demonstrated through products or behaviors.
Population - the largest group under study that includes all individuals meeting the defined characteristics.
Portfolio - collection of documents or products for the purpose of representing capabilities, skill improvement or change over time.
Post-task surveys - a survey used in usability testing. It is given immediately after a certain task is completed in order to gain specific feedback on the task or on how participants’ perceptions change over time.
Post-test - a means to measure knowledge or ability after an instructional activity, innovation or program is implemented, using one or more research methods. Also sometimes referred to as a "post-assessment."
Post-test surveys - a survey used in usability testing that addresses participants’ overall perception of the product, such as satisfaction or ease of use.
Practical significance - a conclusion determined by an effect size statistic that indicates a research finding is practically important or useful in real life.
Pre-test - a means to measure existing knowledge or ability prior to the implementation of an instructional activity, innovation or program.
Probability sample - when each member of the population has a specified likelihood of being chosen. See also sample.
Product - any student work designed to demonstrate learning.
Product analysis - assessment of student learning through the examination of student products such as student portfolios, assignments, or writings. The type of analysis used may range from a comparison of student products with pre-determined quality standards to narrative descriptions and subjective assessments of quality.
Program - See instructional program.
Program description - a description of the mission and objectives of the instructional program that includes the instructional activity or innovation being evaluated or assessed, a statement of need, the expected effects, available resources, the program's stage of development, and the instructional context.
Program evaluation - a holistic examination of an instructional program including the program's environment, client needs, procedures, and instructional outcomes.
Q
Qualitative data - nonnumeric information such as conversation, text, audio, or video. 
Qualitative research follows an inductive research process and involves the collection and analysis of qualitative (i.e., non-numerical) data to search for patterns, themes, and holistic features.
Qualitative research methods - research methods that focus on gathering nonnumeric information using focus groups , interviews , document analysis , and product analysis.
Quantitative data - numeric information including quantities, percentages, and statistics.
Quantitative research follows a deductive research process and involves the collection and analysis of quantitative (i.e., numerical) data to identify statistical relations of variables.
Quantitative research methods - research methods that focus on gathering numeric information, or nonnumeric information that is easily coded into a numeric form, such as a survey.
Quasi-experiment - See field experiment .
Quota sample - a sample created by gathering a predefined number of participants from each of several predetermined categories. The selection process within each category may be random, e.g., dividing a class into groups of males and females and randomly selecting twenty-five participants from each category. See also random sample and stratified sample.
R
Random assignment - An experimental technique for randomly assigning participants to different treatments or groups.
Random sample - a subset of the population in which every member of the population has an equal likelihood of being selected. See also sample , quota sample , and stratified sample.
Range - measure of dispersion reflecting the difference between the largest and smallest scores in a set of data.
Ranking - ordering or sequencing a level of performance compared to a set criterion or how others perform.
Rating - a systematic estimation of the degree of some attribute based on a numerical or descriptive continuum.
Recommendations - actions for consideration resulting from the assessment or evaluation that go beyond simply forming judgments about the efficacy of an instructional activity, innovation or program. Recommendations may include suggestions for changing existing instructional activities and a plan for ongoing assessment.
Relational analysis – an approach for content analysis where implicit or explicit concepts are identified and the analysis involves exploring the relations between/among concepts within the content.
Reliability - the consistency of a measure, instrument, or observer. An instrument is said to have high test-retest reliability if it yields similar results when given to the same sample at two different times.
Reliable - see Reliability.
Research context - environmental factors that may influence the research process and/or the instructional outcomes under study including geographic location, the physical environment, time of day, social factors, and demographic factors (e.g., age, sex, income).
Research design - a plan outlining how information is to be gathered for an assessment or evaluation that includes identifying the data gathering method(s) , the instruments to be used/created, how the instruments will be administered, and how the information will be organized and analyzed.
Research methods - systematic approaches to gathering information that rely on established processes and procedures drawn from scientific research techniques, particularly those developed in the social and behavioral sciences. Examples include surveys, focus groups, interviews, and observation. Sometimes referred to as "evaluation methods" or "assessment methods."
Research process - the ordered set of activities focused on the systematic collection of information using accepted methods of analysis as a basis for drawing conclusions and making recommendations.
Research question - a question that specifically states what the researcher will attempt to answer.
Resources - refers to the time, human skills and knowledge, technology, data, money, and other assets available to conduct an assessment or evaluation.
Respondent - an individual who participates in the assessment or evaluation process by providing information using an instrument provided by the evaluator. See also participant and human subject.
Response rate - the number of individuals who respond to an instrument divided by the total number of individuals contacted.
Response scale - a set of ordered choices that are provided to answer a survey question or rate an observed behavior.
Rubric - a systematic scoring guideline to evaluate behaviors, documents or performance through the use of detailed performance standards.
S
Sample - a defined subset of the population that is chosen based on its ability to provide information, its representativeness of the population under study, and/or factors related to the feasibility of data gathering such as cost, time, participant accessibility, or other logistical concerns. See also cluster sample, non-probability sample, probability sample, random sample, quota sample, stratified sample and convenience sample.
Self-assessment - the process of judging one's own performance for the purpose of self-improvement.
Self-report instrument - an instrument through which individuals record their own recollections, feelings, judgments and attitudes about an event or phenomenon.
Single-group experiment - a type of experiment where a pre-test and a post-test are used to measure the effect of an instructional activity, innovation, or program on a single sample with no control group.
Sponsor - the person or organization that has hired or directed the evaluator to undertake a project. The sponsor is always a stakeholder.
Sponsor responsibilities - the activities in a program evaluation that the sponsor is expected to assist with or perform. These responsibilities include assisting directly in formulating the research design and creating the evaluation plan, providing access to relevant information such as documents, personnel, or students, and providing information and/or insight about instructional objectives and program operations.
Stakeholders - the individual(s) and organization(s) that will be affected by the results of the assessment or evaluation. Stakeholders may include individuals involved in program operations, those served or affected by the program, and the intended users of the assessment or evaluation. The project sponsor is always a stakeholder.
Stakeholder needs - generally reflect the central questions the stakeholders have about the instructional activity, innovation or program. Determining stakeholder needs helps the researcher to focus the project so that the results are of the greatest utility.
Standard deviation - a measure of the variation between individuals on a variable.
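The standard deviation can be computed by hand or with Python's standard library; the quiz scores below are hypothetical:

```python
import statistics

# Sample standard deviation: sqrt(sum((x - mean)^2) / (n - 1)).
scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical quiz scores
mean = sum(scores) / len(scores)
manual = (sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)) ** 0.5

print(manual)                      # matches statistics.stdev(scores)
print(statistics.stdev(scores))    # sample standard deviation (n - 1)
print(statistics.pstdev(scores))   # population standard deviation (n)
```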
Standards - measurable criteria that provide the basis for forming judgments concerning the performance of an instructional activity, innovation or program.
Standardized interview - a one-on-one directed conversation with an individual in which the interviewer asks the same questions with the same wording in the same order for all interviewees.
Statement of need - description of the problem or opportunity addressed by the instructional activity, innovation or program.
Stratified sample - when the population is divided into categorical subgroups (strata) with the sample chosen from each subgroup in proportion to its size in the population. E.g., if the population is 60% female and 40% male, a stratified sample would have 60% of its members chosen from the female group and 40% of its members chosen from the male group. The selection process within each subgroup may be random. See also random sample and quota sample.
Statistical power - the ability of an experimental design or inferential statistic to detect an effect of a variable.
Statistical significance - the demonstration that the probability of obtaining a finding by chance alone is relatively low.
Student assessment - the evaluation of student learning through assignments, exams, and portfolios.
Subjective assessment - an assessment of quality where there is no pre-established measure or standard and is thus based solely on the opinion of the evaluator.
Subjects - See participants, respondents, or human subjects.
Summated scale - a measure that consists of a collection of questions intended to reveal the level of a theoretical variable that is not readily measurable by a single question.
Summative evaluation - a study conducted at the end of an instructional program or program cycle to provide decision makers or potential consumers with judgments about the program's merit with a focus on making decisions about program continuation, termination, expansion or adoption.
Survey - an ordered series of questions about attitudes, behaviors or personal characteristics administered to individuals in a systematic manner.
Synchronous learning - an on-line communication tool, instructor-to-student or student-to-student, that occurs at the same time but not necessarily in the same place; similar to electronic "chat."
Synthesis - combining data and information from multiple sources, or of ratings and judgments on separate scoring dimensions in order to arrive at a result or conclusion.
Systematic sampling – when the sample is selected from the population at a regular/systematic interval (e.g., every 5th participant from a subject pool is selected).
T
t-test – a data analysis procedure that assesses whether the means of two groups are statistically different from each other.
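One common form, the independent-samples Student's t with pooled variance, can be computed directly; the control and treatment scores below are hypothetical:

```python
import statistics

def two_sample_t(a, b):
    # Student's t for independent samples with pooled variance:
    # t = (mean_a - mean_b) / sqrt(sp2 * (1/n_a + 1/n_b)).
    na, nb = len(a), len(b)
    sp2 = (((na - 1) * statistics.variance(a)
            + (nb - 1) * statistics.variance(b))
           / (na + nb - 2))
    return ((statistics.mean(a) - statistics.mean(b))
            / (sp2 * (1 / na + 1 / nb)) ** 0.5)

# Hypothetical exam scores for a control and an experimental group.
# The resulting t statistic would then be compared to a critical value
# (or converted to a p-value) to judge statistical significance.
control = [70, 72, 68, 71, 69]
treatment = [75, 78, 74, 77, 76]
print(two_sample_t(treatment, control))
```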
Teaching assessment - the systematic examination of a particular aspect of instruction (e.g., content delivery, method, testing approach, technological innovation) to determine its effect and/or how that aspect of instruction can be improved.
Technology Delivered Instruction (TDI) - instruction presented or facilitated through the use of instructional technology.
Test monitor – an individual who directly observes and records data from a usability test.
Test-retest reliability - a method of assessing the reliability of a questionnaire by administering the same or parallel form of a test repeatedly.
Testing effect - a possible limitation within controlled (i.e., pre-test/post-test) experiments. Participants perform better the second time they are measured using the same or a similar test (i.e., post-test measure) because of practice, familiarity, or awareness.
Theory Into Practice (TIP) - when theory is applied to a real-world instructional context.
Top-down planning - a planning approach that begins with a broad theme and systematically reduces the theme level by level into categories. Also called deductive reasoning. 
Triangulation - using multiple research methods to gather information or multiple sources of information on one topic or research question usually with the intent of improving reliability and/or validity . Sometimes referred to as using "multiple measures."
U
Usability - the ease of use, learnability, efficiency, and error tolerability of a particular product.
V
Validated scale - a collection of questions intended to identify and quantify constructs based on educational theory that are not readily observable such as knowledge, abilities, attitudes, or personality traits.
Validation - the process of gathering evidence to provide a scientific basis for proposed score interpretations from a measure or an instrument.
Validity - the degree to which the theory and information gathered support the proposed uses and interpretations of a measure or an instrument.