Tuesday, June 5, 2007

Evaluation Techniques

Behavior Observation Checklist: a list of behaviors or actions among participants being observed. A tally is kept for each behavior or action observed.
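The tally-keeping step lends itself to a simple frequency count. A minimal sketch in Python, where the behavior names and the observation log are hypothetical:

```python
from collections import Counter

# Hypothetical log of behaviors noted, one entry per time a
# listed behavior was observed during the session.
observations = [
    "asks question", "raises hand", "asks question",
    "helps peer", "raises hand", "asks question",
]

# The checklist tally: how often each behavior occurred.
checklist = Counter(observations)
for behavior, count in checklist.most_common():
    print(f"{behavior}: {count}")
```

In practice the observer records tick marks on paper; the point here is only that the result is a per-behavior frequency table.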

Knowledge Tests: information about what a person already knows or has learned.

Opinion Surveys: an assessment of how a person or group feels about a particular issue.

Performance Tests: testing the ability to perform or master a particular skill.

Delphi Technique: a method of survey research that requires surveying the same group of respondents repeatedly on the same issue in order to reach a consensus.
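The repeat-until-consensus loop of the Delphi Technique can be sketched in code. This is a toy simulation, not a real survey: the assumption (labeled in the comments) is that after each round respondents see the group median and move their rating halfway toward it, with consensus defined as the spread falling below a tolerance.

```python
import statistics

def delphi_rounds(ratings, tolerance=0.5, max_rounds=10):
    """Simulate Delphi consensus-seeking. Assumption: in each round,
    respondents see the group median and adjust their own rating
    halfway toward it. Consensus is declared when the standard
    deviation of ratings falls below `tolerance`."""
    for round_no in range(1, max_rounds + 1):
        median = statistics.median(ratings)
        if statistics.stdev(ratings) < tolerance:
            return round_no, median
        # Each respondent moves halfway toward the group median.
        ratings = [r + (median - r) / 2 for r in ratings]
    return max_rounds, statistics.median(ratings)

# Hypothetical initial ratings (1-9 scale) from five panelists.
rounds, consensus = delphi_rounds([2, 5, 7, 8, 9])
print(f"Consensus of {consensus} reached after {rounds} rounds")
```

A real Delphi study replaces the adjustment rule with actual re-surveying, but the structure — repeated rounds, shared feedback, a stopping criterion — is the same.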

Card-sorting/Q-sorts: a rank order procedure for sorting groups of objects. Participants sort cards that represent a particular topic into different piles that represent points along a continuum.

Self-Assessment: a method used by participants to rank their own performance, knowledge, or attitudes.

Questionnaire: a group of questions that people respond to verbally or in writing.

Time Series: measuring a single variable consistently over time, e.g., daily, weekly, monthly, annually.

Case Studies: experiences and characteristics of selected persons involved with a project.

Individual Interviews: individual’s responses, opinions, and views.

Group Interviews: small groups’ responses, opinions, and views.

Wear and Tear: measuring the apparent wear or accumulation on physical objects, such as a display or exhibit.

Physical Evidence: observation of residues or other physical by-products.

Panels, Hearings: opinions and ideas.

Records: information from records, files, or receipts.

Logs, Journals: a person’s behavior and reactions recorded as a narrative.

Simulations: a person’s behavior in simulated settings.

Advisory, Advocate Teams: ideas and viewpoints of selected persons.

Judicial Review: evidence about activities is weighed and assessed by a jury of professionals.

Draw a picture

Tell a story

Portfolio assessment

Appreciative inquiry

Open reflective evaluation sheet: This asks almost no questions but encourages the participant to reflect on an activity and write down their thoughts, observations, and impressions.

A round of "I" statements: Each participant is asked to say out loud, to the rest of the group, how they thought the activity went. It usually helps to make "I" statements, and to make them positive rather than negative. For example, "I thought the … went well. It really made me think" seems more helpful than "I didn't like the …"

Round Robin: Round Robin evaluation is a method of eliciting, collating and rating every course participant's most positive and negative comments about the course, as well as their suggestions for improvement or further topics.

Snowball Review: A snowball review is a group-based evaluation method which takes participants through a number of formal steps during which their opinions and comments are elicited, shared, reviewed and compiled into a final list of strengths and weaknesses of the course.

What went well and why: This method, which can be called WWP, is a group based evaluation method where the participants reflect on an educational event and, as a group, decide what went well, what went less well, decide why and then plan how things could be done better next time. It is particularly useful when group "process" is being explored.

Digital Strategy: There are numerous approaches to conducting digital strategy, but at their core, all go through three stages: identifying the key opportunities and/or challenges in a business where online assets can provide a solution; identifying the unmet needs and goals of the customers that most closely align with those key business opportunities and/or challenges; and developing a vision of how the online assets will fulfill those business and customer needs, goals, opportunities, and challenges, then prioritizing a set of online initiatives that can deliver on this vision. Within each of those stages, a number of techniques and analyses may be employed.

Participant observation

Kiddie Focus Groups: Wells developed “kiddie focus groups” to help design a nature center to serve children (M. Wells, personal communication, September 13, 2005). As a formative evaluation, Wells took children on field trips to a variety of nature centers, parks, and zoos. The children were given surveys to evaluate how interesting they found each site. After visiting several sites, the children came together with Wells to discuss what was good, bad, and otherwise about those sites. The children gave their ideas on the development of the new site, and ultimately a set of interpretive principles was developed to guide the design of the new site. This evaluator successfully adapted the focus group strategy to her audience by making questions simple and explaining the purpose and intent of the evaluation to the children.
The use of kiddie focus groups addressed the challenge of multiple goals in nonformal education programs by obtaining input directly from those affected by the program to target the most valued goals. Similarly, kiddie focus groups could be framed around predetermined indicators of quality.

Ink-blot test

Post-it® Surveys: Judy Machen (n.d.) of the Bradbury Museum in Los Alamos, New Mexico, developed a technique called Post-it® Surveys. She initially designed this data collection method to evaluate participants’ understanding of the scientific content of an exhibit. Machen placed a large easel in the museum lobby. At the top of the white paper was a question about the exhibit’s content. Located near the easel were pens and sticky notes for participants to use to respond to the evaluation question. Machen found that participants were very interested in using the sticky notes to respond to the question, and soon participants began to respond to each other’s sticky notes as well. Machen was able to compile the information from her surveys to bring back to the scientists and designers of the exhibit. They then redesigned the exhibit according to the participants’ feedback.

The use of sticky note surveys can address challenges associated with the drop-in nature of nonformal education programs, programs with multiple goals, and programs that serve individuals participating in multiple programs to meet similar learning needs. For example, by having the sticky note surveys available at all times, nonformal educators can collect data from a number of participants at any time, even those who participate sporadically or only once. In addition, to address the challenge of multiple goals, questions can ask participants to write statements that reflect their top three learning goals and whether these goals were met. To address the challenge of individuals who access several programs for similar learning goals, questions could ask how this particular program differs from other nonformal education programs they attend. The convenience of the survey board allows collecting data on a wide range of carefully worded questions to inform decision making about ideas, content, presentation, and so on (Wells and Butler, 2004).

Naive Notions: Borun (1990) of the Franklin Institute Science Museum in Philadelphia developed a technique called Naive Notions. In the context of her work in museum settings, Borun recognized that visitors often had misperceptions about gravity and that they were bringing these perceptions to their understanding of exhibits on gravity. She wanted to uncover people’s naive notions and develop exhibits that would resolve these misunderstandings. To do so, she conducted front-end interviews to determine preexisting notions about gravity, designed a series of mock-up exhibits, interviewed visitors following their viewing of the mock-ups, and modified the exhibits on the basis of these results. Follow-up interviews demonstrated that visitors’ misconceptions surrounding gravity decreased significantly. This clarifying exercise should enable the evaluation to focus on the most important goals.

Archival Data: Wells and Butler (2004) used archival data, including guest books, gift shop purchases linked to postal codes, and donation boxes, as unobtrusive means for collecting data. These records are often created for other purposes but can provide a wealth of information to evaluators. For example, by examining the postal codes of people making gift shop purchases, the evaluator can often determine where visitors live and can analyze purchasers in the context of a variety of demographic indicators, including income level, educational attainment, and proportion of rental properties versus owner-occupied units. A spatial analysis using the postal codes can inform program staff whether they are reaching their target audiences, the extent to which certain demographic groups are either over- or underrepresented, and so forth. The use of archival data may address challenges inherent in drop-in programs by providing baseline and demographic data.
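The first step of the postal-code analysis described above can be sketched as a simple grouping exercise. The purchase records and the set of target-area codes below are hypothetical stand-ins for the archival data an evaluator would actually pull:

```python
from collections import Counter

# Hypothetical postal codes recorded from gift shop purchases.
purchases = ["87544", "87544", "87547", "87111", "87544", "87111", "10001"]

# Hypothetical postal codes defining the program's target audience.
target_area = {"87544", "87547"}

# Group purchases by postal code, then compute the share of
# purchasers who live inside the target area.
by_code = Counter(purchases)
in_target = sum(n for code, n in by_code.items() if code in target_area)
share = in_target / len(purchases)
print(f"{share:.0%} of purchasers live in the target area")
```

A fuller spatial analysis would join these counts against census demographics by postal code; this sketch shows only the reach-of-target-audience step.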

Talk aloud: Another creative approach to data collection is a talk aloud. Similar to think-aloud evaluations done for usability testing, talk alouds ask participants to say what they see or what they are thinking as they encounter an exhibit or experience a component of a nonformal education program. M. Wells (personal communication, September 13, 2005) has used talk alouds with museum participants, and she suggests that this technique can be used for both formative and summative evaluations. A participant walks through an exhibit with an evaluator, who asks the participant to talk aloud about what he is seeing as well as what reactions he has as he makes sense of the exhibit. This technique elicits participants’ subjective views of the exhibits, yielding insights that are especially helpful at the formative stage of the evaluation. It can be particularly useful for programs that are pursuing multiple goals and those with a range of variables to define quality. By guiding the talk aloud, for example, the evaluator can prompt the participant to talk about the goals that the nonformal education program is seeking to address or to discuss the agreed-on quality variables. The interactive nature of the talk aloud provides opportunities to observe initial subjective reactions as well as to encourage the responder to elaborate for deeper understanding.




REFERENCES

http://www3.interscience.wiley.com/cgi-bin/fulltext/112492113/PDFSTART

http://ohioline.osu.edu/b868/pdf/b868.pdf

Creative Data Collection in Nonformal Settings. New Directions for Evaluation. DOI: 10.1002/ev
