Technology-based assessment in special education has made advances during the last two decades. Whereas the first applications of computer technology for assessment were limited to scoring student test forms, contemporary uses support many other features and functions. These include self-administration, software control of item presentation, response evaluation based on conceptual models or algorithms, decision making based on rules and criteria, prescription based on expert knowledge, and direct links between assessment and changes in instruction.

Technology-based assessment generally refers to the use of electronic systems and software to assess and evaluate the progress of individual children in educational settings. It thus encompasses both electronic versions of traditional measurement protocols and innovative assessment approaches that employ computers.

Examples of approaches in technology-based assessment include:

* A video-based computer-assisted test able to learn the language preference of the student and switch to it automatically, increasing the validity of its measurement;
* Video segments from popular movies used as elements of a moral dilemma in a real-life, problem-solving test; and
* Students viewing video segments of peers interacting in various social situations and entering their responses by simply touching a computer screen.
In the world of technological evaluation, these innovative approaches bring validity and relevance to the testing procedure.

A variety of factors have contributed to the current need for better assessment tools and procedures in our schools. One is the misplacement of students as a result of poor evaluations. Misplacement can have devastating results for both student and teacher: misplaced students tend to lose interest and drop out of school, while teachers who have misplaced students in their classrooms are often not properly trained to deal with their particular behaviors or learning differences.

Other factors contributing to changes in assessment practices include the growing population of students who qualify under IDEA (the Individuals with Disabilities Education Act).
Data reported in the U.S. Department of Education's 1994 Sixteenth Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act show a 3.7% overall increase in students receiving special education services during the 1992/93 school year. This growth in special education, although not huge, includes a significant number of minorities who have different cultural backgrounds and languages. The growing diversity of students in special education warrants new and innovative approaches to assessment practices. Minorities who qualify for special services require culturally relevant assessments, particularly in the areas of language and social behaviors.
* Culturally Relevant Assessment

It is important that minorities be assured of equal representation in special programs. It is well known that ethnic minorities are under-represented in advanced programs for the gifted, such as AEP (Advanced Education Program), yet certain ethnic minority groups are over-represented in special education categories such as Intellectually Disabled and Learning Disabled. Studies have shown, for example, that until recently there were approximately twice as many Mexican-American students in U.S. classes for the educable mentally retarded (EMR) as would be expected on the basis of their proportion in the school population.[3,4] As far back as 1916, Miller pointed out that special or “backward” classes furnished “an easy means of disposing of (a non-English-speaking) pupil who, through no fault of his own, is an unsatisfactory member of a regular grade.”

This over-representation of ethnic minority students in classes for the mentally retarded has been attributed to the indiscriminate use of psychological tests, especially IQ tests, combined with the linguistic and cultural orientation of school programs.
 It may also be due to human factors in evaluation, such as personal biases and prejudices, as well as a lack of adequate tools for accurate, culturally relevant assessment.

Native languages have been used in individual evaluations, but much of the information is lost in translation from English to the student's dominant language. This is possibly due to the different cultural dialects and historical backgrounds of evaluators and students. Also, students may perform very well on certain items in their native language, but many items within the testing tools are likely to be either culturally unfamiliar or nonexistent in their native language. These students end up with low scores and poor overall results.

New technology, however, makes it possible for evaluations to include all items in a student's native language and also to bring in a broader range of culturally relevant items.

* Expanding Definitions of Learning

Another reason for change in our current assessment practices is that we, as educators, know more ...
Several labels have been used to describe alternatives to standardized tests. The most common include “direct assessment,” “authentic assessment,” “performance assessment,” and the more generic “alternative assessment,” which I shall use here.(4) Although these descriptors reflect subtle distinctions in emphasis, the several types of assessment all exhibit two central features: first, all are viewed as alternatives to traditional multiple-choice, standardized achievement tests; second, all refer to direct examination of student performance on significant tasks that are relevant to life outside of school.

Proponents of alternative assessment prefer it to more traditional assessment that relies on indirect, “proxy” tasks (usually test items).
Sampling tiny snippets of student behavior, they point out, does not provide insight into how students would perform on truly “worthy” intellectual tasks. Instead, they argue, student learning can be better assessed by examining and judging a student's actual (or simulated) performance on significant, relevant tasks. As Jay McTighe and Steven Ferrara note, such assessment can focus on students' processes (revealed through learning logs, “think-aloud” observation sessions, and self-assessment checklists); products (e.g., diaries, writing portfolios, and art portfolios or exhibits); or performances (e.g., typing tests, dramatic or musical performances, and oral debates).(5)

In short, relevant assessment asks how learners would perform on meaningful tasks: student learning is best assessed by direct examination of performance on significant tasks that are relevant to life outside of school.