SOCIAL SCIENCES AND HUMANITIES
UDC 811.111:37.10
YU. AKSUTENKO
S. Amanzholov East-Kazakhstan State University, Ust-Kamenogorsk, Kazakhstan
CREATING PROGRAM EVALUATION MODEL
AND LANGUAGE ASSESSMENT
In the article, the possibilities of modeling in the evaluation of educational programs are considered. Language assessment is presented as a measurement instrument in foreign language program evaluation.
Keywords: educational programs, assessment, language education programs, textbooks.
MODEL-BASED EVALUATION OF LANGUAGE EDUCATION PROGRAMS
The article considers the possibilities of assessing students' foreign-language skills and abilities. Assessment is presented as an instrument for evaluating a foreign language teaching programme.
Keywords: programmes, assessment, language programmes, textbooks.
MODELING THE EVALUATION OF LANGUAGE EDUCATION PROGRAMS
This article considers the possibilities of constructing a model for the evaluation of educational programs. Assessment acts as an instrument for evaluating foreign language educational programs.
Keywords: educational programs, assessment, language education programs, textbooks.
The term “model” comes from the Latin word «modulus», which means “measure”. Analysis of the literature (V.I. Andreev, V. Krajewski, J.K. Babansky, V.P. Bespalko, N.V. Kuzmin) shows that modeling is one of the most common methods of research and a necessary element of any system of research methods, but the success of its application involves
the use of the entire set of research methods.
The Model of language assessment and program evaluation (see Figure 1) reflects their structure and composition, criteria and indicators. To model the object under study, we took into account the components and features of this type of activity.
The term “evaluation” has often been taken to mean the assessment of students
at the end of a course, but in recent years its meaning has widened to include all
aspects of a programme. There is a distinction between assessment and evaluation:
assessment in the curriculum is a process of determining and passing judgements on
students' learning potential and performance; evaluation means assembling evidence
on and making judgements about the curriculum including the processes of planning,
designing, and implementing it.
From this perspective, evaluation can relate to courses and learners in a number of ways. It can try to judge the course as it is planned (see Figure 1), for example, in terms of the appropriateness of the textbook content to the students or the coherence of a teacher's scheme of work. It can try to observe, describe, and assess what is actually happening in classrooms as a course progresses. It can test what learners have learned from a course. A. Tatur calls these three aspects of evaluation 'the planned curriculum', 'the implemented curriculum', and 'the assessed curriculum' (see Figure 1). Evaluation
can help to see the complex relationship among these three. For example, it has been acknowledged by second language acquisition researchers for some time that there is no easy one-to-one relationship between teaching and learning, and that a teacher cannot set out learning objectives for a class and expect learners to achieve these uniformly by the end of the lesson. It is more the case that the teacher makes content available to learners, who work on it in different ways and at different rates and with differing degrees of uptake. Thus, if evaluation of a course is undertaken only by means of end-of-term student assessment, this procedure will give just part of the picture; the full picture can only be seen if a wider set of evaluation procedures is employed, such as talking to teachers and students, checking teachers' work schemes, and observing classes. These procedures can also shed light on those other aspects of classroom learning which have been called the 'hidden curriculum', for example, the shaping of learners' perceptions of and attitudes towards other peoples and cultures by the teacher's choice of materials.
Using this definition of evaluation, if judgements are to be made about the curriculum at an institutional level, then information needs to be collected from a variety of 'stakeholders' (see Figure 1), i.e. those interested in its effectiveness, be they learners, teachers, and educational managers, or authorities, parents, governors, sponsors, and funding agencies. A range of procedures will be needed for the collection of data.
We also need to acknowledge the sensitivity surrounding evaluation: who
undertakes it, how comment is kept confidential, how the information is analysed, and
how it is used. Too much or badly managed evaluation can create suspicion, hostility
or evaluation fatigue. These issues become particularly difficult during periods of
retrenchment in education when posts and funding may be at risk.
Brown J. [1], Weir C. and Roberts J. [2] have all provided checklists which are useful for course evaluation in ELT departments and curriculum review at the institutional level. However, even at the level of the individual teacher interested in improving the quality of a course, a rational approach is necessary. We will now look at the key questions considering the Model structure.
An important distinction here is between evaluation for accountability and evaluation for development. The first may well involve decisions about whether a course will be repeated, whether a textbook will be dropped, or whether a particular resource such as a listening laboratory has been used sufficiently to warrant further investment in self-access listening materials. This purpose of evaluation makes staff and/or institutions answerable to authorities and/or sponsors; it also makes publishers and textbook writers accountable to teachers, and teachers accountable to their students. It often takes place at the end of a programme and, when undertaken by an institution, it may be carried out by an external evaluator. In contrast, developmental evaluation aims at improvement: it often takes place during a course so that feedback can facilitate immediate improvement to the current programme as well as to future programmes. As feedback can enlighten both teachers and managers about the strengths and weaknesses of course design and professional practice, this kind of evaluation can usefully involve both in co-operative procedures which aim at improving the quality of work. The point has been made repeatedly in the management literature [2] that 'healthy' institutions are ones which have regular procedures for reviewing their work and openness of discussion about ways to effect improvements. Course review can therefore be most usefully perceived as a regular activity with agreed criteria and procedures, and ensuing action plans.
One aspect of the agreed procedures, which is also reflected in the Model, is who carries out course evaluation. The choice of evaluator may be regarded as no less important than the evaluation process itself. Evaluators may be internal (persons associated with the execution of the program) or external (persons not associated with any part of the execution or implementation of the program). A brief summary of their advantages and disadvantages can be given as follows. Internal evaluators have the following advantages:
– better overall knowledge of the program, including informal knowledge of how it works;
– less threatening, as they are already familiar with the staff;
– less costly.
Their disadvantages are that they:
– may be less objective;
– may be preoccupied with other activities of the program and not give the evaluation their complete attention;
– may not be adequately trained as evaluators.
External evaluators have the following advantages:
– greater objectivity towards the process, offering new perspectives and different angles from which to observe and critique it;
– the ability to dedicate a greater amount of time and attention to the evaluation;
– greater expertise and experience in evaluation.
There are some disadvantages too: external evaluators
– may be more costly and require more time for contracting, monitoring, negotiations, etc.;
– may be unfamiliar with the program staff and create anxiety about being evaluated;
– may be unfamiliar with organizational policies and certain constraints affecting the program.
So, if undertaken by a head of department or director of studies, evaluation and assessment may well be seen as staff appraisal and regarded by teachers as threatening. Since evaluation for development depends on the willingness of teachers to acknowledge their concerns and problems, a major task for managers will be to avoid suspicion and to create an ethos of openness, mutual respect, and trust. For this reason, many universities prefer procedures which involve teachers in evaluating their own work and in drawing on institutional resources in order to improve what they see as their areas of weakness.
The precise method of evaluation will relate to what exactly a teacher or course director wants to assess. For example, in order to get feedback from students on the interest-level of textbook content, a simple rating scale which students can score against each topic might be appropriate. However, in order to assess the usefulness of a listening laboratory, the teacher might want to set up a log book in the laboratory with individual sheets for students to complete after each session, recording the work they have done and their perceptions of their progress with it. Table 1 lists some of the aspects which lecturers might want to investigate and the questions they might want to address. The precise choice of focus and range will depend on the age and level of students, the nature of course objectives and content, and whether there are recent innovations to be evaluated.
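If such ratings are gathered, collating them is straightforward to automate. The sketch below is a minimal, hypothetical illustration in Python (the article prescribes no tooling; the topic names and the 1-5 scale are invented for the example) of how per-topic interest scores might be averaged and ranked.

```python
from statistics import mean

# Hypothetical ratings: each student scores every textbook topic,
# here assumed to be on a 1-5 interest scale (the scale is illustrative).
responses = {
    "Travel and tourism":   [4, 5, 3, 4, 5],
    "Job interviews":       [5, 4, 4, 5, 4],
    "Environmental issues": [2, 3, 2, 3, 2],
}

# Average the scores so topics can be ranked by interest-level.
for topic, scores in sorted(responses.items(),
                            key=lambda item: mean(item[1]),
                            reverse=True):
    print(f"{topic:25s} mean interest = {mean(scores):.1f}")
```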
In evaluating our own courses we can use a variety of procedures (see the Model – methods). One simple method often used is to head a set of poster-sized sheets with key issues, for example:
– What I have learned from this course
– What I liked most about the course
– What I liked least about the course
– How I think the course could be improved.
Figure 1 – Language Assessment and Program Evaluation Model
Table 1 – Aspects of a course to investigate (course aspects and questions to address)

Student needs: What were the students' priority needs and to what extent has the course fulfilled them? Have students become aware of further needs?
Course content: To what extent have different content areas of the course been useful? What further topics, situations, etc. would students like to cover? What has been the interest-level of particular texts, discussions, etc.?
Resources: What do students think are the strengths and weaknesses of the textbooks used? To what extent have students used other resources available? Have students used community resources?
Methodology: What aspects of methodology do students like/dislike, find useful, interesting, etc.? Do students feel that the pace of classes is appropriate? Could students be more involved in choosing texts and designing tasks?
Teaching strategies: Are there any activities the teacher feels uncomfortable with? Does the teacher perceive any weaknesses in teaching techniques or classroom management?
Learning strategies: Are there areas in which student training is needed? How do students help themselves to learn outside the classroom? What are students' perceptions of the most useful kinds of homework?
Assessment: Have progress tests related effectively to course objectives and course content? Has the amount of assessment been adequate and well-timed? Have students gained a clear idea of their progress and been counselled on it? Have students had opportunities to assess themselves? Have adequate records been kept?
The educator can organize the procedure, appoint a chair, and withdraw from
the room to ensure openness of discussion and anonymity of comment. Students then
agree and list comments on the posters, and a tally is kept of how many students agree
with each comment. The lecturer, on returning, can discuss the comments and the
feasibility of various ways to improve quality. The clear advantages of this procedure
are its simplicity, speed, openness of comment, and opportunity for discussion.
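The tally kept during such a poster session can also be summarized afterwards. The following is a minimal sketch, with invented comments and counts, of how the agreement counts under each poster heading might be collated; it is an illustration only, not part of the procedure described above.

```python
from collections import Counter

# Hypothetical poster data: each heading maps to the comments written on it,
# with one entry per student who ticked agreement with that comment.
posters = {
    "What I liked most about the course": [
        "group discussions", "group discussions", "authentic videos",
    ],
    "How I think the course could be improved": [
        "more listening practice", "more listening practice",
        "slower pace", "more listening practice",
    ],
}

# For each heading, rank comments by how many students agreed with them.
for heading, comments in posters.items():
    print(heading)
    for comment, votes in Counter(comments).most_common():
        print(f"  {votes:2d} x {comment}")
```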
In contrast, the administration of a questionnaire survey, a popular method, is time-consuming in preparation and processing, and, if set for completion out of class,
responses may be difficult to chase up. However, the advantage of this method is
that the teacher can ask about points of special concern and can ensure coverage of
many course elements. One way of engaging students' interest is to ask them to submit
questions or prepare parts of the questionnaire in groups as classwork.
The poster session and the questionnaire survey are procedures for gathering
feedback from students. Other methods of evaluating courses, indicated in the Model
(see Figure 1), involve observation, review of documents, and teacher self-report.
Table 2 summarizes what is available to the lecturer. Diary-keeping, for example, has
become popular in many western contexts where this activity is part of the cultural
tradition, but it would need careful consideration in some other contexts and is, in any
event, a matter of personal taste and preference.
Table 2 – Methods of evaluating courses

Student feedback:
– Interview students in groups or individually.
– Ask students to complete questionnaires in class or at home.
– Ask students to write key comments on posters.
– Hold an informal discussion.
– Ask students to make evaluative notes individually on the week's classes to give to the teacher.
Teacher self-report:
– Fill in a self-assessment sheet.
– Keep a log book or diary.
Observation:
– Make an audio/video recording of group work in a class and analyse the extent to which what happened is what you planned or expected.
– Observe one student through a week's classes and analyse interest, attention, strategies, strengths, and weaknesses.
– Ask a colleague to watch a lesson and observe a particular aspect of your teaching, e.g. explanations, controlled practice, vocabulary work. Ask for critical comment.
Documents:
– Review course objectives.
– Review lesson plans and write evaluative comments. Look for points to improve on.
– Review student work and pinpoint issues to work on.
– Ask students to keep diaries of what they like/dislike, find easy/hard, interesting/uninteresting, and review these periodically.
Two kinds of evaluation can be used in a course. The first is summative assessment at the end of a course, a useful point at which to review the whole course in order to pinpoint elements for improvement. The second is formative assessment, which takes place as the course proceeds. Ideally, evaluation should be planned from the beginning, a schedule set, participants decided upon, and criteria and procedures agreed
by all involved. For example, a teacher may decide to elicit feedback from students by means of a poster session midway through a course and respond immediately on points arising; or a director of studies may co-ordinate the design and administration of a questionnaire survey at the end of a term, the responses to which may be used for further course planning.
If ELT departments are to remain 'healthy', then the information collected needs to be fed into a review process which goes beyond the individual teacher and links to wider decisions within the institution regarding timetabling, choice of learning materials, development of resources, organization of the general curriculum, and provision of in-service training. Consideration, therefore, needs to be given to these questions:
– How is the information to be collated? How is it to be analysed?
– To whom is it to be disseminated (e.g. teachers, managers, sponsors, students)?
– How is it to be disseminated (e.g. written report, verbal report at a staff meeting)?
– When is it to be discussed?
– What action plans might arise from the discussion?
If we hold informal poster sessions with our classes, these questions are easily dealt with. We can return to the class, posters can be displayed and points discussed, we can talk about what is feasible among the suggested improvements, and make undertakings. Our students can also make undertakings in this discussion, and a date for further feedback can be set. The teacher can then contribute information from the evaluation to department discussion.
In the case of a questionnaire survey across a range of courses, the director of studies can collate the information statistically and present a short report to a staff meeting, and the ensuing discussion can generate action points to be allocated among staff. Such a procedure needs sensitive management, ownership of the data by all involved, open discussion, and a focus on issues arising rather than on individual teachers. Only then can evaluation become a force for improving quality within an institution.
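By way of illustration, the sketch below shows one simple way such collation might be scripted; the course names, questionnaire items, and scores are invented, and the article itself does not specify any particular statistics or software.

```python
import statistics

# Hypothetical questionnaire returns: course -> item -> list of 1-5 scores.
survey = {
    "General English B1": {"materials": [4, 3, 4], "pace": [2, 3, 3]},
    "Academic Writing":   {"materials": [5, 4, 4], "pace": [4, 4, 5]},
}

# Produce per-item means for each course; low means flag issues to discuss,
# keeping the focus on issues rather than on individual teachers.
for course, items in survey.items():
    summary = {item: round(statistics.mean(scores), 1)
               for item, scores in items.items()}
    print(course, summary)
```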
As for paradigms in program evaluation, Lynch B.K. [3] identifies and describes three broad paradigms. The first, and probably most common, is the positivist approach, in which evaluation can only occur where there are "objective", observable and measurable aspects of a program, requiring predominantly quantitative evidence. The positivist approach includes evaluation dimensions such as needs assessment, assessment of program theory, assessment of program process, impact assessment and efficiency assessment.
A detailed example of the positivist approach is a study conducted by the Public Policy Institute of California, reported under the title "Evaluating Academic Programs in California's Community Colleges", in
which the evaluators examine measurable activities (e.g. enrollment data) and conduct quantitative assessments such as factor analysis.
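As a hedged illustration of the kind of quantitative assessment mentioned here, the sketch below applies scikit-learn's FactorAnalysis to a small, entirely synthetic matrix of program indicators; the indicator set and the data are assumptions for the example, not taken from the PPIC study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data: rows are programs, columns are observed indicators
# (e.g. enrollment, completion rate, test gain, attendance) - invented values.
rng = np.random.default_rng(0)
indicators = rng.normal(size=(30, 4))

# Fit a two-factor model to look for latent dimensions behind the indicators.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(indicators)   # per-program factor scores
print("loadings:\n", fa.components_)    # how each indicator loads on the factors
print("first program's factor scores:", scores[0])
```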
The second paradigm, identified by Andrade H.G., is that of interpretive approaches, where it is argued that it is essential
that the evaluator develops an understanding of the perspective, experiences and ex-
pectations of all stakeholders. This would lead to a better understanding of the various
meanings and needs held by stakeholders, which is crucial before one is able to make
judgments about the merit or value of a program. The evaluator’s contact with the
program is often over an extended period of time and, although there is no standard-
ized method, observation, interviews and focus groups are commonly used. A report
commissioned by the World Bank details 8 approaches in which qualitative and quan-
titative methods can be integrated and perhaps yield insights not achievable through
only one method. Birkholz C. and Wessel J. also identify critical-emancipatory
approaches to program evaluation, which are largely based on action research for the
purposes of social transformation. This type of approach is much more ideological
and often includes a greater degree of social activism on the part of the evaluator. This
approach would be appropriate for qualitative and participative evaluations. Because
of its critical focus on societal power structures and its emphasis on participation and
empowerment, Birkholz C. argues this type of evaluation can be particularly useful
in developing countries.
Whatever paradigm is used in a program evaluation, whether it be positivist, interpretive or critical-emancipatory, it is essential to acknowledge that evaluation takes place in specific socio-political contexts. Evaluation does not exist in a vacuum, and all evaluations, whether their authors are aware of it or not, are influenced by socio-political factors. It is important to recognize that evaluations, and the findings which result from this kind of evaluation process, can be used in favour of or against particular ideological, social and political agendas. This is especially true in an age when resources are limited and there is competition between organizations for certain projects to be prioritised over others.
The last but not the least point to discuss is government requirements. As administrations move to apply an "evidence-based approach" to government spending, they adopt increasingly rigorous methods of program evaluation. An inter-agency group pursues the goal of increasing transparency and accountability by creating effective evaluation networks and drawing on best practices [3]. A six-step framework for conducting evaluation of public programs initially increased the emphasis on program evaluation of government programs. The framework is as follows:
– engage stakeholders;
– describe the program;
– focus the evaluation;
– gather credible evidence;
– justify conclusions;
– ensure use and share lessons learned.
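If an institution wished to track its own progress through these six steps, even a very small checklist structure would suffice; the sketch below is one hypothetical way of recording that progress and is not part of the framework itself.

```python
from enum import Enum

class Step(Enum):
    ENGAGE_STAKEHOLDERS = 1
    DESCRIBE_THE_PROGRAM = 2
    FOCUS_THE_EVALUATION = 3
    GATHER_CREDIBLE_EVIDENCE = 4
    JUSTIFY_CONCLUSIONS = 5
    ENSURE_USE_AND_SHARE_LESSONS = 6

# Hypothetical progress tracker: mark which steps of the framework are done.
progress = {step: False for step in Step}
progress[Step.ENGAGE_STAKEHOLDERS] = True
progress[Step.DESCRIBE_THE_PROGRAM] = True

# Report the next step still awaiting attention.
next_step = next(step for step, done in progress.items() if not done)
print("next step:", next_step.name.replace("_", " ").lower())
```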
Thus, our Model of language assessment and program evaluation includes a range of procedures, stakeholders, results and their utilization, paradigms, and
government requirements.
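For readers who find it helpful to see these components gathered in one place, the sketch below restates them as a simple data structure; it is a non-authoritative paraphrase of Figure 1 (which presents the Model in more detail), and all field values shown are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Rough outline of the Model's components as listed in the conclusion."""
    stakeholders: list[str] = field(default_factory=list)   # learners, teachers, sponsors...
    procedures: list[str] = field(default_factory=list)     # questionnaires, observation, posters...
    paradigm: str = "positivist"                             # or "interpretive", "critical-emancipatory"
    results_use: str = ""                                    # how findings feed back into review
    government_requirements: list[str] = field(default_factory=list)

# Illustrative instance only; a real plan would be agreed by all involved.
plan = EvaluationPlan(
    stakeholders=["learners", "teachers", "director of studies"],
    procedures=["poster session", "questionnaire survey", "class observation"],
    paradigm="interpretive",
    results_use="staff-meeting report and action points",
)
print(plan)
```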
REFERENCES
1. Bailey K.M., Brown J.D. Language testing courses: What are they? In: Cumming A., Berwick R. (eds.) Validation in Language Testing. Clevedon, UK: Multilingual Matters, 1996. P. 236-256 (in Eng).
2. Cumming A. What is a second-language program evaluation? The Canadian Modern Language Review, 1987, 43 (4), 678-700 (in Eng).
3. Milleret M. Evaluation and the summer language program abroad: A review essay. The Modern Language Journal, 1990, 74 (4), 483-488 (in Eng).