Multilingual Testing in Monolingual Regimes

Shyam Sharma

A few months ago, while attending an afternoon talk on campus at Stony Brook University, given by Elana Shohamy, an Israeli scholar, I had a moment of despair.

The title of her talk was “Multilingual Testing,” and the backdrop of her presentation was the monolingual regime of language testing and its effects on multilingual language users across the world.

As teachers of language and writing/communication, we keep saying in theory that language learners take 3-5 or even 9-11 years to become fluent and accurate in a new language, depending on where and how they learn. But in practice, we continue to resort, very quickly and thoughtlessly, to the logic of pragmatism, of institutional policy, of the need to make sure that our multilingual students can perform in English.

Here’s the problem with our dead habits and our thoughtless embrace of the current monolingual testing regime: monolingual tests usually fail to predict the overall academic performance of multilingual students, simply because the tests treat language proficiency as a proxy for all the other things that academic transition and success involve. It would be nice if, for instance, a TOEFL score of x predicted whether a graduate student could listen, read, speak, and write well enough to succeed in, say, a graduate program in ecology. But it doesn’t.

Two students with the same TOEFL score (even with similar academic records from the same education system) regularly perform significantly differently, and students with much lower TOEFL scores often outperform those with much higher scores. Why? Because academic performance is a process involving many factors: grit, support, psychology, personality, content knowledge, and a lot more.

While TOEFL is not as bad as the GRE, I must add, the company selling this product backpedaled a few years ago when it ignored the fact that adding a spoken section (administered using an underdeveloped technology of speech and accent recognition) makes the test even less valid. TOEFL’s test of writing skills is as much of a joke as the SAT writing test: it rewards certain semantic, syntactic, and rhetorical tricks that do not constitute good writing in college or university. And while its listening and reading passages are based on quite authentic classroom situations, the validity of the overall score is significantly hampered by the outdated view of language proficiency on which the test is based. Those who make these tests are still unable to understand how multilingual English speakers around the world perform sociolinguistically in academic and other contexts.

I heard today that ETS is starting to respond to the basic idea that multilingual English speakers should be assessed in terms of how they draw on more than one language to achieve communicative goals. But the company’s primary goal of efficiency for itself trumps the principles of validity and effectiveness. And this is to say nothing of the blatant lie about the universality of the content on which the tests are based, the egregious sums the test fee amounts to when converted from US dollars into some local currencies, the intellectual insult to students with learning disabilities, and many more issues.

Shohamy showed that immigrant students scoring in the 60th percentile when tested in their L2 only were able to score in the upper 80s when the questions were provided in both L1 and L2. This means that when educators allow learners to start succeeding academically while still having some “issues” with their language, they learn English much more effectively in the process. This is a no-brainer: when I start a semester, I tell my nonnative-English-speaking students that they “shouldn’t worry about [their] language” and should instead “focus on doing the research, coming prepared to actively participate in class, drawing on [their] prior knowledge, being excited about learning and sharing ideas.” To teachers, it is really a no-brainer: it is possible not to put the cart of language learning before the horse of the process of education. It is possible not to follow the backward logic of ETS: treating language as something that you learn “before” you join the learning party!

Gatekeeping is necessary but it doesn’t need to be so outdated and invalid. Assessment is necessary but it doesn’t need to be decoupled from learning and teaching.

Now, some readers may object: “I have to make sure that students I admit have a certain level of language proficiency.” Well, there are two significant problems with that “pragmatic” stance. First, someone who scored well on the TOEFL may be linguistically privileged and proficient, but there is no guarantee that he or she is academically capable or committed. TOEFL doesn’t measure subject knowledge, and, again, it doesn’t measure grit. Second, one could say that admission officers look at academic transcripts in order to review the applicant’s academic caliber. Guess what? Academic transcripts from different countries (and even from different academic systems within the same country) cannot be compared, which is one of the reasons people turn to TOEFL in the first place. Back to square fifteen.

We’re confusing pedagogy with policy, process with desirable proficiency, outcome with entry-level proficiency, and our own bias with the need to rely on a system that we admit is flawed but seemingly without good alternatives. What if we start by thinking outside these easy frameworks? What if we start by embracing what Elana Shohamy called “critical language testing”: testing based on skepticism toward established regimes that are not in the business of fairness and sophisticated thinking? What if we adopt parallel regimes, including ad hoc approaches, that we can use in order to challenge ourselves?

What if we asked all ten students whom we want to admit to our doctoral program to call us on Skype, Hangouts, Viber, or Facebook and have a twenty-minute conversation each, so that we could use our own conscientious judgment instead of a TOEFL score?

What if?

Shyam Sharma, who holds a PhD in rhetoric and composition from the University of Louisville, is an assistant professor of writing and rhetoric at the State University of New York, Stony Brook.

Testing the Testing System of Nepal: An Interactive Article

Choutari Editors

Testing is inevitable, although not always desirable. It is necessary in order to keep track of the overall progress of a language teaching programme. Debates have long gone on for and against testing. The important point to note, however, is that it is faulty processes of testing that are being criticized, not the concept of testing itself. In fact, such criticism is necessary, as it can help improve the system. The sphere of language testing in Nepal is also not free from criticism. Therefore, we decided to test the testing system of Nepal in this interactive article. After an interaction with experts and readers, we have attempted to explore the existing problems in the field of language testing and possible solutions to them. We believe such interaction can play a significant role in reforming the system. A thematic question was put to language experts as well as Choutari readers: ‘What is a major problem in the language testing system of Nepal, and what can be the solution to it?’ Among the responses collected, we present the opinions of eight respondents here:

Shyam Sharma:
There are many problems with the current language testing regime (as well as some good things). One issue that has come up in our conversations is how testing practices typically ignore multilingual competencies. At first, this may seem like an impossible ideal, but if you look deeper, the question becomes: why not? Ours is a multilingual society, and students’ language proficiencies are not isolated; their English is part of a complex sociolinguistic tapestry; their other languages don’t “hamper” English; languages aren’t just mediums but rich epistemological resources; and humans have always spoken multiple languages without seeking a monolingual standard. So, when we face the task of teaching and testing students’ English abilities in isolation, we shouldn’t act like helpless slaves of the system; when discussing the roots and stems and branches and bitter fruits of the current regimes, there’s no need to surrender to the “reality.” The reality includes politics, power, and possibilities beyond their grip, and thus we must broaden the base of our discussions so we can see testing as a broader phenomenon than, well, testing. Scholarly conversations under the tree here can and should help the community rethink the fundamentals.

Shyam Sharma is an Assistant Professor in the Program in Writing and Rhetoric at Stony Brook University (State University of New York).

Prem Phyak:
I call it an ‘issue’ rather than a ‘problem’: why do we still ‘test’ monolingual ability, although our students have bi-/multilingual ability? Another issue embedded within this one is: how can we test students’ multilingual ability? First, we must be clear that ‘testing’ is not a ‘fixing shop’ where you can fix a ‘problem’; rather, it is a complex discipline that needs critical scrutiny from multiple perspectives for a valid evaluation of students’ ability. Our assumption that ‘language testing’ should only test ‘monolingual ability’, meaning that multilingual testing is impossible, is the major challenge for reforms in language testing. This dominant assumption decontextualizes language testing from students’ cultural, linguistic and educational contexts. So the major issue is that our tests are not context-sensitive. For example, I still remember that we were often asked to write essays in the SLC (School Leaving Certificate) exam about different highways in Nepal, but I had never seen any highway when I was in school. We were asked to memorize their lengths, construction dates and so on. I could not even conceptualize what a ‘highway’ was. However, I could write more, and better, when I had to write about ‘my village’ or ‘my school’.

The issue of contextualization is closely associated with testing multilingual abilities; locally contextualized test items require students to work with their abilities in more than one language. For example, when I had to write an essay about my village, I used to think in Limbu, Nepali and English. I (and my friends) could not think about the topic in only one language – no separation of languages! But the tests did not allow me to use my Limbu and Nepali abilities while writing essays in English. This is the major issue, right? If language tests are meant to test ‘language ability’, why don’t we test students’ functional abilities in multiple languages? This applies to Nepali language tests as well. For example, when students speak Nepali they simultaneously use English (and/or other local languages if their first language is other than Nepali); one cannot draw a fixed boundary around a language. Suppose a bilingual student writes “आजको class मा कस्तो frustrate भएको…” (I had frustration in today’s class) in her Nepali essay (it can be more complex than this in the case of Maithili- and Newari-speaking children, for example); how do we evaluate her Nepali language ability? The first reaction could be ‘अशुद्ध’ (incorrect – literally, impure). However, she is expressing her views fluently by using both the Nepali and English in her repertoire. She cannot separate one language from another. This means that monolingual tests do not test students’ bilingual or multilingual abilities. Unfortunately, students who show their bi-/multilingual abilities in language tests are considered ‘deficient’ and ‘poor’. Yet the above example represents the use of language in a real-life (authentic) context.

There are ways to test multilingual abilities. For example, inquiry-based formative assessment, which engages students in doing research and working with teachers to receive qualitative feedback on their work, can be one way to help them fully utilize their multilingual abilities. Such assessments encourage students to translanguage (use multiple languages to perform different tasks) in order to achieve the goals specified by the test criteria. However, so-called ‘standardized tests’, which are guided by the monolingual assumption, cannot test bi-/multilingual abilities. We should say a big ‘NO’ to standardized tests if we truly believe in developing equitable language testing.

Prem Phyak holds an MA in TESOL from the Institute of Education, University of London, UK, and an M.Ed. from Tribhuvan University, Nepal.

Tirth Raj Khaniya:
Lack of professionalism is the main problem of English language testing in the context of Nepal. Professionalism means the ability to apply fairness, ethics and standards to exam-related issues. While dealing with exam-related matters, we need to be fair. We assume that we are professional, but in reality we are not, and thus the test does not test what it is supposed to test.
For teachers to be professional in language testing, they require both the necessary skills and abilities and the proper application of those skills and abilities. To maintain professionalism, it is necessary to have wide discussion among teachers, so that all those who are involved in exams have a clear understanding.

Tirth Raj Khaniya has a PhD in Language Testing from the University of Edinburgh, UK. Currently a Professor of English Education, he teaches language testing in the Department of English Education, TU.

Ganga Ram Gautam:
The main problem of language testing in Nepal is that the test itself is faulty. It does not test language skills but rather tests memory of the text materials given in the textbook. There are also several other problems, including issues with test writers, test item construction, test administration and validation of the tests.

One solution to this problem could be to develop standardized tests and administer them at various key stages such as the primary, lower secondary and secondary levels. In order to do this, we need to train a team of experts to develop the tests, and each test should be standardized through reliability and validity testing. Once the tests are developed, they should be administered in a proper way so that the real language proficiency of the students can be measured.

Ganga Ram Gautam is an Associate Professor at Mahendra Ratna Campus, Tribhuvan University and former president of NELTA.

Laxman Gnawali:
There is no need to reiterate that the aim of learning a foreign language is to be able to communicate in it. In order to find out whether English language learners in Nepalese schools have developed communicative skills in this foreign language, there is a provision for testing listening and speaking at the SLC level. I feel that this test is not serving its purpose. The lowest mark students get in speaking is 10 out of 15, which is about 67%. However, when we communicate with SLC graduates (let alone those who fail the examination), most of them perform very poorly. There are two reasons for this inflated marking. First, the speaking test includes predictable questions for which the responses can be rehearsed: personal introduction, picture description and one function-based question (which is repeated so often that students can prepare a limited set of responses and be ready for the test). Second, there is extreme leniency among the examiners; they simply award marks irrespective of the quality of the responses.

Two interventions could improve the situation. First, examiners should be trained to ask very simple, realistic, everyday questions that students cannot respond to without knowing the language. Second, each test should be video-recorded so that inflated marks can be easily scrutinised. Administrative issues should not come in the way of quality testing, which has far-reaching consequences.

Laxman Gnawali is an Associate Professor at Kathmandu University and Former Senior Vice President of NELTA

Laxmi Prasad Ojha:
I think we are giving too much priority to examinations and tests in our education system. We do not understand the purpose of testing and evaluation, and we do not test students’ comprehension and understanding. This is the main cause of the failure of our education system in many cases, including language teaching programmes.

Uttam Gaulee:
I think “formative” should be the key word here. Laxmi ji pointed out an important bottleneck we have experienced due to the lack of a clear purpose for testing and evaluation. In a typical Nepali school, we give more importance to summative tests than to formative ones. What we seriously lack (and therefore have a tremendous opportunity to work on) is systematic feedback for students.

Uttam Gaulee is a Graduate Research Fellow at the University of Florida College of Education, Gainesville, Florida.

Bal Krishna Sharma:
Yeah, one way would be to introduce and practice more formative types of assessment. These evaluate and test students’ ongoing progress and learning outcomes.

Bal Krishna Sharma is a PhD student at the University of Hawaii at Manoa.

Although a single thematic question was asked, it raised a remarkable number of genuine issues. The respondents highlighted the need to test multilingual competencies rather than only monolingual ability, and suggested some ideas on how to test students’ multilingual abilities. The interaction also raised the issue of a lack of professionalism in language testing. Similarly, respondents argued that our memory-driven testing system is itself faulty. Furthermore, there are problems in test construction and administration, and the suggestion was put forward to develop and practise standardized tests to minimize these problems. In relation to the problem of testing listening and speaking in the SLC exam, it was emphasized that the test items are predictable and that examiners are lenient, awarding marks irrespective of quality; the proposed solution is to train examiners properly and introduce a system of video-recording students’ performance. Finally, overemphasizing exams, and not testing what should be tested, was identified as a problem; the solution discussed was to give more importance to formative tests than to summative ones, which helps keep track of students’ achievement.

Now the floor is open for you. Share what you think is the problem with the testing system in our context and what the solution can be. We believe such interaction contributes to the development of innovative ideas in ELT.

Teaching and Testing Listening at Secondary Level

Bhupal Sin Bista 

This article discusses the techniques and activities used by secondary level English teachers while teaching listening. It also sheds light on the gap between teachers’ theoretical knowledge and its use in classroom teaching, including the situation of testing listening in Nepalese schools. For this purpose, data were collected from secondary level English teachers teaching in Kathmandu district, which helped explore the techniques and activities employed by teachers in classrooms.

For many years, listening skills did not receive priority in language teaching. Teaching methods emphasized productive skills, and the relationship between receptive and productive skills was poorly understood (Richards & Renandya, 2010). In fact, among the four language skills, listening is the primary one: a child becomes able to speak only after getting enough exposure to the language through listening. Research shows that congenitally deaf children are unable to acquire spoken language even when given ample exposure. Therefore, listening is the most important skill of all.

In the context of our country, teaching listening receives strong emphasis in the present secondary level English curriculum. As the curriculum is based on the communicative approach, teaching listening is a must for the development of communicative competence in students. Given the worth of the listening skill, it is taught in both community and institutional (private) secondary schools. However, teaching alone is not enough; the right methodology must also be used. In this context, it is imperative to explore whether listening is taught as it should be. Furthermore, it is equally important to find out the gap between teachers’ theoretical knowledge of teaching listening and its application in the classroom. Thus, this research was carried out in order to find out the reality.

Teaching Listening
In this study, teaching listening refers to teaching listening comprehension. Listening is an activity of paying attention to and trying to get meaning from something we hear (Underwood, 1989, p. 1). It involves understanding a speaker’s accent and pronunciation, his grammar and vocabulary and grasping his meaning. For successful communication, listening skill is essential, so it should be taught to students. In order to teach listening comprehension effectively, the teacher should be clear about the skill to be developed in students. According to Rivers (1978, p. 142), before the teacher can devise a sequence of activities which will train students in listening comprehension, he must understand the nature of the skill he is setting out to develop. Field in Richards & Renandya (2010, pp. 242-247) examines a commonly used format for teaching listening, one which involves three stages in a listening activity: pre-listening, listening and post-listening.

Listening should be taught properly to students at school. Instead of leaving it to develop as part of a pupil’s general education, it should be taught explicitly. Students spend half of their classroom time listening, so the skill should be developed properly. In this context, Hron (1985, as cited in Rost, 1994, p. 118) suggests that listening should be developed in all school children, since it is a vital means of learning that may be as important as reading. In order to teach listening properly and effectively, appropriate approaches should be used; without them, listening skills cannot be taught well, and developing such approaches requires understanding the nature of listening. In this regard, Nunan, in Richards and Renandya (2010), mentions two models: the bottom-up and the top-down processing model. The bottom-up model asserts that listening is a process of building meaning from phonemes up to complete texts. The top-down model, on the other hand, views listening as a process of constructing meaning on the basis of the listener’s shared or prior knowledge. Both models should be taken into account while teaching listening.

Stages of Teaching Listening
There are generally three stages of teaching listening, viz. the pre-listening, while-listening and post-listening stages. They are also known as listening techniques.

1. The Pre-listening Stage
This is the first stage of teaching listening. At this stage, students are given some background information about the audio. Indeed, this is the preparatory phase, in which students are prepared and motivated for listening and performing the tasks. Following Underwood (1989, p. 3), it consists of several activities, such as giving background information, looking at pictures, discussing the topic, and question and answer.

2. The While-listening Stage
In this stage, the students listen to audio, perform the activities and do the tasks based on the listening comprehension. This is the actual listening stage whereby students are asked to do exercises based on the audio. The main purpose of this stage is to help the students develop the skill of eliciting messages from spoken language.

3. The Post-listening Stage
This is the final stage, where follow-up activities are done. As its name implies, the post-listening stage embraces all the activities related to a particular listening text that are done after the listening is completed. In a way, this stage is an extension of the activities done at the pre- and while-listening stages. Problem-solving and decision-making activities, interpreting activities, role-play activities, written work, etc. can be exploited at this stage.

Testing Listening
Listening is one of the crucial language skills. Therefore, like the other skills, it should be taught and tested properly and regularly. While testing listening, different aspects of language should be tested; these generally encompass grammatical knowledge, discourse knowledge, pragmatic knowledge, sociolinguistic knowledge, etc. Both listening perception and listening comprehension skills are to be taken into account while testing listening. In this regard, Buck (2010, p. 105) suggests that test developers choose from among the following aspects of language competence, according to the requirements of their test:

  • Knowledge of the sound system
  • Understanding local linguistic meanings
  • Understanding full linguistic meanings
  • Understanding inferred meanings
  • Communicative listening ability

Listening Skill in English Curriculum of Secondary Level
The present secondary level English curriculum is based on the communicative approach to language teaching. It incorporates the four language skills and language functions in its content, and listening is among the skills emphasized. In the examination, 10% of the total marks are allocated to listening. For the development of listening skills, there is a provision for a listening lesson in each unit of the textbook, and the Curriculum Development Centre (CDC) has developed audio cassettes for classes nine and ten. The curriculum mentions the following objectives of teaching listening:

  • Listen to spoken text, understand the gist and retrieve specific information from it.
  • Take notes on, or summarize, the main points of spoken messages.
  • Respond appropriately to spoken directions or instructions.

Teaching and Testing Listening at Secondary Level
The secondary level curriculum of Nepal aims at developing communicative competence in students; that is to say, it is based on the communicative approach to language teaching. Teachers at this level are therefore expected to teach listening in accordance with the objectives of the curriculum. This skill should receive the same focus as the other language skills, but in reality it has been neglected. The research found that listening was generally not taught at the secondary level in Nepal, although some schools did give importance to teaching it. The researcher visited different schools and used different tools for the purpose of the research. From a theoretical point of view, almost all the teachers were found to have sound knowledge of teaching listening; however, this knowledge was not used in actual classrooms.

Teaching and testing should go hand in hand. Teaching listening should be fostered by testing, as testing is an integral part of teaching. Whatever is taught in the classroom should be tested, for items neglected in testing are generally neglected in teaching as well. Therefore, for the effective teaching of listening, it should be tested seriously in examinations. However, in the secondary schools of Nepal, listening is not tested properly. There is a provision for testing it in the curriculum, but its implementation is very poor; testing listening has become a mere formality. It is not tested properly even in the School Leaving Certificate (SLC) board examination. Listening is a crucial component of language learning, and without a sound command of it, learners cannot achieve sound communicative competence.

The major findings of the study are as follows:

  • The majority of the teachers were found to use only two stages of teaching listening (i.e. pre- and while-listening).
  • Although the majority of the teachers had sound theoretical knowledge of teaching listening, they did not employ it in their classroom teaching; only a few teachers used this knowledge while teaching listening. Thus, there is a vast gap between teachers’ theoretical knowledge of teaching listening and its application in the classroom.
  • No teachers were found to conduct listening lessons as the course prescribes. Most did not conduct them consistently: sometimes once a week, sometimes only once a month.
  • The majority of the teachers did not give priority to listening skills; they did not regard listening as one of the most important language skills.
  • Listening skill was not tested properly in the secondary schools of Nepal.

The following recommendations are made on the basis of the findings of the research:
• All the teachers who are teaching listening at secondary schools should use at least the following listening techniques and activities.
a. Pre-listening stage
– Giving background information of the listening text
– Picture discussion
– Discussion on the topic and/or situation
– Reviewing the areas of grammar
– Simplifying the meaning of difficult words given in the text
b. While-listening stage
– Short answer questions
– True/false items
– Fill in the blank items
– Multiple choice items
c. Post-listening stage
– Writing or presenting the summary
– Parallel writing
– Dictation
• Most of the teachers were found to have sound theoretical knowledge of teaching listening but did not apply it in the classroom. Therefore, seminars and workshops should be organized to refresh and enhance the teachers’ skills; in particular, NCED should develop a special training package for teaching listening.
• Most of the teachers were found to neglect listening and not give it priority in their teaching. Therefore, it should be made an important part of examinations and tested properly. More broadly, teachers and other stakeholders should be made aware of the importance of listening via conferences, seminars, workshops, etc.
• The secondary level English course prescribes a listening lesson in each unit, but teachers were found not to teach listening according to the course. Thus, teachers should be encouraged to use enough listening materials in order to give students sufficient exposure.
• Some schools were found to lack appropriate listening materials. Thus, the concerned authority should make mandatory provision for supplying the required materials in every school.

References

Buck, G. (2010). Assessing listening. Cambridge: CUP.
Cross, D. (1992). A practical handbook of language teaching. London: Prentice Hall International.
Crystal, D. (1994). An encyclopedic dictionary of language and languages. Harmondsworth: Penguin.
Harmer, J. (2007). The practice of English language teaching. Edinburgh Gate: Pearson Education.
Harmer, J. (2008). How to teach English. Edinburgh Gate: Pearson Education.
Lynch, T. (2007). Study listening. London: CUP.
Richards, J. C., & Renandya, W. A. (Eds.). (2010). Methodology in language teaching. Cambridge: CUP.
Rivers, W. M. (1978). Teaching foreign language skills. London: University of Chicago Press.
Rost, M. C. (1994). Introducing listening. Harmondsworth: Penguin English.
Underwood, M. (1989). Teaching listening. London: Longman.
Ur, P. (2010). Teaching listening comprehension. Cambridge: CUP.

Bhupal Sin Bista
Master in English Education,
Mahendra Ratna Campus,
Tahachal, Kathmandu

Classroom Assessment: A New Era in Language Testing or an Additional Exercise?

Presented by: Ashok Raj Khati and Manita Karki

Language testing cannot be separated from changing understandings of the nature of language, language abilities, and language teaching and learning. Accordingly, what is tested in language teaching has been changing drastically in recent times as a result of changes in what is taught. In this regard, we have entered a new era in language testing: classroom assessment, also termed performance assessment.

In recent years, there has been growing discussion of whether classroom testing should replace other tests. In this essay, we suggest that it should work as a supplement to paper-and-pencil tests. The method may not be capable of replacing established methods of testing, but a number of benefits make classroom-based language testing more genuine and better attuned to effective language teaching and learning by today's standards.

Let us begin with the central role of the teacher in classroom assessment, illustrated by a real story.

At an award ceremony for School Leaving Certificate (SLC) graduates, a teacher stepped forward and said to a student he had taught for years, "How did you get the first division? You deserve the second division." Though the student had passed the SLC and received a first-division certificate, the teacher remarked, quite confidently, that he should not have got the first division.

The story indicates that teachers spend a long time with their students and are able to evaluate them more or less accurately. In many countries, the teacher is the authority: if a student is unable to sit the final examination for some reason, the teacher has the right to recommend grades or percentages to the examination board on the basis of the student's internal/classroom assessment, and the board accepts them. Such trust presumes that teachers are fair and ethical. However, there are many other contexts where teachers have not gained this sort of credibility. The point is that it is the teacher who can best judge his or her students, and it is classroom tests that allow teachers to do so. Therefore, classroom assessment comes close to what we have been striving for over a long time.

Secondly, in most cases we make a machine-like judgement when we test students through paper and pencil outside the class, whereas it is a human mind that makes the judgement in classroom assessment. It used to be believed that everything could be tested by a paper-and-pencil test, but now people have started asking how. There are things we want to test that cannot be tested by a paper-and-pencil test because of limitations of expertise, time, and other resources; the answer to this problem is classroom testing. There are many things we can do in the classroom that cannot be done through a paper-and-pencil test. Classroom assessment is therefore genuine and worth implementing.

Classroom assessment, or performance assessment, is genuine because one cannot test people's actual language ability unless they are actually performing an act by using the language. It is the classroom that allows learners to perform; in this regard, classroom assessment captures genuineness. Many scholars have realized that a paper-and-pencil test, whether based on the communicative approach or something else, cannot authentically test students' performance; in particular, a large-scale test cannot be a performance-based test. There were classroom tests after 2010, but those tests were used for internal assessment. Classroom tests are different; they are bound to be different, and they are of different designs.

Classroom assessment is collaborative in nature. When students obtain marks in a board examination, one common thing they cannot figure out is the basis on which their answers were marked, and consequently they think they have been given less than they deserve. In classroom assessment, however, the teacher works with students before, during, and after the assessment. The presence of students makes the teacher cautious and transparent; thus, the teacher judges students in a collaborative manner. Further, teachers can also assess students' performance by assigning group work, which distinguishes classroom testing from large-scale assessment and adds one more dimension to collaboration.

The best thing about classroom testing is that it is learning-focused, and as a result it has positive washback. It focuses mainly on process and less on product. In classroom assessment, teachers get ample opportunities to observe their students' different learning processes. By contrast, a paper-and-pencil test may not be able to create situations and offer students adequate opportunities to demonstrate different abilities and skills and to perform certain tasks; it is more product-oriented. It is only the classroom test that can make learners perform tasks while being tested.

In the same way, classroom assessment is a social phenomenon. The classroom is a society: a school is run for teaching and learning, but at the same time we manage it so that it represents the wider society. Thus classroom assessment is a social phenomenon in which students learn and practise performance-based activities, which they will continue to practise outside the classroom.

In terms of creativity, classroom testing is not an entirely new approach, because earlier approaches also tried, in some ways, to capture what this approach tries to do. A good example is how Bloom's taxonomy captured a range of competencies from simple to complex. On many occasions it is very difficult to capture learners' psychological processing, and we have to be tentative in assessing it. Testing cannot be an exact science; it is different from many other activities. The focus of language testing is: what is the content of the language, where is it, and how do we get hold of it? Scholars who have advocated communicative testing have now realized that what they were trying to accomplish with it is something different, and icons of language testing hold differing views on it; some say that it is not necessary to test communicative abilities through a communicative approach.

After 2010, testing has moved into assessment, and assessment has moved into performance; testing tends always to be indirect unless one asks students to perform certain tasks. It is not the test that tests; it is the tasks that test. It may be hard to determine whether classroom testing can entirely replace communicative testing. However, classroom-based testing can be a focus of testing because it is very close to reality, since teachers ask students to do those tasks in the classroom which they are supposed to do outside the classroom in their real lives.

When talking about classroom-based testing and communicative testing, a question of construct may arise. The construct comprises the basic characteristics of the activities of an event, the psychological and philosophical aspects of skills and abilities, and the quality of the content. The construct in communicative language testing may be assessed indirectly by bringing language performance into the classroom and assessing it there. The concept of communicative language teaching and testing, in a real sense, has been changing. Henry Widdowson, one of the prominent scholars in the field of Applied Linguistics, wrote a book in 1979, "Teaching English as Communication". In 2000 he said that if he were to revisit that book, he would call it "Teaching English for Communication", for he had realized that it is not possible to teach English as communication. He was excited to talk about communication in the 1980s, but later he found that it was not easy to capture communicative activities, bring them into the classroom, and make communication happen there. In some ways teaching has to be indirect and less communicative, and it is difficult to bring communication into the classroom.

In a way, the philosophy behind communicative language teaching (CLT) is a continuation of what we have been doing for the last 70 years. CLT, too, is in some measure based on the paper-and-pencil test: at the end of the day, teachers give students a test on which they may not authentically perform language use. With this shift in language teaching and testing, teachers and scholars began to realize that classroom assessment should be an additional learning exercise. A genuine assessment must therefore be a performance assessment and an inherent part of the whole process, and that is the next era of language testing. This does not mean that communicative language testing has nothing to do with language teaching and testing in the days to come; we are still using multiple-choice items from the 1960s. All previous methods of language testing have contributed a great deal, but we are moving toward something new. The communicative approach in testing will also continue, because it has strengths and potential, but at the same time the thrust of classroom assessment needs to lead classroom teaching and learning activities.

In sum, classroom assessment is an important approach to language testing. It appears to be very close to what we have been trying to find. It may take time to build strong enough ground to become a prominent approach, so for now classroom assessment is an additional option, not a replacement. It will contribute to making assessment more authentic and better attuned to current understandings of language learning, and it will be a good instrument for improving teaching and testing in the classroom.

(The piece is based on a lecture delivered by Prof. Dr. Tirth Raj Khaniya at the School of Education, Kathmandu University)

Ashok Raj Khati

M. Phil
Kathmandu University

Manita Karki

M. Phil
Kathmandu University