The Journal of Asia TEFL
Volume 15 Number 4, Winter 2018, Pages 900-1238

http://dx.doi.org/10.18823/asiatefl.2018.15.4.10.1036
Different Rating Behaviors between New and Experienced NESTs When Evaluating Korean English Learners' Speaking

Kyoung Rang Lee


This study explored variations among native-English-speaking teachers (NESTs) when evaluating English learners' speaking, according to their length of teaching at one institute (new vs. experienced), and compared the results with the teachers' writing evaluation behaviors. Using the same methodology as the writing study, the speaking materials were carefully prepared, and delivery features such as pronunciation were excluded so that the results would be comparable with the writing evaluation. Whereas the experienced NESTs had evaluated Korean English learners' essays more severely than the new NESTs, contrary to previous studies, no such difference emerged in speaking. In evaluating the learners' English writing, the experienced group resembled non-native-English-speaking teachers (NNESTs), unlike the new NESTs. In evaluating speaking, the new NESTs were more consistent in weighting content heavily, while the experienced NESTs showed rating behavior different from that in writing. Lastly, unlike previous studies but consistent with the writing study, the NESTs' perceived difficulty of grammar, content, and vocabulary played a more important role in grading than their perceived importance. The results are elaborated, and discussion and implications are provided. The study also suggests a new approach to examining rater biases in relation to English learners' expressive skills.

Keywords: NESTs and NNESTs, rater biases, speaking evaluation, new raters, experienced raters