The Journal of AsiaTEFL

Volume 9 Number 2, Summer 2012, Pages 1-148   



From the Editor-in-Chief

    Bernard Spolsky


I start with a brief account of the papers in this issue and a comment on how they exemplify the remarks I will make afterwards. Those remarks are not intended as criticisms of the articles themselves; it is important to stress that a published paper has survived a rigorous peer-review system and has usually been revised to meet the reviewers' suggestions. Rather, they raise a problem that has recently been noted in scientific publishing as a whole: our bias for positive results.
The first paper, by Afsa Rouhi and Hassan Mohebbi of Payame Noor University in Iran, compares the effects of glosses in the students' first language and in English on the learning of English vocabulary. Forty-four pre-university students read three sections of the assigned reading: one group was shown glosses in Persian, another glosses in English, and a third no glosses. The students were tested for learning of the new vocabulary once immediately after the seventh session and once two weeks later. Those who were shown glosses scored significantly better than the control group, but there was no significant difference between glosses in Persian and glosses in English. The perhaps obvious conclusion is that it helps to learn new vocabulary if you are told the meaning of the words. Thus a positive result was achieved, although the questions of which method was better, and whether there might be another method, were left unanswered.
In the next paper, Stephen Evans and Bruce Morrison of the Hong Kong Polytechnic University report on a three-year series of interviews with a small group of students, which confirmed that those coming from English-medium schools found university study easier than those from Chinese-medium schools, and which suggested that teachers did not allow for the differing English proficiency of their students.
The third paper, by Tomohito Hiromori of Meiji University, Hiroyuki Matsumoto of Hokkai-Gakuen University, and Akira Nakayama of Ehime University (all in Japan), sets out to study the differences between successful and unsuccessful readers of English. It studied 234 students from different majors and at different proficiency levels who were studying English once a week in classes that included the teaching of strategies. The students reported on their motivation, strategies, and beliefs, and took a reading achievement test. Data were collected at the middle and at the end of a 14-week course. Cluster analysis produced four groups at the first test, two with high proficiency and motivation and two with low. The post-test suggested that the four groups progressed differently, with the differences between them decreasing and no significant difference in achievement appearing. One of the difficulties I have in interpretation is that the students' answers to questionnaires are taken to be the same as the characteristic named, whether belief, motivation, or strategy. I also note the lack of a positive improvement in proficiency, which suggests that a course of one hour a week does not lead to noticeable progress. But this negative result is not highlighted.
The fourth paper, by Ye Han of Shantou University in China, asks about the effects of written corrections on student writing. A class of 18 students was studied (six were dropped from the analysis because of missed classes or other problems): one group received corrections, a second had errors underlined, and a third was untreated; the corrections all involved the past tense form. Those who received corrections showed short-term improvement, and there was some suggestion that the noting of errors had some long-term effect. But the small sample makes it difficult to generalize.
In the next paper, Le Duc Manh of Hai Phong University, Vietnam, raises questions about the proposal to teach some university subjects in English. He reviews some of the attempts to implement such a policy elsewhere, and then analyzes in some detail the issues that will need to be resolved in Vietnam, given the current low standard of English. Such a major change will need substantial resources and careful implementation.
The last paper, by Rod Pederson of Woosong University, sets out to analyze and expand the notion of situated learning. It reviews the development of the concept, shows how it relates to current theories of second language learning and teaching, and calls for a modified theory.
Leaving aside the last two papers, which are theory- and policy-oriented rather than research-oriented, the first four fit the common pattern for published research in our field: an experimental study for which the writer does his or her best to provide a positive finding. It is hard to find papers that report negative results, except in the small studies that many of our contributors are forced into by the absence of funding support in our field. In fact, one of two things happens. In the first, we report that the result was not statistically significant, but still claim that it tends to support our positive view. But if the results are not statistically significant, we have no right to make this claim, and editors should be stricter in demanding that such wishful thinking not be included. The second possibility is that the paper is not published, either because we self-censor and decide not to report our failures to achieve positive results, or because the editorial review process decides that a failure is not worth publishing.
The result, as has recently been discussed in the journal Nature, is that publications are biased, reporting only positive results and ignoring the many cases where research shows that the treatment makes no difference. This happens in all fields, especially in studies of the effects of drugs and medicines that will be sold; the equivalent in our field is probably the experimental use of materials or methods we hope to propagate. One of the unfortunate effects is that people lose confidence in published research, which turns out not to work when we try it for ourselves. There is a major difference apparent in our field compared with the hard sciences, where the first step is to attempt to replicate previous studies: lack of replication needs to be explained before going on to the next step. We take an easier route: we summarize previous studies but assume they would work on our populations and in our situations.
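The mechanics of this bias are easy to demonstrate. The short simulation below, my own illustration rather than anything drawn from the papers discussed here, generates many small studies of a genuinely weak effect, "publishes" only those reaching conventional significance, and shows how the published record overstates the true effect:

```python
import math
import random

random.seed(42)

TRUE_EFFECT = 0.1  # a weak but real effect
N_PER_STUDY = 20   # a small study, typical of underfunded research

def run_study(true_effect=TRUE_EFFECT, n=N_PER_STUDY):
    """Simulate one small study: return the observed effect size and
    whether it reaches conventional significance (|z| > 1.96)."""
    se = 1 / math.sqrt(n)                       # standard error of the estimate
    observed = random.gauss(true_effect, se)    # true effect plus sampling noise
    z = observed / se
    return observed, abs(z) > 1.96

studies = [run_study() for _ in range(10_000)]
all_effects = [e for e, _ in studies]
published = [e for e, significant in studies if significant]

print(f"true effect:                   {TRUE_EFFECT:+.3f}")
print(f"mean effect, all studies:      {sum(all_effects) / len(all_effects):+.3f}")
print(f"mean effect, 'published' only: {sum(published) / len(published):+.3f}")
print(f"share of studies 'published':  {len(published) / len(studies):.1%}")
```

Across all studies the average observed effect stays close to the true value, but the significant studies alone, the only ones that survive the filter, suggest an effect several times larger; only replication and the reporting of null results can correct the picture.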
We should all worry about this, and our editors and reviewers should perhaps consider taking a harder look at the papers we read, expecting as a general rule careful replication of earlier work, and statistically significant results whether positive or negative. A friend of mine, a distinguished scientist, once described the examination he gave his advanced students. He gave them the details of a study with its results, and asked them to write up their conclusions. Halfway through the examination, he presented a contradictory set of results and asked them to explain the discrepancy. A good study, he argued, could yield useful conclusions from positive and negative results alike. We need to accept the value of negative results, the usefulness of replication failures, and the duty to consider more carefully how our published research contributes to progress in the field.



Jerusalem, May 2012
Bernard Spolsky,
Editor-in-Chief and Asia TEFL Publications Executive Director