TOEIC®: Tried but undertested
by Mark Chapman (Hokkaido University)
|"Why has ETS produced 23 times more research reports on the TOEFL than on the TOEIC?"|
ETS may have seen little benefit in pursuing the same degree of research into the TOEIC that had already been conducted into the TOEFL. Moreover, ETS's experience of extensively publishing research on its own test, the TOEFL, may have led it to question whether that process invites skepticism and further critical investigation. Shohamy (2001, p. 148) reports that "there is low trust on the part of the public with regard to research conducted by companies that also develop and market tests, in a similar way that there is research conducted by profit-making drug companies on the drugs they produce." Whether or not this was a factor that prompted ETS to avoid publishing extensive in-house research on the TOEIC is, of course, a matter of speculation.
Despite the TOEIC now having been in use for almost 25 years, it has not changed at all. It is still based on the structuralist, behaviorist model of language learning and testing that informed discrete-point testing. If ETS has accepted that this model is no longer suitable as a basis for the TOEFL, why has the TOEIC not been treated similarly? The lack of critical research is surely a major factor, along with the lack of an effective feedback mechanism from end users (corporations) to the test maker. The TOEIC cannot have been ignored by ETS because of minority status: more people now take the TOEIC every year than the TOEFL. In 2002 more than 2.8 million individuals registered to take the TOEIC in more than 60 countries worldwide (ETS, 2003), more than twice the number that took the TOEFL in the same period. Given the importance of the TOEIC to ETS in business terms, it is perhaps even more surprising that there is no indication of the TOEIC receiving the same degree of research attention devoted to the TOEFL.
Three reports have provided data that conflict with ETS research. Childs (1995) is highly critical of the TOEIC. His independent data suggest that the reliability estimates provided by ETS are overstated. He also concluded that the standard error of measurement (SEM) of TOEIC scores is greater than the published ETS figure, making TOEIC scores less reliable as a measure of individual progress, since score gains tend to fall within the test's SEM. Hirai (2002) also expressed doubts about the ability of the TOEIC to predict individual oral and written English proficiency. In a study conducted with employees of a major Japanese company, he suggested that the TOEIC was especially unreliable as a predictor of spoken English for individuals with intermediate-range TOEIC scores (approximately 450-650). Hirai found that TOEIC scores had a low correlation (around 0.5) with scores on BULATS, a test of writing in a business context. Finally, an unpublished MA dissertation (Cunningham, 2002) reported that the TOEIC was a very poor predictor of communicative competence and was not at all suitable for measuring gains in communicative performance. Cunningham used a self-designed test battery, and while the research should not be entirely discounted, the fact that the TOEIC was not compared to an established test needs to be borne in mind.
References
Boldt, R. F., & Ross, S. (1998). Scores on the TOEIC® (Test of English for International Communication) test as a function of training time and type. Princeton, NJ: Educational Testing Service.
Eggly, S., Musial, J., & Smulowitz, J. (1998). The relationship between English language proficiency and success as a medical resident. English for Specific Purposes, 18(2), 201-208.