Thursday, November 15, 2007

Continuing on the ELF/World Englishes debate

The two articles (Ellis, 2006, and Seidlhofer, 2001) stimulated our discussion of the perspectives and issues surrounding error correction within the ELF and World Englishes frameworks. I think we each had a lot to say, and I would like to hear more from you! Let's continue the discussion in this forum :)

Friday, November 9, 2007

Weekly Reflection (W12) by Myong Hee

On Thursday (L2 writing & error correction)
Covered Bitchener, Young, & Cameron (2005) and Ferris (2004)

About Design
• The three groups were different, not equivalent (Yao)
• In reality, we are likely to use intact classes, and this can create very different contexts; this is problematic.
• Another methodological dilemma in error correction: longitudinal studies (hard to control variables) vs. one-shot experimental studies (not longitudinal)
• A pre-test seems to be missing (Yun Deok)
• The 1st essay can be used as a pre-test, or administering a relevant grammar test could be another option.
• The conference for each individual seems very short and is not clearly described (Sang-ki)
It was probably done as a quick check; however, a 20-minute conference per student may not be doable.
• The number of errors in Table 1 is just raw counts, so it is not adjusted for the length of individual essays (Kevin).
• Meeting (internal) ethical requirements for control groups should be considered.

Possible ways for a Better Design
• In Table 2, stating the N (sample size) may be a good idea
• MANCOVA may be a better analytical tool, since students may have different starting points at the 1st essay and we need to track each individual’s progress across the four essays (see the sketch after this list)
• The design seems complicated; a simpler design (one feedback group vs. one no-feedback group) might be preferable
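
Here is a rough sketch of what such an analysis could look like in Python with pandas and statsmodels, assuming a toy data frame with a group column and accuracy scores on the four essays; all variable names and numbers are illustrative, not from the study:

    # Rough sketch (illustrative data only): test the group effect on essays 2-4
    # jointly, with the essay-1 score as a covariate so that different starting
    # points are adjusted for.
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.DataFrame({
        "group":  ["feedback"] * 4 + ["no_feedback"] * 4,
        "essay1": [0.55, 0.60, 0.48, 0.62, 0.50, 0.58, 0.47, 0.61],
        "essay2": [0.63, 0.66, 0.55, 0.70, 0.52, 0.59, 0.50, 0.60],
        "essay3": [0.68, 0.71, 0.60, 0.74, 0.54, 0.61, 0.51, 0.63],
        "essay4": [0.72, 0.75, 0.64, 0.78, 0.55, 0.62, 0.53, 0.64],
    })

    # MANCOVA-style model: essays 2-4 are the dependent variables, group is the
    # factor, and essay-1 accuracy is the covariate.
    mancova = MANOVA.from_formula("essay2 + essay3 + essay4 ~ C(group) + essay1",
                                  data=df)
    print(mancova.mv_test())

A repeated-measures analysis with the first essay as a covariate would be another option; the point is simply that baseline differences are modeled rather than ignored.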

About Conclusion
• Students showed ups and downs in improving their grammar. Do we need a longer longitudinal study in order to get a clearer picture?
• Try looking at particular categories of errors (rather than all the categories) in order to check students’ improvement
• Conferencing seems to work, so it should be implemented.
• The ‘treatable’ vs. ‘not treatable’ definitions are problematic (e.g., articles in English grammar). These terms are based on intuition.

MyongHee’s thought: In most studies in this area, NS teachers provide error correction on NNS students’ L2 writing. However, in reality, most English teachers in EFL contexts (e.g., Korea, Japan, & China) are NNSs whose English language skills, including grammar, are not perfect. I wonder to what extent the findings and suggestions of these NS teacher-NNS student studies of written error correction are applicable and relevant to NNS teacher-NNS student contexts.

Weekly reflection (Nov 5th & 8th) by Yuki

Commentary for November 5th and 8th (by Yukiko)

Sorry for the late reflection notes. Please feel free to comment or modify the text below.


The reading topics of the week were socio-cultural perspectives on the collaborative learning process. Socio-cultural theorists hold that learning is socially constructed and is mediated by symbolic artifacts (e.g., language). The following articles were discussed in class:

Tuesday:

  • Aljaafreh, A., & Lantolf, J. P. (1994). Negative feedback as regulation and second language learning in the zone of proximal development. Modern Language Journal, 78(4), 465-483

  • Nassaji, H., & Swain, M. (2000). A Vygotskian perspective on corrective feedback in L2: The effect of random versus negotiated help on the learning of English articles. Language Awareness, 9, 34-51.

  • Nabei, T., & Swain, M. (2002). Learner awareness of recasts in classroom interaction: A case study of an adult EFL student’s second language learning. Language Awareness, 11(1), 43-63.

Thursday:

  • de Guerrero, M. C. M., & Villamil, O. S. (1994). Social-cognitive dimensions of interaction in L2 peer revision. Modern Language Journal, 78(4), 484-496.

  • de Guerrero, M. C. M., & Villamil, O. S. (2000). Activating the ZPD: Mutual scaffolding in L2 peer revision. Modern Language Journal, 84(1), 51-68.


1. Summary and commentary on Tuesday discussion

Researchers’ backgrounds:

  • M. Swain: She shifted her perspective on L2 learning from a cognitive-interactionist to a Vygotskian socio-cultural perspective. Nabei and Nassaji were her PhD students.

  • J. Lantolf: His recent research interests include linguistic typology and gestures. Aljaafreh was his PhD student.

The first two studies (Aljaafreh & Lantolf, 1994; Nassaji & Swain, 2000) were guided by Vygotsky’s notion of the zone of proximal development (ZPD) in analyzing the learning process during tutor-learner dialogic collaborative activity. In this framework, the feedback prompts that teachers/tutors provide are considered to mediate learning (microgenetic learning: learning within a short period of time). Aljaafreh and Lantolf provided a very useful framework identifying twelve levels of regulatory strategies for feedback prompts, ranging from implicit to explicit (most implicit prompt: the learner identifies the error independently; most explicit prompt: the tutor provides examples of the correct pattern after explicit explanation). Their framework seems very practical and can also be used as a guideline for “graduated” feedback in instruction. David commented that it is natural that many teachers adjust to learners’ ZPDs in order to provide effective feedback. In the classroom, among peers, different people fill in and contribute to reaching an understanding from different ZPD starting points; it is impossible to adjust to each individual’s ZPD in a teacher-fronted classroom.

Based on Aljaafreh and Lantolf’s (1994) ZPD scale, Nassaji and Swain (2000) examined the difference in learning articles between a student who received corrective feedback with graduated, contingent help within the ZPD (ZPD condition) and a student who received randomly gauged help (non-ZPD condition). In the random (non-ZPD) condition, Nassaji and Swain also explored the relationship between the quality of help provided on each article error and the accuracy of performance on each article error in the final test. They concluded (a) that graduated help was more effective than random help, and (b) that more explicit help produced more accurate results than implicit help. We have to take these results with caution for the following reasons:

  • The two learners seemed to have different proficiency levels in terms of article use from the beginning. The ZPD student made 28 article errors across the four tests, while the non-ZPD student made 20 errors (in one writing, the non-ZPD student had perfect article use).

  • The final test involved an article cloze test based on their original writing. The indicator of learning was the accuracy ratio of article use in the final test for each passage (the number of items ranged from 1 to 12). Because the number of items was so small, drawing conclusions from proportion scores may be misleading (see the sketch below). While it seems like a good idea to capture exactly how students learn based on their own writing, from a psychometric viewpoint the tailored test may not be sufficient to provide trustworthy quantitative evidence.
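
As a rough, back-of-the-envelope illustration of this psychometric concern (the numbers below are hypothetical, not from the study), the standard error of a proportion only shrinks as the number of scoreable items grows:

    # Back-of-the-envelope sketch (hypothetical numbers): binomial standard error
    # of an accuracy proportion, sqrt(p * (1 - p) / n), for passages with few vs.
    # many scoreable article slots.
    from math import sqrt

    p = 0.75  # observed accuracy of article use in one passage (illustrative)
    for n in (4, 12, 40):  # number of article slots scored in the passage
        se = sqrt(p * (1 - p) / n)
        print(f"{n:2d} items: accuracy {p:.2f} +/- {se:.2f} (1 SE)")
    # With only 4 items the possible scores are 0, .25, .50, .75, 1.0, so a single
    # slip shifts the estimated 'learning' by 25 percentage points.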

Some suggestions were made to improve the design of the study. It would be more convincing if there were another article cloze post-test with a passage other than their original writing. Using a different passage would potentially show transfer of learning.

Despite the above remarks, the study offered insights into the dynamic and contextualized nature of corrective feedback. Lourdes commented that we can use the ZPD scale to assess the teacher/tutor/expert engagement and help that a learner needs. It will be interesting to see how some students may require more mediation to self-regulate and appropriate one type of information.

Nabei and Swain (2002) do not mention the socio-cultural framework, but they look at the learner as an agent. The learner in the case study “chose when to make use of the learning opportunities presented to her” (p. 59), willingly tuned in and out of the learning context, and was more engaged and cared more in the group environment. Nabei and Swain concluded that the student’s reaction to the recasts she received was affected by the discourse contexts (e.g., the teacher’s intention, group vs. teacher-fronted work) and by the learner’s orientation. Close attention to learner agency and orientation, and to contextual factors, is called for.

Lourdes pointed out that Nabei and Swain’s study is a good example of providing a thick description of the learner and the interactional context.

2. Summary and commentary on Thursday discussion

A. On IRB issues

We had an extensive discussion on IRB issues. Here are some tips for getting IRB approval. Please check with the IRB office directly for accurate procedures and information.

(a) Which category do we submit?

  • If it's a regular educational intervention/instruction and it has no potential harm, our research usually falls under the "exemption" category. You need to go through the IRB's criteria to see if your study meets their eligibility criteria for exemption.

(b) The research involves your own students and the intervention is part of your regular teaching practice.

  • Get approval to use classroom data (e.g., writing, grades, homework, etc.) from your students after course grades have been provided.
  • Ask an IRB officer to come to your class at the beginning of the semester to collect information on consent/non-consent to participate in the research. The IRB will keep the information until the grading is done. Later, you can use the permitted data for your research.

(c) Is it ethical to provide extra credit to those who participated in research?

  • Minimal compensation is fine. You cannot withhold the compensation you promised to provide, even if your participant decides to withdraw in the end.


The rule of thumb is to ask for IRB approval before you start conducting your study!


B. Summary and commentary on the article discussions

Both the 1994 study and the 2000 study by de Guerrero and Villamil are part of a large-scale project involving 40 dyadic (one reader and one writer) interactions during peer revision of writing.

One of de Guerrero and Villamil’s contributions in the 1994 study was their coding system specific to peer interaction for revising writing. They created an analytic framework for categorizing episode types (on-task, about-task, and off-task), interactional types within on-task interaction (e.g., reader-writer interactive revision, writer-teacher interaction), cognitive stages of regulation (object-regulated, other-regulated, and self-regulated), and social relationships (symmetrical vs. asymmetrical).

The peer review sessions of 27 pairs of ESL writers revealed that most students remained on-task, engaging in complex and productive interaction and regulating themselves depending on tasks and roles (reader or writer). It was interesting to see the fluidity of regulation types, as well as the effect of social relationships on the cognitive stages of regulation.

In the 2000 study, the analysis of one dyadic pair uncovered how the reader mediated the revision by flagging problematic phrases and linguistic errors, providing instructions and models, and giving supportive comments. The revision process revealed how social interactions shape the revision of the text and how personal and affective exchanges lubricate mutual self-regulation and commitment to task accomplishment.

In both studies, L1 use was considered a mediating tool and a resource for the learners to achieve higher-level mental operations. The use of the L1 will depend on the dynamics and the purpose of the class; however, it can be a facilitative tool for learners.

In collaborative work, regression is natural in a dynamic learning process, as students may not come up with the right answer and/or solution. Learning does not happen in a linear fashion; thus, de Guerrero and Villamil conclude, and I concur, that peer collaborative work can provide learners with opportunities to appropriate learning strategies and tools which they can eventually use in their own problem solving.

Wednesday, November 7, 2007

Weekly Reflection (W12) by Myong Hee

On Tuesday (L2 writing & Error correction)
Covered Hyland (1998), Hyland (2003), Hyland & Hyland (2006)

Use of feedback
Individual differences in the amount of feedback used and in preferences

• Maho – Received more feedback overall but fewer usable comments
Did not incorporate the teacher’s feedback much (22%)
Preferred feedback on her ideas (gave less priority to grammatical accuracy)

• Samorn – Used the teacher’s feedback more (82%)
More concerned about grammar & interested in improving this aspect
However, lost her confidence in her grammatical competence by the end of the course

Dealing with plagiarism
• When citing sources in their writing, students may have different assumptions and practices due to their cultural backgrounds
• Teachers may react differently due to individual and cultural differences.
• When providing feedback regarding plagiarism, which is better: direct or mitigated?
• The teacher should have been more direct and explicit in dealing with Maho’s unacceptable behavior, since she may not have been aware of its consequences (Kevin)

Issue of mitigation
• When to be ‘mitigated’ and when to be ‘direct’?
• Teachers need to clarify confusion about feedback (Yukiko)
• Teachers’ indirect feedback seems confusing & not effective, based on her own experience as an L2 writer (Yun Deok)
• Sangki – the degree of mitigation may vary depending on the type of error

Lourdes’ comments: The terms ‘treatable’ & ‘nontreatable’ by Ferris (1999) are not well grounded.

Issue of revision
As a teacher
• Providing good feedback may take teachers time; they may have better ideas once they know their students. Likewise, training may help make peer feedback effective
• Providing feedback involves 2 things: (a) how do I do it and (b) knowing what the student’s intent was

As a student
• Form-focused feedback may not bring about global-level revision (Luciana)
• Revision is a high-level skill: it may involve more than feedback & may not be directly related to feedback
• A good writer takes feedback as a prompt to generate better ideas for revising the whole text

Lourdes’ comments: We may need future studies on students’ revision skills or on training students to revise

Questions to think about
1. Observations in which you recognize yourself as a writing teacher
2. Observations in which you recognize yourself as a writer

Sunday, October 28, 2007

weekly reflection (week 10) by Ping and Yao

I have to thank Yao for taking careful notes in class. I only tweaked them by adding my own notes. I hope the commentary below captures most of the main ideas we discussed in class. Please feel free to comment on anything I missed. Thank you.

Q: How refined should the transcription be?

From a CA perspective, any pause can carry interactional meaning. It's impossible to determine what a pause means beforehand. The entire interactional context, including prosody, gestures, pauses, and gaze, should be taken into account. Therefore, a careful transcription is necessary. The decision on how refined the transcription should be is really based on the researcher's analytical purposes. Transcriptions are always selective. The bottom line is to be as faithful as possible to your data. And the transcription system should be consistent in order to avoid confusion. The final transcription is always the result of a series of compromises between faithfulness to the data and the readability of the transcription.


CA's emic interests: studying behavior from inside a particular system, looking at subsequent turns to interpret the actions intended to be achieved

Emic vs. Etic

This emic vs. etic distinction comes from anthropology.
emic: looking at the data and letting the categories emerge from the data. The term 'emic' comes from phonemic: the study of sounds as they represent categories that can form contrasts (e.g., very vs. bury); v and b may not make a phonemic difference but probably make a phonetic difference. Meaning comes from within the participants in the context. From the participants' own perspective, analysts examine what is achieved in the sequential development and how meaning is made relevant to the participants. In other words, meaning is situated in the context and can't be described outside the context.

etic: researchers impose categorization onto the data. The term comes from phonetics, the study of sounds as physically pronounced. The etic viewpoint refers to meaning from an outside perspective, not from the participants'.

Hauser (2005): there are multiple possible interpretations, and the teacher imposes his/her interpretation, but whether it is the intended interpretation is not known. Whether the meaning is maintained depends on how the meaning is negotiated.

Q: How can we interpret the actions of the participants?

Lourdes: It's like postmodernism: there is no truth out there. There is no meaning of an utterance independent of the context. Without looking at the context, it's impossible to understand how meaning is constructed through the sequence of utterances. In Hauser (2005), he is saying “this is one possibility,” “this is my interpretation of this.” He's trying to make the best approximate interpretation, but in the end there is no truth. Conversation analysts walk a fine line; there is a tension in how to use the evidence to make an interpretation, which cannot be validated.

Ping: It helps to have outsiders review the data together. It helps me refine the data and consider alternative interpretations.

Yuki: Two types of approaches are possible:
an interpretive approach
an emic approach
Lourdes: However, critical conversation analysis is not real conversation analysis.

Subject: intersubjectivity between participants;
objectivity is the ideal.
Lourdes: Though there is no single true interpretation, we need to interpret with the best methodology possible. There is a tension between taking “analyzing turns” as the universal methodology for understanding “interaction,” while at the same time a “turn” cannot be interpreted without being tightly linked to the context.
In CA studies, there's a lot of hedging in the interpretation, which indicates the author is presenting one interpretation among possible others.

Sangki: the main idea of the article is that meaning is always co-constructed and cannot be viewed without looking at the context.

Ping: For CA studies, there's a lack of longitudinal work looking at the same phenomena over time. Also, there is the concern of how to apply results to the pedagogical context.

Lourdes: Koshik (2002) is very pedagogically oriented. There's a lot more to meaning. We shouldn't underestimate the actual richness of the interactions in which corrections are given and taken. It's a healthy reminder that we shouldn't just do counting. So we have one extreme of analyzing every second of a turn to analyze meaning, and another extreme of completely setting meaning aside and just counting instances of corrective feedback.
Koshik (2002) talks about how the teacher upgrades and downgrades assistance, including through prosodic cues. The idea that assistance is incremental is interesting. The utterance was designed to be incomplete in order to prompt self-correction. CA is all about the local sequential context of interaction: how things unfold and build up to something. For this reason, CA could be used to look at when assistance is needed, when it should be upgraded, and when it should be withdrawn in the learning process. This is relevant to our reading for next week.


Ending comments from Lourdes: It's interesting that we don't treat other approaches (statistical, cognitive) as marked, but we treat CA as marked (too much jargon). The truth is that all approaches have their own jargon and they are equally legitimate. So when we choose our approach, we shouldn't treat any one of them as the default. The approach has to be a good match with the researcher.

Tuesday, October 23, 2007

Weekly Reflection (W9) by Hung-Tzu

This week, we continued our oral updates on the projects. Four presentations on diverse topics were given.

Project by Sangki and Mune

Conceptualizing agents in discourse and frequency effects in English L2 learners’ overpassivization errors: A replication and extension of Ju (2000)

Based on the cognitive explanation proposed by Ju (2000), overpassivization errors will be examined against three independent variables: causation type (2 levels: external vs. internal causers), word token frequency (2 levels: high-frequency vs. low-frequency verbs), and type of unaccusative (2 levels: alternating vs. non-alternating unaccusatives). A grammatical judgment test will be used to test intermediate learners (N=20), advanced learners (N=20), and native speakers (N=10).

The class had a brief discussion of the grammatical judgment test, including how the participants were asked to judge the sentences, the specific items on the test, and also the distracters included. Since overpassivization is an error commonly found in advanced learners, learners will not make overpassivization errors until they have knowledge of the passive. The distracters are designed to test learners’ passive knowledge. No overpassivization errors might either mean that learners are so advanced that they have no problem, or that learners may simply not have acquired passives yet.

Project by Luciana

Focus on form and self repair: Some insights into foreign language learning

Luciana gave us a brief report on the dissertation that she is currently working on. The study asks the following five research questions: (1) How do task types influence focus on form and self-repair? (2) To what degree does learners’ proficiency level affect their focus on form and self-repair? (3) What is the nature of the linguistic knowledge targeted in focus on form and self-repair? (4) How does interaction depth influence focus on form and interaction patterns? (5) Do learners perform similarly in focus on form and self-repair?
This is a very rare study, since Luciana looked at group interaction instead of dyads. Based on her preliminary data analysis, the class suggested using the median to look at the group distribution and also at individual learner participation within group activities. We also discussed the term ‘depth of LRE,’ and it was mentioned that a lower-inference label closer to how the study is operationalized, such as ‘length of LRE,’ might avoid misinterpretation by readers.

Project by David

David presented an interesting CALL project in which an alien interacted with learners and gave feedback on errors through negotiation of meaning. Suggestions on how the alien project could be expanded included tracking student responses after feedback is given, choosing a more generative target structure for the study, and providing theoretical grounding for this alternative way of giving feedback (i.e., justifying the pedagogical reasons). Yao mentioned that this type of study might be related to human-computer interaction or ethnographic research within computer environments. The following are references that Yao sent to the class list.

Hampel, R. (2003). Theoretical perspectives and new practices in audio-graphic conferencing for language learning. ReCALL, 15(1), 21-36. CALICO (Vol. 20, No. 3); PujolĂ  (2001) and Bangs (2003); Toole and Heift (2002); Heift (2003).

Project by Dan

Promoting grammar awareness with color-coded feedback

Dan reported on his pilot study of a color-coding method for giving feedback on student writing. The interviews from this pilot study revealed that students perceived the system as beneficial in terms of raising meta-awareness. Given the short treatment period, Dan suspected that there might not be a significant improvement in writing accuracy; still, the following are possible ways to demonstrate the benefit of the color-coding system. (1) Looking into students’ self-revision ability at the beginning and at the end of the color-coding treatment might be a way to quantify student learning. (2) A Likert-scale survey asking about students’ preferences for receiving feedback, administered both before and after the treatment. (3) Semi-structured interviews eliciting information on how learners engaged in the revision process using the color-coding system.

Following the discussion of the project, we did a hands-on activity applying Dan’s color-coding system to student writing. The activity led to a discussion of realistic classroom problems, such as how much feedback to give and the many decisions that teachers go through when giving feedback.

Sunday, October 14, 2007

Weekly Reflection (W8) by BoSun

This week we started with the oral presentations for the final project. On Tuesday, the presentations were given in the following order: Hung-Tzu, Kevin, Yun-Deok, and BoSun. On Thursday, Ping, Sorin, Yuki, and Myong Hee presented their research proposals. The topics varied; here are the projects sorted by mode of feedback, i.e., oral vs. written feedback.
-------------------------------------------------------------------------------------
Research projects addressing written feedback: Hung-Tzu, Kevin and Yuki

(1) Error correction in L2 writing: How successful are students in revising lexical errors? by Hung-Tzu
Her research will deal with written error correction and students’ revision of lexical errors using three different strategies (thesaurus, online dictionary, and collocation dictionary). The rationale for her research is that it is necessary to examine lexical errors separately from grammar errors, since the written feedback literature has revealed that the effectiveness of feedback types and learners’ ability to revise differ depending on the type of error, i.e., lexical vs. grammar errors (Ferris & Roberts, 2001; Gaskell & Cobb, 2004; Ferris, 2006).

The participants are 44 ESL students at UHM at two different levels (20 intermediate and 11 advanced learners), and they were taking academic reading courses with a focus on vocabulary learning. The procedure is as follows: 1) the participants complete a writing task, followed by reading activities; 2) the teacher provides indirect feedback (underlining) on five lexical errors; 3) the students revise their own writing using one of the three tools (thesaurus, online dictionary, or collocation dictionary); 4) the students reflect on and evaluate their writing and revision process. The data are 155 first drafts and 155 revised drafts, including 775 lexical errors. The data will be analyzed with a concordancer to examine the distribution of the errors and learners’ repairs depending on their strategy.

(2) Indirect error correction and improving grammar in L2 writing by Kevin
His research questions are: 1) Can indirect error correction lead to improved performance on certain grammatical constructions on first drafts in an intermediate L2 writing class? 2) Does indirect correction affect different grammatical constructions differently? His assumption is that indirect feedback involves a depth of processing that encourages students to correct their errors better.
The participants are 15 university-level ESL students aged 18-24 with various L1 backgrounds. At the time of data collection, they were enrolled in an intermediate writing class at HPU focusing on grammatical accuracy. Three drafts were collected; the teacher gave indirect feedback on the first and second drafts, and the students revised their first and second drafts and resubmitted them (1st draft-feedback-2nd draft-feedback-3rd draft). He reported that a range of grammatical errors appeared, including verb form, verb tense, incorrect articles, etc., and that the number of errors was reduced by the third draft.

(3) Enculturation into academic discourse: focus on deficiency or agency by Yuki.
She is planning to conduct two studies focusing on writers’ enculturation into academic English writing, with two different data sets: one from university-level ESL writing classes in Hawaii and the other from her own writing. In the first project, she is addressing how contextual factors shape students’ entry into the academic discourse community. The participants are divided into two groups: 21 graduate students enrolled in an advanced college academic ESL writing course and 22 undergraduate freshmen taking a freshman composition course. The data will be analyzed for 1) types of feedback, 2) incorporation of feedback by type, and 3) thematic analysis of students’ perceptions of their development in language, content, and rhetorical style.

In the second project, she is carrying out a longitudinal study of her own negotiation of and enculturation into disciplinary scholarly writing. For data analysis, she is employing autoethnography.
-------------------------------------------------------------------------------------
Research projects about oral feedback: Yun Deok, BoSun, Ping, Sorin and Myong Hee

(1) Which one can language learners rely on best, recasts or prompts, in relation to learners’ perception? by Yun Deok
Her study addresses the relative effectiveness of recasts vs. prompts for L2 learning in relation to learners’ perception of the feedback in classroom settings, on both a short-term and a long-term basis. Her research questions tackle the following issues: 1) the level of learners’ immediate uptake in response to recasts and prompts; 2) the level of learners’ uptake of recasts and prompts on a long-term basis; 3) differences between immediate and delayed posttest performance for recasts and prompts; 4) similarities and differences between teachers’ and students’ preferences for different feedback types across different linguistic items.
The participants will be students from HELP at UHM. The design will be both descriptive and experimental, adopting a treatment and pre-/posttests. The participants are divided into control and treatment groups, take the pre- and posttests, and go through the treatment, either recasts or prompts, between the pre- and posttests.

(2) Reexamination of sub-categories of recasts and learner uptake by BoSun
She assumes, based on the oral feedback literature (Sheen, 2007), that recasts constitute a continuum with explicit and implicit ends, and she is tackling the following issues: 1) Do the 4 different types of recasts adopted from Lyster & Ranta (1998) enhance the acquisition of L2 grammatical structures? 2) What characteristics of recasts lead to more learner uptake and repair? 3) Do the 4 different types of recasts result in different effects depending on the learners’ proficiency?
The research will be a descriptive study with two different levels (intermediate and advanced) of English classes (one from HELP and the other from the ELI) at UHM. The data will be analyzed using the coding scheme from Lyster and Ranta (1998), which sub-categorized recasts into 4 types: isolated declarative, isolated interrogative, incorporated declarative, and incorporated interrogative, depending on intonation (falling vs. rising) and the presence or absence of additional information (see the sketch below). The measure of acquisition is uptake, defined as the learner’s immediate response following the teacher’s recast (Lyster & Ranta, 1997). Uptake is further sub-divided into two categories: repair (correction) and needs-repair (acknowledgement of errors).
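
Just as a small illustration of the 2 x 2 sub-categorization described above (the labels follow the description in these notes; the code itself is purely illustrative, not part of the study):

    # Tiny sketch (illustrative only): intonation (falling vs. rising) crossed
    # with presence or absence of additional information yields the four
    # recast sub-types mentioned above.
    RECAST_TYPES = {
        ("falling", False): "isolated declarative",
        ("rising",  False): "isolated interrogative",
        ("falling", True):  "incorporated declarative",
        ("rising",  True):  "incorporated interrogative",
    }

    def classify_recast(intonation: str, has_additional_info: bool) -> str:
        """Map a coded recast to one of the four sub-types."""
        return RECAST_TYPES[(intonation, has_additional_info)]

    print(classify_recast("rising", True))  # -> incorporated interrogative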

(3) Implicit/explicit recasts, learners’ responses to recasts, and linguistic development by Sorin
She is examining 1) which type of recast (implicit vs. explicit) leads to more learner uptake and subsequently more linguistic development of the target structure, and 2) whether primed production in response to recasts occurs and, if so, which type of priming leads to more frequent primed production.
The participants are KSL learners in Korea. Her research design is quasi-experimental with a pre-/post-/delayed-posttest design. The target structure hasn’t been finalized yet, and she is considering relative clauses to be a good candidate. The coding scheme will follow the one from Lyster and Ranta (1998), and two of the four types of recast will be chosen as the implicit and explicit recasts. The measures will be uptake and primed production. Uptake will be operationalized as a student’s utterance immediately following the teacher’s feedback (Lyster & Ranta, 1997), and primed production is defined as a learner’s new utterance using the target structure form provided in the recast within six turns of the recast, adopting McDonough & Mackey’s (2006) definition.

(4) Organization of error correction sequences in form-focused classroom by Ping
Her research questions are: 1) What are the different types of corrective feedback in a form-focused classroom? 2) Does the classroom context influence students’ orientations to the corrective feedback? The participants are 14 students in a Chinese 101 class (beginner level) at UHM. The data were analyzed using CA. She has found that 1) teacher prompt followed by learner production was the predominant sequence; 2) other-initiated other-repair showed high frequency, whereas other-initiated self-repair and self-initiated other-repair displayed low frequency; and 3) self-initiated self-repair is rare.

(5) Investigation of small group interaction in a Korean university EFL classroom by Myong Hee
Her research deals with 1) types of collaborative learning and their distribution and 2) the level of uptake for each category, measured by repair and needs-repair. The participants are 24 students enrolled in a college English reading course. The data are tape-recorded small-group interactions (6 triads and 3 dyads) lasting 12-18 minutes each. Data analysis will combine quantitative and qualitative analysis. The quantitative analysis focuses on the distribution of various types of peer assistance, and the qualitative analysis will examine 1) co-construction, 2) encouragement of topic continuation, and 3) self-correction, using CA.


Tuesday, October 2, 2007

Weekly Reflection (W7) by Sorin

On Tuesday, we started with small group discussions of (1) two points from the article that we agreed with or thought to be important or valuable, (2) two points that we disagreed with or had reservations about, and (3) two points that went beyond the review (which the authors missed or could not have seen in 2001, when the article was published).

(1) Agreed or Important/Valuable points
The first group pointed out that the effectiveness of recasts is affected by the target structure of the study in relation to the developmental readiness of learners. In other words, whether learners are ready to acquire the target structure or not will affect the effectiveness of recasts. The second point was made about the types of recasts. Recasts can vary; they can be either explicit or implicit, and they can be provided with or without emphasis utilizing nonlinguistic cues (as Chaudron suggested). These differences can definitely have an impact on the students’ noticing of recasts and consequently on the effectiveness of recasts. The third point made was that investigating “private speech” as an indicator of students’ noticing of recasts (Ohta, 2000) was interesting. It could be an interesting measure of students’ noticing of recasts; however, no research has investigated private speech since Ohta.

Prompted by the third point, we started to discuss ways of assessing the effectiveness of recasts (or, in other words, the success or impact of recasts). We looked at the definition of “uptake” given by Lyster & Ranta (1997) on page 739, and found that uptake can be a very slippery term, since it covers a range of student responses from a simple acknowledgement of the recast to a repetition of the recast (which is called an “echo”) and student self-repair. Then we moved on to the measures used in L1 studies (on page 750). In early L1 studies, children’s imitation of the adult’s recasts was often sought as evidence of the effectiveness of recasts. However, later on researchers started to investigate the emergence of the targeted structure in children’s subsequent utterances. In L2 research, on the other hand, various measures were used, as follows:

- Interlanguage change: It was investigated mostly in laboratory studies, at least in short term period, through pretest and posttest design (sometimes with delayed posttest).

- Immediate reactions to recasts: uptake, repair, etc.

- Private speech: It was used only in Ohta’s study (2000), in which students were more likely to react to recasts when they were addressed to another learner or to the class than when they were addressed to themselves.

- Learners’ perception: Learner’s perception of recasts was investigated through stimulated recall (Mackey, Gass, & McDonough, 2000).

- Primed production: Primed production is the learner’s production of a new utterance using the target structure within a few turns after the recast, and it was first investigated by McDonough & Mackey (2006). In their study, the learner’s production within six turns after the recast was examined.

Another important point made in class was that the effectiveness of recasts can vary depending on the setting of the study as follows:

- Intensive vs. Extensive: Regarding the density of feedback provision (how many errors were corrected and how often those errors were corrected)

- Specific vs. Broad: Regarding the range of forms corrected (whether only errors on particular structures were corrected (Doughty & Varela, 1998; Ortega & Long, 1997) or a broad range of errors were corrected (Lyster & Ranta, 1997; Oliver, 1995))

- Proactive vs. Reactive: Regarding the existence of pre-selected target structures in studies (whether there was specific target structure to teach or no particular structure was pre-selected and thus correction was incidental to learner errors)

- Communicative vs. Formal (overall context): Regarding the nature of classroom setting (whether the nature of classroom is communicative, content-based (immersion), or formal; whether it is in foreign language context or second language context, etc.)

- Relational feedback by teacher vs. Detached or predominantly cognitive feedback: Regarding personal, affective, and social factors affecting the dynamics of interaction (whether fine-tuned feedback was provided in consideration of the sociocultural factors of the preexisting relationships, or predominantly cognitive feedback was provided disregarding these factors, most likely among participants with no prior relationship)

What constitutes positive evidence and negative evidence was the last important point made in class. Even though several studies (Iwashita, 2003; Leeman, 2003; Long et al., 1998; McDonough & Mackey, 2006) have tapped into this issue, no study has provided a review of this line of research. It seemed that recasts provide both negative evidence and positive evidence at the same time, but that recasts make the positive evidence more salient, according to Ortega. In addition, the effectiveness of recasts comes from both positive and negative evidence.

(2) Disagreement or Criticism
The first point addressed was that there is disagreement over the definition of recasts, which inevitably causes a comparability problem among recast studies. Also, the findings of L1 studies cannot be compared to the results of L2 studies (Sangki). In addition, no study has paid attention to the paralinguistic cues provided with recasts. Second, the narrative literature review of this article seemed limited in synthesizing the findings of previous studies, even though it was well written and helpful. This article can be regarded as an authoritative review written by renowned scholars. Yukiko suggested reading the last chapter of Mackey’s forthcoming book, and we will read *Russell & Spada (2006) as well. Kevin also mentioned that there seem to be more similarities among L2 recast studies than among L1 studies, disagreeing with what the authors suggested in the article.

(3) Beyond the review
No recast study has investigated the paralinguistic cues provided with recasts except Sheen (2006?), and thus we need to take paralinguistic cues into consideration in our studies as well, by analyzing oral or audio-visual data. In addition, the effect of students’ familiarity with the teacher’s teaching style on recasts was not much considered in previous studies. Chaudron's dissertation was the only study showing that teachers corrected learner errors less at the end of the semester than at the beginning, and some studies on motivation have shown that motivation fluctuates during a semester. Thus, it would be interesting to see how various aspects of recasts change as relational aspects of the classroom setting change. Third, L1 studies showed that as children grow older (in other words, as their proficiency develops), the provision of recasts decreases. Thus, it would be worth collecting classroom observation data across an entire curriculum and investigating how the amount of recasts changes as students’ proficiency develops. Another point, made by Myong Hee, was that since most recast studies have looked at NS-NNS interaction, it would be interesting to examine NNS-NNS interaction.


*Russell, J., & Spada, N. (2006). The effectiveness of corrective feedback for the acquisition of L2 grammar: A meta-analysis of the research. In J. M. Norris & L. Ortega (Eds.), Synthesizing research on language learning and teaching (pp. 133-164). Philadelphia: John Benjamins.


-------------------------------------------------------------------------------
Ellis, R., & Sheen, Y. (2006). Reexamining the role of recasts in second language acquisition. Studies in Second Language Acquisition, 28, 575-600.

Iwashita, N. (2003). Negative feedback and positive evidence in task-based interaction: Differential effects on L2 development. Studies in Second Language Acquisition, 25, 1-36.

On Thursday, we started with a small group discussion of two things that were already brought up in Nicholas et al. (2001) and two things that were forward-looking agendas in Ellis & Sheen (2006). After the group discussion, we had a whole-class discussion. Then Noriko Iwashita came in and told us about her study (Iwashita, 2003) and her experience of conducting studies on error correction: difficulties, concerns, and helpful tips.

(1) Issues already brought up
The first issue pointed out in both articles was that recasts are ambiguous, since they are not always noticed by learners as corrective feedback, and thus they are considered less effective than other types of feedback moves. However, the next question raised by Ortega was: does the fact that learners miss the corrective function of recasts (i.e., do not perceive them as corrective) really mean that they do not perceive them at all? Ellis and Sheen claimed that “whether recasts afford positive or negative evidence is tied up with how learners interpret their illocutionary force” (p. 585). Put another way, if learners do not interpret recasts as corrective, then recasts only serve as positive evidence. On the other hand, if learners perceive them as corrective, then they provide negative evidence. We agreed that the learner’s overall orientation toward interaction is important. However, our conclusion was that recasts do not necessarily need to be perceived as didactic to be regarded as a source of negative evidence. Definitional differences in recast studies were also addressed in both articles. Not only definitional differences but also the variety of recasts (different types of recasts) was mentioned, although Ellis & Sheen provided a more expanded discussion of this issue. Besides, the two articles dealt with the role of uptake: uptake cannot be evidence of acquisition even though it can be evidence of noticing.

(2) Forward-looking agendas
How to define implicit or explicit recasts and how to operationalize the degree of explicitness was the first agenda item pointed out. Ellis & Sheen pushed this issue forward by discussing different types of recasts. Second, Ellis & Sheen elaborated on the importance of a sociocognitive perspective, which was also discussed in Nicholas et al. We will read some chapters on this issue from the Hyland & Hyland (2006) book later on. Third, they further expanded the argument about the effectiveness of recasts for acquisition and suggested that we should consider not only whether recasts facilitate acquisition but also when and how they do so. Adding to this claim, Ortega proposed that we should also compare the effect of recasts with that of other types of feedback moves in pursuing this issue of not only “whether” but also “how” and “when.” There are variations within each corrective feedback move, and it could be the case that types of feedback do not really matter. What really matters and what requires further investigation may be some other level of abstraction, such as the degree of explicitness, cutting across different moves instead of classifying different moves. Fourth, Ellis and Sheen seemed to conclude that explicit correction is more effective than recasts and that the more explicit a correction move is, the better. With this claim in mind, Yundeok raised a question: if explicit correction only leads to building explicit knowledge, what about implicit knowledge? Ellis, however, didn’t really address this issue in the article, and he seemed to be concerned only with the acquisitional benefit of recasts in a general sense. Fifth, learners’ orientation to discourse (whether learners see language as an object or as a tool to convey meaning) was more elaborated in Ellis & Sheen. It is possible that, if learners’ orientation is toward accuracy, they will perceive recasts as corrective even when the context is communicative, as Lyster (2006, 2007) suggested.

In addition to these agendas, several important and interesting issues were discussed. With regard to target structures, Nicholas et al. concluded that recasts are more effective with already known forms (pp. 730, 752). On the other hand, Lyster hypothesized that recasts are more effective with new forms that haven’t been learned yet, and that prompts (or models?) are more effective with already learned forms. Thus, further investigating this issue and confirming either hypothesis would be worthwhile. Another interesting claim Kevin brought up was that fewer recasts can make them more salient, which was also suggested in Nicholas et al. (pp. 743, 728). In L1 child acquisition, parents rarely recast their children’s utterances, thus making recasts easy for children to notice. However, in L2 classrooms, recasts are offered so frequently that they become less “marked” instead of salient. Furthermore, parents recast incorrect utterances more frequently than they repeat correct utterances, and children repeat corrective recasts more than mere confirmations of their utterances (Nicholas et al., pp. 726, 729, 740, 751). Therefore, it would be interesting to examine the frequency issue as well as uptake after corrective versus non-corrective utterances. As a last issue, Yundeok pointed out that studies haven’t really looked at teachers’ perception of recasts but only at learners’ perception, except for Nabei & Swain (2002), which we will read later on. Adding to this comment, Ortega suggested that it would be best to include pre-posttest gains, learners’ (and possibly teachers’) perceptions, and discourse data in recast studies. Up to now, studies have examined only one of these in isolation, with the sole exception of Iwashita (2003), who included pre-posttest gains and discourse data. Besides, Ortega mentioned that primed production, as shown in excerpt 1 (p. 576), would be a new benefit of recasts, the kind of evidence researchers were looking for.

(3) Noriko Iwashita
The hardest part of conducting studies on error correction for her was coding and analyzing the data, particularly positive and negative evidence, while reaching high intercoder reliability. In her study, she defined positive evidence as instances in which the NS initially used the target structures or vocabulary, or followed a NNS’s targetlike or incomplete utterance by providing a target model of the structure in focus. Selecting an appropriate target structure for beginning JSL learners was difficult as well; she was looking for a new structure, but the learners had already learned most of the structures, even though they were only in their second semester of learning Japanese (less than 20 weeks of instruction in pseudo-communicative classrooms). She also faced a problem with developmental readiness for the progressive –te verb form. While some of the students understood it as a structure, other students had learned it as words (like chunks), which created another problem when she coded the data. She also advised us to consider in advance what kind of statistics to use for data analysis and to report individual data in our studies as well.

To aid understanding, the following is a brief summary of her study. In an experimental study, she investigated the role of task-based conversation in the development of two Japanese structures (the locative-initial construction and a progressive verb form) by 55 L2 learners of Japanese, focusing on positive evidence and negative feedback. What made her study special was that she not only examined pretest-posttest gains but also analyzed the interaction data. Analyzing the interaction data enabled her to identify three types of positive evidence (completion, translation, and simple model) and two types of negative feedback (recasts and negotiation) provided by the NS interlocutors during interaction. Among the moves, models were the most frequently provided, followed by recasts. Task-based conversation proved to be effective for the JSL learners learning the two target structures. However, mixed results were found regarding the effectiveness of positive evidence and negative feedback. Models (positive evidence) were found to be effective for the locative construction only for the learners with above-average pretest scores (at a threshold level of proficiency), while recasts were effective for the progressive –te verb form regardless of learners’ current mastery of the target structures.

Thursday, September 27, 2007

Weekly reflection (Week 6) by Sang-Ki

We covered four studies this week: Oliver (1995), Lyster & Ranta (1997), Ortega & Long (1997), and Doughty & Varela (1998).

Comparability is always a thorny issue, but we could still be better informed about the role of feedback by trying to compare each study’s design and main findings side by side. The following summary should be insightful in that sense:

-----------------------------------------------------------------

Oliver (1995)

Laboratory study

Descriptive study (No particular focused targets; No causal explanation)

* Ss: 8-13-year-old ESL learners (16 dyads)

* Two feedback types in focus: Negotiation vs. recasts

* Main findings: Negative feedback was given in response to 61% of the students’ error moves (that is, 39% of error moves were ignored). Negotiation tended to be used when the meaning was opaque to the NS interlocutors, whereas recasts were common when the meaning was transparent but the form was problematic. “Negotiation seems to serve to make the picture clear, whereas recasts are like straightening the picture on the wall” (p. 473). NS responses tended to be affected by several factors, such as the type and complexity of learner errors. Learners seemed to successfully incorporate negative feedback (35% of recasts were incorporated).

Lyster & Ranta (1997)

Classroom study

Descriptive study (No particular focused targets; No causal explanation)

* Ss: 10-year-old content-based French immersion learners

* 20 hrs classroom observation, 4 classrooms, 4 teachers

* 6 types of feedback and subsequent uptake moves (4 types of repair & 6 types of needs-repair) were in focus

* Main findings: Recasts were used most often (55% of error turns induced teacher recasts). 69% of recasts went unnoticed, only resulting in topic continuation. Of the remaining 31% of recasts that led to student turns with uptake, 18% and 13% (of all recasts) resulted in repair and needs-repair turns, respectively. Compared to recasts (which only led to simple repetition of the previous feedback turn), elicitation and metalinguistic feedback were more effective in that they could cause student-generated repair. More important is the observation that elicitation and metalinguistic feedback did not interfere with the flow of communication.

Ortega & Long (1997)

Laboratory study

Quasi-experimental study (Particular focused targets)

* Other related studies: Long et al. (1998), Inagaki & Long (1998)

* Ss: 3rd-semester Spanish learners (low-intermediate adult learners) (30 dyads)

* Two feedback types in focus: Recasts (negative feedback) vs. models (positive feedback)

* Targets: Object topicalization & Adverb placement (Both were previously unknown structures)

* Pre-posttest control group design

* Main findings: Neither type of feedback brought about significant learning of object topicalization. Recasts were more effective than models for learning the adverb placement rule.

Doughty & Varela (1998)

Classroom study

Quasi-experimental study (Particular focused targets)

* Ss: 11-14-year-old content-based ESL learners

* Targets: past tense –ed & conditional would

* One feedback type in focus: Corrective recasts

* Pre-post-delayed posttest control group design

* Main findings: The focused recasts led to substantial gains on oral-mode tests, and the beneficial effects were maintained 2 months later. Gains on written-mode tests were less robust. FonF is feasible, but it should be brief, immediate, focused, and not overused.

-----------------------------------------------------------------

After outlining the contrasting features of the four studies, we focused more closely on Ortega and Long’s (1997) quasi-experimental laboratory study. We became aware of the detailed experimental procedures by listening to actual task samples. (It was of particular interest to see that the GJT used by the researchers was not a conventional, decontextualized one.)

In light of the study findings, even though the two targets were presumed to be at the same developmental stage, only the adverb placement rule was learned when recasts were provided to the Spanish learners. The object topicalization rule might have been too difficult for the expected learning outcomes to be observed. By contrast, the adverb placement rule, which is more related to lexical items, could have been more learnable. What was interesting from the follow-up interview data was the fact that some learners, although they could not reply to the question as to what they had actually learned overall, tended to state the adverb placement rule accurately, indicating that noticing of the rule truly occurred during task performance.

The study design could have been improved by including delayed posttest measures. Also, rather than the repeated-measures design, a fully crossed design with two feedback types and two target structures would have enabled us to have a clearer understanding of the roles of the two feedback types.

On Thursday, we focused on two of the four studies. Oliver (1995) and Lyster & Ranta (1997), which we covered, are both descriptive recast studies. In pairs, we tried to find answers to the following five questions. I am including the answers and ideas we shared in the whole-class discussion:

1) How frequent was negative feedback in each study?
* Oliver (1995; hereafter O): 38.82%
* Lyster & Ranta (1997; hereafter LR): 62%
* Different task conditions as well as idiosyncratic participant characteristics of the two studies may have resulted in this huge difference in the amount of feedback. For example, in the case of Oliver (1995), it was kids who gave feedback to their peers and the study was laboratory-based, which might explain the smaller reported amount of negative feedback.
* LR: There seems to be a difference across teachers in the amount of negative feedback.
* O: Providing raw frequency data (instead of percentage values) would have been more desirable (see the sketch below).
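
A minimal sketch of the kind of reporting meant here (with made-up coded feedback moves, purely illustrative): tally raw frequencies first and derive the percentages from them, so readers can see the denominators behind figures like 38.82% or 62%.

    # Minimal sketch (made-up coding, for illustration only): report raw counts of
    # feedback moves alongside the percentages derived from them.
    from collections import Counter

    coded_moves = ["recast", "negotiation", "ignored", "recast", "recast",
                   "negotiation", "ignored", "recast"]  # one label per error move

    counts = Counter(coded_moves)
    total = sum(counts.values())
    for move, n in counts.most_common():
        print(f"{move:12s} {n:3d} / {total}  ({100 * n / total:.1f}%)")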

2) How was negative feedback provided (how many different ways, what range of explicitness) in each study?
* O: 2 types; negotiation and recasts
* LR: 6 types; recasts, elicitation, clarification request, metalinguistic feedback, explicit correction, repetition; these 6 types of feedback may exist on the implicit-explicit continuum.
* It came to be acknowledged that recasts may take an explicit form.

3) What evidence does each of the two studies consider in order to talk about "effectiveness” of negative feedback? (Do they talk about “effectiveness,” and if so what arguments do they consider?)
* LR: Effectiveness is discussed in terms of the extent of uptake and repair. In particular, student-generated repair, which is different from uptake and simple repair, is important in judging the effectiveness of negative feedback.
* O: Feedback is available and usable. Negative feedback seems effective in that learners tended to incorporate the NS’s feedback in subsequent turns (e.g., 35% of recasts were incorporated).

4) How did participants respond to negative feedback in each study? (what did they do with it, if anything?) -- Skipped

5) What did each study have to say about type of error?
* O: Detailed categories of grammatical errors were identified. Recasts were significantly more common than negotiations for errors involving singular and plural forms and subject-verb agreement. For the other categories (e.g., aux/copula, pronoun, word order/omission, word choice, no subject), negotiations were the preferred feedback option.
* LR: Grammatical errors (50%) were the most common error type, followed by lexical (18%), phonological (16%), and L1-related errors (16%). Lexical and phonological errors (80% and 70%) induced teacher feedback more often than grammatical and L1-related errors (56% and 43%). This reminded us of the findings from Mackey et al.’s (2000) study, pointing to issues of salience and correctability for each type of error. Recasts were the preferred option for most error types, whereas negotiation of form was more common than recasts for lexical errors.


Sunday, September 16, 2007

Reflection on Thursday, September 13th, by Yun Deok Choi

Fortunately, this reflection will be a very short one compared to the last one ;).
On Thursday, we met in the computer lab (155b) at 10:30.
Sang-Ki, David, and Bo-sun led the whole lab session, helping us to create new pages where we could upload our own bibliographies for the research paper and to link each page to the home page.

First, we made a list of each classmate’s research topic of interest on the main page, as follows:
References
Indirect error correction and its effect on grammar in L2 writing
Peer feedback (Oral)
Relative effectiveness of prompts versus recasts in classroom
Repair in CA
Error feedback in L2 writing: Focusing on vocabulary
Learnability in SLA and overpassivization errors
How different types of interactional feedback lead to L2 development
Role of noticing in interactional feedback
Implicit error correction and CALL
Relationship between recast and learner's response
Responses to different types of recasts and L2 development

Then each classmate created his/her own page with a title indicating the topic of their references and linked it to the home page.

On the new page, each person wrote a sentence like “This list of references initially posted by…” as Dr. Ortega suggested. Then we uploaded our bibliographies. At that point we ran into one technical problem: several of us had brought our bibliographies in Word files on removable disks, and when we copied and pasted them, the Word formatting was destroyed and we had to redo it. Some classmates kept asking “What did you do?” as they encountered unexpected results; apparently, when two or more people work on the site at the same time, that kind of accident can occur. As a result, the format of each classmate’s bibliography is not uniform. Maybe we should work on it more. That’s all we actually did.
Thank you for your help, Sang-ki, David and Bosun.

Reflection on Tuesday, September 11th, 2007 by Yun Deok Choi

Truscott, J. (1999). What’s wrong with oral grammar correction. The Canadian Modern Language Review, 55, 437-456.
Lyster, R., Lightbown, P. M., & Spada, N. (1999). A response to Truscott’s ‘What’s wrong with oral grammar correction.’ The Canadian Modern Language Review, 55, 457-467.

At the beginning of the class, we assigned each classmate a week in which he/she would write a reflection on classroom activities. After deciding that, we moved on to talking about Truscott, the author of the article “What’s wrong with oral grammar correction.” First, Dr. Ortega asked Hung-Tzu about his background and career, since she had taken his English class in Taiwan. According to Dr. Ortega and Hung-Tzu, John Truscott is originally from the USA and has lived in Taiwan for over ten years. Recently he has become interested in cognitive perspectives such as working memory, and he has written a meta-analysis on L2 writing that will be published within this year. In addition, he has published a couple of articles with Michael Sharwood Smith on interlanguage development. Aside from the present article, he has also written articles arguing against error correction in L2 writing, as well as articles on noticing.

Before the group discussion of the two articles, Dr. Ortega pointed out that Truscott takes the theoretical position of Krashen and Universal Grammar toward SLA in the article, and she gave some explanation of Krashen and his academic point of view on SLA.

For about 10 minutes we exchanged our impressions and thoughts about Truscott’s (1999) article and the response by Lyster, Lightbown, and Spada (1999) in small groups, with the goal of formulating meaningful research questions based on the two articles. During the whole-class discussion, several students presented their positions on whether they agreed with Truscott or with Lyster et al. First, Kevin said that he could not completely agree with either article. Sorin agreed with Truscott’s assertion to some extent from a teacher’s perspective, but she was cautious about his extreme all-or-nothing position against error correction. Yun Deok agreed with Truscott’s concern about the affective consequences of excessive error correction. On the other hand, Myunghi stated that she is in favor of error correction, mentioning her Japanese class in which the teacher always provided correction and she had learned a lot.

Dr. Ortega pointed out the “affective” versus “effective” issues in the article, noting that Truscott claimed that error correction is not only ineffective but also harmful to learners’ language learning. In terms of the effectiveness of error correction, Dan mentioned the feasibility of research on error correction. Then Yuki touched upon how we should think about error correction in relation to educational purposes and contextual factors, especially from the perspective of critical pedagogy.

We criticized Truscott’s confusing and contradictory stance: he confined his argument to grammar correction while saying that “a similar case could be made for other types of errors (e.g., in pragmatics or pronunciation),” and then at the end he suddenly shifted to “the issues involved in correction of errors in pragmatics or pronunciation, for example, differ in some respects from those I have considered here, so my conclusion should not be casually extended to those areas.” As for this about-face, David mentioned that it might have something to do with the process of editing the article. ;)

With respect to this contradictory position, Kevin brought up the term “consistency”; specifically, he wondered whether correction is effective if teachers provide it consistently. Ping also criticized Truscott’s assertion that error correction appropriate for one student might not be proper for other students, pointing to individual learner variables. Dr. Ortega also expressed sharp criticism of Truscott’s sweeping assertions and his brief, selective treatment of relevant studies to serve his own argument. She also pointed out that what Truscott called negotiation of form refers to both explicit error correction and explanation, as Sang-ki noted with respect to the implicitness and explicitness of error correction in Truscott’s view.

With respect to the studies that Truscott mentioned in the article, Dr. Ortega explained Roberts’s study, which appears in a 1995 book edited by Dick Schmidt. The study involved five learners of Japanese as an L2; Roberts videotaped a one-hour class and then asked the learners, while they watched the tape, what kinds of errors they had made and what they understood about the corrections the teacher had provided. The findings were that students were largely unable to recognize the error corrections or to identify what the corrections were about. According to Dr. Ortega, it is a very early study of whether students notice error correction and understand it. Mackey, Gass & McDonough’s (2000) article in SSLA and Carpenter, MacGregor, & Mackey’s (2006) article in SSLA are very similar to this study in topic and research method. From this perspective, Dr. Ortega posed a question: Do students have to notice error correction and understand the nature of the correction in order to benefit from it? In answering the question, she cited Dick Schmidt’s claim that noticing is necessary but understanding is not.

In this vein, she mentioned “emergentism,” and I consulted the Longman Dictionary of Language Teaching and Applied Linguistics (2002) to find the exact definition of the term. I hope this will help you understand what it means. According to the dictionary, “emergentism” refers to the view that higher forms of cognition emerge from the interaction between simpler forms of cognition and the architecture of the human brain. For example, in language acquisition, it has been proposed that categories such as the parts of speech are not innate but emerge as a result of the processing of input by the perceptual systems (Richards & Schmidt, 2002, p. 177). This point of view leads to studies like McDonough’s (2006) work on interaction and syntactic priming.

We also talked about DeKeyser’s (1993) study, which was mentioned by Truscott. DeKeyser selected several intact classes and compared them over a whole semester or year. Dr. Ortega stated that it is a pioneering study because it was the first to examine error correction in conjunction with motivation.

After explaining the above-mentioned articles, she also asked us to think about whether, from the teacher’s perspective, error correction should be done across all aspects of language or on one specific area at a time.
We dealt with this question in terms of the following aspects:
-simple vs. complex
-core vs. peripheral
-ready vs. unready

She also posed the following question: Should we correct errors whenever they occur, or should we plan in advance to provide correction for a specific aspect of language? She stated that the answer depends on which position we take. If we are believers in “incidental, reactive, on-the-fly” correction, we provide more immediate correction; on the other hand, if we are believers in a “metalinguistic process (understanding)” view, we provide delayed correction.

She also explained that, in terms of correction of written versus oral production, in writing studies such as Ferris (1999, 2002, 2004) and Hyland (2006), error correction was provided in response to more general, overall aspects of language, with the exception of Sheen’s (2006) study. In contrast, in oral studies such as Doughty and Varela (1998), error correction was provided in response to a specific language structure.

As for the question of how to select errors to be corrected, she cited Mike Long’s suggestion that we should consider the following factors:
-useful
-remediable
-pervasive
From this perspective, she also advised us to decide, when designing our own research, whether we will deal with overall error correction or with correction of a specific target.
She also added that if we concentrate on a certain area, we should gather what is known about that area. For example, if we want to investigate English morphemes, we should know that learners acquire the past tense “-ed” before they acquire the third person singular “-s.”

Lastly, Dr. Ortega refuted Truscott’s criticism of Doughty and Varela’s (1998) coding scheme. He had claimed that their study did not consider learners’ overuse of target forms; however, as we carefully analyzed the coding scheme together, we found that overuse was clearly embedded in it. Dr. Ortega praised the scheme for its interlanguage-sensitive quality and the study for its task-essential properties, meaning that the task provides many obligatory contexts in which learners need to use the target forms. In addition, she advised us, when we do our own research, to have solid knowledge of the selected form, to come up with this kind of well-designed, feasible coding system, and to concentrate on a couple of target forms rather than a single target.

As for suggestions for the wiki, she advised us to post a couple of definitions of error correction from today’s articles on the site, and to feel free to use it as our own notebook.
On Thursday we will go to the computer lab and work on the wiki project, since Dr. Ortega will be in Japan attending a conference. We should also upload our bibliographies to the wiki by Thursday. There will be no classes next week because of the TBLT conference.

Saturday, September 8, 2007

Commentary Tues, September 4th, 2007 (Y. Watanabe)

We started off by talking about the importance of footnote chasing while reading the articles. Footnote chasing is a research strategy for locating key resources on a topic by searching the reference sections of papers. It will be easier to retrieve relevant literature for your own study if you flag footnote-chased articles and take notes. From my experience conducting a meta-analysis, I would also recommend adding notes and keywords in the EndNote software; accumulating those notes and keywords in EndNote makes a graduate student’s life much easier when it comes to writing a literature review. So far, each classmate has reviewed two articles. Among the reviewed articles, we were cautioned not to cite Frank Morris’s work.

For the rest of the class, we engaged in a mini error-correction task. We first listed the types of teacher written feedback on students’ writing and the types of error analysis done in research. The following categories were identified.

Teacher

Types of feedback:

  1. direct feedback (making correction with/without explanation)
  2. indirect feedback (marking the location and/or type/nature of the error, clarification request)
  3. metalinguistic feedback (explanation of the error)

Modes and manner of feedback:

  1. conferencing, peer response
  2. paper vs. electronic

Researcher

- Overall accuracy (nature of the error; all errors are worthy of focus)

- Specific error:

  • article
  • verb morphology (tense, aspect, subject agreement)
  • preposition


The class was divided into six groups, each taking the role of either a teacher (direct FB and indirect FB groups) or a researcher (overall accuracy, article error, verb morphology error, and preposition error groups), to analyze the errors in a student’s writing sample. In our groups, we identified errors and discussed the difficulty of providing feedback on, or analyzing, the errors.
Summarized below are the key takeaways from the discussion:

Teacher difficulty from direct group:
  1. It’s difficult as a teacher to provide feedback without knowing what stage of drafting the writing is at, the learning objectives, and the learner’s proficiency and background.
  2. Form vs. content
    - Making a distinction between lexical and grammatical errors.

    - Distinguishing local versus global errors.
  3. Different teachers focused on different errors (very erratic).

Teacher difficulty from indirect group:

  1. Knowledge about the content of the writing (e.g., history in our writing sample) may be needed to accurately identify verb tense errors (e.g., past perfect). Personal narratives in particular will be difficult to correct, since making corrections may change the content. Sometimes we need to ask clarification questions instead of making direct corrections.
  2. Uniformity of coding: people use different coding systems.

Researcher difficulties

  1. Where does the error begin and end?
  2. How can you clearly classify errors? (difficulty of form versus content)
  3. It’s difficult to determine what the nature of the error is. What do you do with idiomatic errors that are grammatically correct?
  4. It will be hard to determine overall accuracy for intermediate-level students’ writing. The more complex learners’ sentences become, the more difficult it is to define the nature of an error.

Through the mini error-correction task, I learned how difficult and time-consuming it is, as a teacher or as a researcher, to truly understand the linguistic (local) errors in students’ writing. I am curious about teachers’ and researchers’ decision-making processes in error identification and classification.

For next Tuesday:

Read Truscott (1999) and Lyster et al.’s (1999) commentary.

Reminder for next Tuesday:

“Verb morphology error group” needs to give a short summary on the tense errors (past perfect).

Thursday, September 6, 2007

Commentary on Sep. 6

Thursday: meet at PC lab and work on wiki

We first googled "second language acquisition" and opened Wikipedia's SLA page.

David: Please take a look at how it is structured. Browse the Wikipedia page. See what's included, what's missing, and what other information is out there.
Sangki: I'm going to show you how to edit information and use the page history.

That went on for a few minutes as Sangki explained how it worked.

Next, we went to the SLS 750 wiki homepage. There was nothing there... we had to do SOMETHING with it.

Sangki taught us how to make a new page, create a link, make an external link, and then link everything together. After the lesson, we gave it a try and played with it for 10 minutes.

Then we started creating the first page for our wiki. Brainstorming the table of contents took quite a while. Here's the list we came up with:

1. Definition of error feedback
2. Types of error feedback
oral vs. written
teacher vs. learner-initiated
implicit vs. explicit
feedback on form vs. content
feedback on oral vs. written language
offline vs. online
intensive vs. extensive
reactive vs. proactive
group vs. individualized
(David was busy typing everything in.)
It was when we were about to make the third item that we started having trouble with bullets, numbering, and all that stuff. We decided to clean it up another time.

We went back to the first one on the list: definition of error feedback, trying to create a page for it. Here's what we wrote:

Error feedback is a reaction to students' interlanguage performance.

Then, Sangki suggested everyone edit the page, add their own information to it, and make new pages, just to see what happened when people were editing it simultaneously. That kept us busy for 5 minutes. Sangki walked around and asked us NOT to change the main page.

Final goal of the day: create a new page, make a link, and save it.
Thanks to Sangki and David for going over the basics with us.
Now you can check out our wiki website and add your comments!

Tuesday, September 4, 2007

Yongyan & Flowerdew (2007): Reviewed by Y. Watanabe

Yongyan, L., & Flowerdew, J. (2007). Shaping Chinese novice scientists' manuscripts for publication. Journal of Second Language Writing, 16, 100-117.

Error correction in writing does not happen only in language classrooms but also outside them, for example when attempting to publish a research article in an international journal. According to Li (2005, as cited in Yongyan & Flowerdew, 2007), many Chinese doctoral students in science programs are under pressure to publish papers in journals indexed by the Science Citation Index, which are international journals predominantly published in English.

When a research article is published, various stakeholders interact in shaping the manuscript, so the written product in the journal is often considered a co-constructed artifact. Yongyan and Flowerdew (2007) uncover the roles of supervisors, peers, and language professionals in 12 Chinese (English-as-an-additional-language) doctoral science students’ experience of submitting and publishing research articles in English. Interviews, emails, and weblogs were used to collect the doctoral students’ perceptions of feedback from their supervisors, peers, and language professionals, as well as the supervisors’ views on the type of feedback they provide. The researchers found that although the doctoral students prefer native English speakers’ feedback, for reasons of cost and accessibility, experienced local English-as-an-additional-language scientists are the predominant shapers of the students’ manuscripts. The study suggests a need for partnership between English-for-academic-purposes professionals and Chinese scientists who have experience publishing in international journals, in order to support the local scholarly community.

I chose this article because my colleagues and I recently submitted manuscripts to international journals, and I was curious how other junior scholars perceive, incorporate, and/or reject feedback from peers and senior scholars. Since submitting a manuscript to a journal is a high-stakes task, I am particularly interested in how junior scholars negotiate their writing with local reviewers (supervisors, peers, etc.) and with the gatekeepers of the journal (the editors and manuscript reviewers), and how they gradually acculturate into the community of practice (Lave & Wenger, 1991) of their discipline.

In my view, Yongyan and Flowerdew did not summarize and present the data in a fully convincing manner. Their interview questions in the Appendix covered much more than what they summarized and concluded. For this reason, I would not recommend this article for review in class, but there are a few studies mentioned in it that include more in-depth data. From a quick review of the reference list and the abstracts of the cited articles, the following articles may be of interest to those of you who are looking at how novice writers gain access to (and enter) academic disciplinary literacy practices.

Li, Y.-Y. (2007). Apprentice scholarly writing in a community of practice: An “intraview” of an NNES graduate student writing a research article. TESOL Quarterly, 41, 55-79.

Li, Y.-Y. (2006). Negotiating knowledge contribution to multiple discourse communities: A doctoral student of computer science writing for publication. Journal of Second Language Writing, 15, 159-178.

Heift, feedback in CALL, and 'uptake'

Heift, T. (2004). Corrective feedback and learner uptake in CALL. ReCALL, 16, 416-431.

Since this study was conducted several years ago, its aim at the time was to help fill the gap in (the dearth of) research on corrective feedback in CALL (computer-assisted language learning). Participants (177 beginning, high-beginning, and intermediate students of German at three Canadian universities) engaged in online grammar activities that supplemented their regular class sessions over the course of one semester. The online exercises involved three types of feedback: metalinguistic, metalinguistic + highlighting (of the error), and repetition + highlighting. (This last category is a little vague, since 'repetition' is actually a broad category prompt, such as "grammar," which helps students identify the type of error they committed.) A toy sketch of what these three feedback types might look like appears below.
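
To make the three conditions a little more concrete, here is a toy sketch of how a CALL exercise might render each feedback type for a detected error. This is purely my own illustration; the German sentence, the error information, and the messages are invented, and Heift's actual system parsed learner input and generated its feedback automatically.

```python
# Toy illustration of the three feedback conditions described above.
# Not Heift's actual system; sentence, error span, and messages are invented.

def generate_feedback(sentence: str, error_span: tuple, category: str,
                      explanation: str, condition: str) -> str:
    """Return a feedback message for one of three hypothetical conditions."""
    start, end = error_span
    # Mark the error location in the sentence, e.g., "... [gelest]."
    highlighted = sentence[:start] + "[" + sentence[start:end] + "]" + sentence[end:]

    if condition == "metalinguistic":
        # Explanation of the error, with no visual cue to its location
        return explanation
    if condition == "metalinguistic+highlighting":
        # Explanation plus the error location marked in the sentence
        return f"{highlighted}\n{explanation}"
    if condition == "repetition+highlighting":
        # Only a broad category prompt (e.g., "grammar") plus the marked location
        return f"{highlighted}\nCheck: {category}"
    raise ValueError(f"unknown condition: {condition}")


if __name__ == "__main__":
    sentence = "Ich habe das Buch gelest."   # invented learner sentence ("gelest" for "gelesen")
    error_span = (18, 24)                    # character span of "gelest"
    explanation = "The past participle of 'lesen' is irregular: 'gelesen'."
    for cond in ("metalinguistic",
                 "metalinguistic+highlighting",
                 "repetition+highlighting"):
        print(f"--- {cond} ---")
        print(generate_feedback(sentence, error_span, "grammar", explanation, cond))
```

Even in this toy version, it is obvious how little information the repetition + highlighting condition carries compared with the metalinguistic conditions, which is partly why the vagueness of that category bothered me.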

The goal of the study is to discover which type of feedback leads to higher instances of uptake, here defined as any attempt by a student to correct his or her mistake. (Note that students always had the option to skip ahead to the next exercise without making any correction whatsoever.) Results show that metalinguistic + highlighting is "most effective at eliciting learner uptake," though not in a statistically significant way. Additionally, the two learner variables of gender and language proficiency did not have a significant effect on the results.

OK, now that that's out of the way. This was an interesting study to read, personally, as I'm also fiddling around in this very same area. The organization of the article was clear, and the statistics and charts were all very comprehensible. What raises my hackles, though, is the central question this article is asking. While there is value in showing that students prefer or attend to one type of feedback over another (and only three types of feedback were studied here), in the end I wind up asking myself, "So what?" — especially when the definition of "uptake" means merely attempting to correct a mistake when the computer is telling you, 'Hey, you made a mistake.'

Personally, I wanted to see what kind of long-term uptake occurred, but that was not of immediate interest to the researcher. I also kept asking myself what the value of being told 'you made a mistake with the past participle' is when students aren't asked to do anything further with that mistake other than type in something else and have the computer check the answer. Sure, it beats what is possible in a workbook, but I'm skeptical of how much real uptake is happening here. I would have much preferred to see how this kind of explicit feedback stacks up against an implicit variety where students have to judge whether the meaning of what they've said/written is interpreted by a 'listener' as what they meant to say. Hey, wait a minute: that sounds an awful lot like what I've been tinkering with myself... I just have little faith that rewriting a word because you've been told the form is wrong leads to anything substantial in the way of SLA. I may be wrong.

Should we read this for class? Probably not. It was good for me and what I'm studying, but it doesn't have a lot of class-wide appeal, I'm guessing.