Category Archives: Linguistics

Preparing Teachers to Teach Listening

Photo: Ian Britton, http://www.freefoto.com/images/9910/12/9910_12_2253—Stop-Look-Listen-Sign_web.jpg

In my last post (Discovering Field’s Diagnostic Listening Instruction) I explained how I am approaching listening instruction with both a listening class and a teaching listening class using Field’s Diagnostic Listening Instruction.  In this post, I want to focus on what I do with the Teaching Listening class.  I’ll include a good deal of the materials as well as some design tensions that have arisen in the past and how I’m trying to deal with them now.

My main goal is to focus students on modifying texts (audio) and tasks to best assess for gaps in listening skills and to provide skills training to fill those gaps.  In doing so, we focus largely on Field’s Decoding and Meaning-Building Processes.

Chapters 2-4 discuss beginning, intermediate, and advanced level learners (in addition to other topics mixed into each chapter).  These chapters provide a good jumping-off point for discussing text & task modification for diverse learners.  Students in the class have to consider learner abilities at each level (the ACTFL Guidelines are a helpful framework).  They then have to analyze texts (audio) for potential difficulties that learners may encounter.  Doing this for imaginary learners is less than ideal, but this lack of authenticity is addressed later in the course (see below).  These analyses then inform how the texts and tasks are implemented in instruction.

The activities/lessons that arise out of these analyses are rather predictable.  Students tend to focus on aspects of background knowledge, vocabulary, speaker dialect and speed, number of speakers, background noise, and so forth.  This is when the students usually have to be pushed to refer to the decoding and meaning-building processes.  This takes them out of their comfort zone (based on their own learning experiences) and requires them to think about a wide range of processes that inform listening.  Follow-up assignments that require referencing the processes list tend to show a greater variety of modifications and task types.

Take the following example.  You have a group of largely low-level English language learners.  Through initial assessments of their listening comprehension, you have found that many are unable to distinguish certain phonemes, they have difficulty finding word boundaries (isolating individual words in multiword utterances), and they have difficulty understanding many dialects that differ noticeably from the North American dialects that they have grown used to.

Knowing this about the learners, you have to choose appropriate texts and tasks to address these gaps.  While you certainly can address more than one at a time, it might be helpful here to isolate our learning objectives.  Let’s take the word boundaries issue first.   We should be addressing this specific performance gap and the processes that can help: stress-timed rhythm, stressed and unstressed words (content vs. function), pronunciation of unstressed syllables, common features of connected speech (linking, blending, elision, etc.), and so forth.

Text: Given the objective, the focus should be more on listening to each word.  In order to do this, it would probably be best for the text to be naturally spoken by a familiar speaker (teacher) or in a familiar dialect.  The text should feature content and vocabulary/expressions that learners are largely familiar with.

Task: The task is focused on these listening processes.  Teachers can explicitly teach some aspects, like the features of stress-timed rhythm, or these aspects can be gleaned through exposure to the language (likely mixed with some guidance by the teacher).  The tasks, however, should be focused on a particular learning objective.  For stress-timed rhythm, students can be asked to mark all of the stressed words in phrases, sentences, or paragraphs.  That task can then quickly move into a discussion about the primacy of syllables over words in listening and pronunciation.  This then leads into discussion/tasks on the pronunciation and identification of unstressed words and syllables.  This can (and should) continue until each of the learning objectives has been addressed.

These tasks are good at focusing learners on modification and the role that student variables play in instructional design. However, this is largely an empty academic task.  This year, I have the good fortune to be able to offer a little more authenticity.  Learners in the Teaching Listening class will be developing lessons for actual listening classes offered by the university and taught by me.  This is the first semester that the course has been offered and I was asked to design and implement it.  I decided to eat my own dog food and attempt to apply the principles of a diagnostic listening approach to the course (I’ll write more about that experience later).  In addition, I realized that this could be a great opportunity for the English Education students to design instruction for real learners.

This semester, the Teaching Listening students will spend much of the second half of the semester developing instruction that I will implement in my classes.  The plan is to have small groups be responsible for developing lessons that address common listening problems as diagnosed by the listening class’s midterm exam.  The students will be given access to anonymized testing and assessment data, which will guide their lesson development.  Lessons (with all materials) will be submitted to me and if I think that they would benefit the listening class, I’ll teach those materials.  I’m even considering having the English Education students run the instruction, but I’m not so sure that I’ll do that.  Anyone want to convince me either way?

That’s about it for the overview.  See below for a bunch of materials related to the class.

PowerPoints that I use in the course. The chapter presentations contain some information and resources not in the book.

Other course materials:
Krashen’s Natural Order Hypothesis – Order of Acquisition

As noted above, the order of acquisition for a second language is not the same as the order of acquisition for a first language, but there are some similarities. Table 2.1, from Krashen (1977), presents an average order.

TABLE 2.1. “Average” order of acquisition of grammatical morphemes for English as a second language (children and adults)

Notes:

1. This order is derived from an analysis of empirical studies of second language acquisition (Krashen, 1977). Most studies show significant correlations with the average order.

2. No claims are made about ordering relations for morphemes in the same box.

3. Many of the relationships posited here also hold for child first language acquisition, but some do
not: In general, the bound morphemes have the same relative order for first and second language
acquisition (ING, PLURAL, IR. PAST, REG. PAST, III SINGULAR, and POSSESSIVE) while
AUXILIARY and COPULA tend to be acquired relatively later in first language acquisition than
in second language acquisition.

Full of holes, but interesting.

The Chomsky School of Language


Noam Chomsky is a lot of things: cognitive scientist, philosopher, political activist and one of the fathers of modern linguistics, just to name a few. He has written more than 100 books and given lectures all over the world on topics ranging from syntax to failed states. In the infographic below, we take a look at some of his most well-known theories on language acquisition as if he were presenting them himself.


Via: Voxy Blog

This is a neat infographic. The original site has some lesson ideas for university classrooms. I so often forget about Chomsky, which is insane considering his influence in the field of linguistics. It’s good to have a reminder now and again.

MIT Scientist Captures 90,000 Hours of Video of His Son’s First Words, Graphs It | Fast Company


In a talk soon to grab several million views on TED.com, cognitive scientist Deb Roy Wednesday shared a remarkable experiment that hearkens back to an earlier era of science using brand-new technology. From the day he and his wife brought their son home five years ago, the family’s every movement and word was captured and tracked with a series of fisheye lenses in every room in their house. The purpose was to understand how we learn language, in context, through the words we hear.

This could be amazing. I’d love to see a write-up and the TED Talk. It’s not up yet 🙁

EDIT – The video was published (see below).  I’m not as excited about the talk as I thought I would be. Over half of it is essentially an advertisement for his new company focusing on social media analysis. However, I hope that he (or someone associated with the group) publishes findings on words, locations, interlocutors, and such.  Like many of the commenters are suggesting, this doesn’t seem to provide anything new theoretically; however, it can help to support (or weaken) existing theories, considering there has never been as complete (and unobtrusive) a collection of data of this kind.

 

Patricia Kuhl: The linguistic genius of babies

via ted.com

Great video and new data on phoneme recognition (distinction) in infants. This is not a new idea. It has been rather well known for years, but the new technology allows for better measurement of the phenomenon. In short, babies are excellent at recognizing and distinguishing sounds from any language, given exposure, up until around 6-8 months. This ability falls off later.

Given the brevity of the presentation, I can’t criticize her too much, but her description of the critical period and what it means to learn a language is certainly not complete. In fact, from what we see here, it is downright misinformed. Her comment that no scientist doubts that a critical period exists (as presented on the chart) is absolutely wrong. In reality, many do.

She is talking almost entirely about sound recognition and distinction, but she uses an SLA theory of language that involves so much more. It’s always difficult to mix in theories from different fields without operationalizing your terms. I’m going to guess that’s where the 10-minute time limit restricted her.

Myths about language learning – Nice slideshow summary from EFL Classroom 2.0


via eflclassroom.com
These are all great issues for teachers, administrators, and policymakers to consider. “Common sense” isn’t always the best approach in education.

Many of the questions are phrased in a way that could easily be either true or false, but you’ll get the idea. The justifications provided will help.

Pronouncing brotherhood (via @hanbae) – dialect problems cause adjustment issues for North Korean defectors

Check out this website I found at joongangdaily.joins.com

Thanks to @10_Magazine @holterbarbour @a_ahmad and @hanbae for this resource and their discussion of it on Twitter.

I’ve heard about this problem for a long time and it’s good to have some examples of the differences.

It’s common to hear Seoulites talk/complain about dialect differences, not just with North Koreans but with people from other provinces as well. I’ve long held that Koreans in general, and Seoulites in particular, have a very difficult time with language variance.

There are many reasons why this might be the case, if it is. One of my theories is that Koreans have not had to deal with foreigners learning and using their language in the same way that Americans, for example, have. This may be true for Americans in more isolated areas, but in large urban areas you are likely to hear/interact with non-native English speakers every day. This exposure has resulted in better coping mechanisms for language variation.

This is purely anecdotal, but a good deal of experience in both places leads me to believe this might be true. This is not to say that all Americans are better with language variation than Koreans, but I do suggest that this is likely a cognitive skill that is developed more in areas that see more variation.

Why proper English rules OK – writer comes off like a dou….bad guy ;-)

I first realised our advantage at a conference last year. The speakers came from across northern Europe, but they all gave their talks in English – or a sort of English. Germans, Belgians and French people would stand up and, in monotones and distracting accents, read out speeches that sounded as if they’d been turned into English by computers. Sometimes the organisers begged them to speak their own languages, but they refused. Meanwhile the conference interpreters sat idle in their booths.

Each new speaker lost the audience within a minute. Yet whenever a native English-speaker opened his mouth, the audience listened. The native speakers sounded conversational, and could make jokes, add nuance. They weren’t more intelligent than the foreigners, but they sounded it, and so they were heard. Here, in microcosm, was a nascent international hierarchy: native English-speakers rule.

via ft.com

While there may be a valid point in here somewhere, this writer comes off like a jerk. I understand his point that speakers at a conference can be less effective in a second language than in their first. However, maybe he should consider that he is the one with the problem. Why can’t he understand their English?

There are times when accents and grammar can be so divergent (of course this assumes an ideal, which really differs widely) that the language is incomprehensible. However, this wasn’t really the situation this author found himself in. He is commenting on their lack of “proper” English and inability to “connect” with the audience. Both of these are quite subjective on his part and signal his lack of ability to fill the gaps in communication as much as the presenters’ lack of communicative skills.
