AIOU Course Code 9055-1 Solved Assignment Autumn 2021

Course: Psycholinguistics (9055)

Level: BS (English)

Assignment 1

Question 1..

Psycholinguistics is interdisciplinary in nature and is studied in a variety of fields. Identify and explain different fields of studies associated with psycholinguistics.

Psycholinguistics is the study of the mental aspects of language and speech. It is primarily concerned with the ways in which language is represented and processed in the brain.

A branch of both linguistics and psychology, psycholinguistics is part of the field of cognitive science. Adjective: psycholinguistic.

The term psycholinguistics was introduced by American psychologist Jacob Robert Kantor in his 1936 book, “An Objective Psychology of Grammar.” The term was popularized by one of Kantor’s students, Nicholas Henry Pronko, in a 1946 article “Language and Psycholinguistics: A Review.” The emergence of psycholinguistics as an academic discipline is generally linked to an influential seminar at Cornell University in 1951.

Pronunciation: si-ko-lin-GWIS-tiks

Also known as: Psychology of language

Etymology: From the Greek, “mind” + the Latin, “tongue”

On Psycholinguistics

“Psycholinguistics is the study of the mental mechanisms that make it possible for people to use language. It is a scientific discipline whose goal is a coherent theory of the way in which language is produced and understood,” says Alan Garnham in his book, “Psycholinguistics: Central Topics.”

According to David Carroll in "Psychology of Language," "At its heart, psycholinguistic work consists of two questions. One is, What knowledge of language is needed for us to use language? In a sense, we must know a language to use it, but we are not always fully aware of this knowledge…. The other primary psycholinguistic question is, What cognitive processes are involved in the ordinary use of language? By 'ordinary use of language,' I mean such things as understanding a lecture, reading a book, writing a letter, and holding a conversation.

By ‘cognitive processes,’ I mean processes such as perception, memory, and thinking. Although we do few things as often or as easily as speaking and listening, we will find that considerable cognitive processing is going on during those activities.”

How Language Is Done

In the book, “Contemporary Linguistics,” linguistics expert William O’Grady explains, “Psycholinguists study how word meaning, sentence meaning, and discourse meaning are computed and represented in the mind. They study how complex words and sentences are composed in speech and how they are broken down into their constituents in the acts of listening and reading. In short, psycholinguists seek to understand how language is done… In general, psycholinguistic studies have revealed that many of the concepts employed in the analysis of sound structure, word structure, and sentence structure also play a role in language processing. However, an account of language processing also requires that we understand how these linguistic concepts interact with other aspects of human processing to enable language production and comprehension.”

An Interdisciplinary Field

“Psycholinguistics… draws on ideas and knowledge from a number of associated areas, such as phonetics, semantics, and pure linguistics. There is a constant exchange of information between psycholinguists and those working in neurolinguistics, who study how language is represented in the brain. There are also close links with studies in artificial intelligence. Indeed, much of the early interest in language processing derived from the AI goals of designing computer programs that can turn speech into writing and programs that can recognize the human voice,” says John Field in “Psycholinguistics: A Resource Book for Students.”

On Psycholinguistics and Neuroimaging

According to Friedmann Pulvermüller in "Word Processing in the Brain as Revealed by Neurophysiological Imaging," "Psycholinguistics has classically focused on button press tasks and reaction time experiments from which cognitive processes are being inferred. The advent of neuroimaging opened new research perspectives for the psycholinguist as it became possible to look at the neuronal mass activity that underlies language processing. Studies of brain correlates of psycholinguistic processes can complement behavioral results, and in some cases…can lead to direct information about the basis of psycholinguistic processes."

Learning Disabilities

Psycholinguistic Training

Psycholinguistic training is an approach to training people in processes in which they are believed to be deficient. It was at one time a major intervention approach for students with learning disabilities. Training approaches were often based on results from a very popular test called the Illinois Test of Psycholinguistic Abilities (ITPA). This test measured integrative, receptive, and expressive linguistic abilities by presenting test subjects with information through visual and auditory channels. The assumption made in this treatment approach, as in others, was that discrete psycholinguistic abilities can be measured directly and then remediated.

This assumption was the subject of intense research scrutiny throughout the 1970s and 1980s. Some researchers presented data indicating that psycholinguistic training was generally not effective, while others presented data refuting these claims and arguing that psycholinguistic training provided discrete benefits. The claims and counterclaims can become a bit confusing. However, a review of the existing literature suggests that at least some of the areas measured by psycholinguistic assessments can be enhanced by psycholinguistic training. Particularly in the "expressive" areas of manual expression and verbal expression, evidence appears to confirm that these areas can be improved moderately through direct training. In the other ten subareas measured by the ITPA, it appears that the effects of training are more modest.

Despite these moderately positive findings, a number of researchers have questioned the practical utility of gains of the magnitude reported. That is, gains in specific psycholinguistic variables may be achievable, but do these gains translate into important gains in functioning in other areas of a person's life, such as reading or language use? In the absence of such evidence, we must question whether these psycholinguistic interventions should be the highest priority for persons with learning disabilities.

Computational psycholinguistics

Computational psycholinguistics is a subdiscipline of psycholinguistics, the scientific discipline that studies how people acquire a language and how they comprehend and produce that language. The increasing complexity of the models of human language processing that have evolved in this discipline makes the development and evaluation of computer-implemented versions of these models more and more important for understanding the models and deriving predictions from them. Computational psycholinguistics is thus the branch of psycholinguistics that develops and uses computational models of language processing to evaluate existing models with respect to consistency and adequacy, as well as to generate new hypotheses. Based on a characterization of the different tasks in human language processing, different computational models of these tasks can be built; their architectural basis, their processing strategies, and the predictions the programs make show the merits of computer modeling in psycholinguistics.
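
As a hedged illustration of the kind of prediction a computer-implemented model can generate, the sketch below uses an invented word list, invented frequency counts, and invented timing parameters (none of it taken from any published model) to derive the familiar prediction that higher-frequency words should be recognized faster in a toy activation account of lexical access.

```python
# Toy computational model of lexical access (illustrative sketch only).
# Assumption: recognition time falls as a word's resting activation
# (here, log frequency) rises -- a standard qualitative prediction that
# real psycholinguistic models implement in far more detail.

import math

# Hypothetical frequency counts per million words (invented numbers).
lexicon = {"the": 50000, "house": 500, "psycholinguistics": 2}

def predicted_recognition_time(word, base_ms=800, gain_ms=60):
    """Return a simulated recognition latency in milliseconds."""
    resting_activation = math.log(lexicon[word] + 1)
    return base_ms - gain_ms * resting_activation

for w in sorted(lexicon, key=lexicon.get, reverse=True):
    print(f"{w:20s} -> {predicted_recognition_time(w):6.1f} ms")
```

Even a toy like this shows why implemented models matter: the prediction follows mechanically from the stated assumptions and can then be checked against behavioral data.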

Formulaic Language Exists

Processing Shortcuts

Psycholinguistic evidence strongly suggests that we operate under tight processing restrictions when handling linguistic material. If this is so, then our tendency to use formulaic language might be the result of expediency – that is, it makes processing shortcuts possible. Conceptualizing a ‘processing shortcut’ is highly contingent on how one models psycholinguistic knowledge. However, most psycholinguists would agree that the pressure point relates to assembly rather than storage. In other words, the brain seems very able to accommodate as many lexical units as we want to store, but we are very easily thrown off course when formulating an utterance, if we try to do too many things at once.

Sources

Carroll, David. Psychology of Language. 5th ed., Thomson, 2008.

Field, John. Psycholinguistics: A Resource Book for Students. Routledge, 2003.

Garnham, Alan. Psycholinguistics: Central Topics. Methuen, 1985.

Kantor, Jacob Robert. An Objective Psychology of Grammar. Indiana University, 1936.

O’Grady, William, et al., Contemporary Linguistics: An Introduction. 4th ed., Bedford/St. Martin’s, 2001.

Pronko, Nicholas Henry. “Language and Psycholinguistics: A Review.” Psychological Bulletin, vol. 43, May 1946, pp. 189-239.

Pulvermüller, Friedmann. “Word Processing in the Brain as Revealed by Neurophysiological Imaging.” The Oxford Handbook of Psycholinguistics. Edited by M. Gareth Gaskell. Oxford University Press, 2007.

Question 2..

What is infant directed speech? What are the characteristics of infant directed speech?

Babies learn to communicate through eye contact, gestures, and affectionate touch. But when it comes to grabbing a baby’s attention — and helping a baby “crack the code” of spoken language — one particular mode of communication may be especially effective.

How do babies learn language? You might argue that they simply have a knack for it. After all, babies perform some truly amazing feats.

They listen to a sea of confusing sound, and figure out that certain segments of sound are words.

They teach themselves to reproduce the speech sounds they hear — by listening, babbling, making corrections, and babbling again.

They infer the meanings of words by interacting with conversation partners and observing contingencies. If I say “wa-wa,” she hands me my drinking cup. Hmmm.

However you look at it, it’s impressive. Without textbooks or dictionaries or explicit instruction, babies acquire language.

But that doesn’t mean that babies work everything out on their own, without any help.

If you’ve ever struggled to understand a new language, you know that not every speaker is equally easy to understand. Some folks, noticing your difficulties, alter their normal speech patterns to make their meanings more obvious. Does the same thing happen for infants?

Enter Exhibit A, the phenomenon that researchers call “infant-directed speech.”

Also called “parentese,” or “motherese,” it’s a form of communication that people seem to adopt naturally when they interact with a baby.

Suddenly their vocal pitch goes up. They speak more musically, using a wider pitch range (i.e., more distance between the highest- and lowest-pitched sounds). They change the tonal color, or timbre, of their voices (Piazza et al 2017), and exaggerate their emotional tone (Kemler-Nelson et al 1989; Saint-Georges et al 2003).

They speak more slowly, and use shorter, simpler sentence structure. They tend to repeat themselves a lot, and give certain words emphasis by uttering them in isolation. Instead of saying, "look at the teddy bear," they might call out, "bear!" (Cristia and Seidl 2013; Fernald 2000).

They may also exaggerate the articulation of certain vowel sounds and position target words at the end of a sentence. Look at the BALL! (Swanson and Leonard 1994).

Not everyone does it, but it’s remarkably common. Mothers do it. Fathers do it. Children do it. People lacking experience with babies do it (Broesch and Bryant 2017; Fernald et al 1989; Jacobson et al 1983).

And this distinctive style of baby communication has been documented in a wide range of languages, including languages indigenous to

Africa and the Middle East (Arabic and Xhosa, a Bantu click language),

The Americas (Comanche)

Australia (Warlpiri)

East Asia (Cantonese, Mandarin, Korean, Japanese, and Gilyak, a Siberian language)

South Asia (Bengali, Hindi, Marathi, and Sinhala, a Sri Lankan language)

Europe (English, French, German, Italian, Latvian, and Swedish)

Is infant-directed speech universal? It depends on how you define "universal." As we've already noted, infant-directed speech isn't practiced absolutely everywhere by everyone. Parents who are depressed or self-conscious aren't so good at infant-directed speech (e.g., Kaplan et al 2007). And some parents may be discouraged by cultural attitudes.

For instance, anthropologists have reported that the Kaluli of New Guinea don't engage their babies in conversation (Schieffelin and Ochs 1996). It's also been reported that the Quiché Mayan speak to their babies in the same pitch that they use to address adults (Ratner and Pye 1984).

But these cases are exceptions to the rule. Yes, infant-directed speech is subject to individual differences and cultural influences. But you can say the same thing about most human behavior—including other parenting practices, like breastfeeding.

One hypothesis is that infant-directed speech evolved to facilitate baby communication. It's a tutorial style of speech, one designed to help babies develop social skills, forge stronger emotional attachments, and learn language.

It’s an intriguing view, especially if you consider these findings.

  1. Babies — even newborn babies — prefer communication partners who use infant-directed speech

In a classic experiment, researchers Robin Cooper and Richard Aslin presented 2-day old infants with audio recordings of adult speech.

The babies could control how long each playback lasted by turning their heads toward a loudspeaker.

In some trials, babies heard infant-directed speech. In other trials, they heard adult-directed speech.

Cooper and Aslin found that the newborns turned their heads longer in response to infant-directed speech (Cooper and Aslin 1990).

Similar experiments have been performed on older babies, with the same results. In one study, five-month-old babies showed a preference for strangers who addressed them with infant-directed speech, even after the talking had ended (Schachner and Hannon 2011). But the behavior of newborns seems especially compelling. It suggests that babies are born with an unlearned preference for infant-directed speech.

  2. Infant-directed speech is an attention-grabber

Experimental research has shown that babies’ brains pay more attention to infant-directed speech.

In one study of 3-month-old infants, researchers played back recordings of adult voices to sleeping babies. In some trials, babies heard infant-directed speech. In other trials, they heard adult-directed speech. When sleeping babies listened to the baby talk, they experienced an increased blood flow to the frontal area of their brains (Saito et al 2006).

Similarly, cognitive neuroscientists have measured event-related potentials, or ERPs, in 6- and 13-month-old babies as they listened to both infant-directed and adult-directed speech. The babies' brains showed more electrical activity when they listened to baby talk (Zangl and Mills 2007).

Question 3..

What is language acquisition? Explain the following theories/models of language acquisition.

Language acquisition is the process by which humans acquire the capacity to perceive, produce, and use words to understand and communicate. It involves picking up diverse capacities, including syntax, phonetics, and an extensive vocabulary. However, learning a first language is something that every normal child does successfully without much need for formal lessons. Language development is a complex and uniquely human achievement, yet children seem to acquire language at a very rapid rate, with most children's speech being relatively grammatical by age three (Crain & Lillo-Martin, 1999).[1] Grammar, which is a set of mental rules that characterizes all of the sentences of a language, must be mastered in order to learn a language. Most children in a linguistic community seem to succeed in converging on a grammatical system equivalent to everyone else's in the community with few wrong turns, which is quite remarkable considering the pitfalls and complexity of the system.

By the time a child utters a first word, according to the Linguistic Society of America, he or she has already spent many months playing around with the sounds and intonations of language,[2] but there is still no one point at which all children learn to talk. Children acquire language in stages, and different children reach various stages at different times. They do have one thing in common, however: typically developing children learning the same language follow an almost identical sequence of stages. The stages usually consist of:

Cooing – 6 months: uses phonemes from every language

Babbling – 9 months: selectively uses phonemes from the native language

One-word utterances – 12 months: starts using single words

Telegraphic speech – 2 years: multi-word utterances that lack function words

Normal speech – 5 years: almost fully developed speech

Language acquisition is a complex and unique human quality for which there is still no theory able to completely explain how language is attained. However, most of the concepts and theories we do have explaining how native languages are acquired go back to the approaches put forward by researchers such as Skinner, Chomsky, Piaget and others. Most of the modern theories we have today have incorporated aspects of these theories into their various findings.

Behaviourism

The behaviourist psychologists developed their theories while carrying out a series of experiments on animals. They observed that rats or birds, for example, could be taught to perform various tasks by encouraging habit-forming. Researchers rewarded desirable behaviour. This was known as positive reinforcement. Undesirable behaviour was punished or simply not rewarded – negative reinforcement.

The behaviourist B. F. Skinner then proposed this theory as an explanation for language acquisition in humans. In Verbal Behaviour (1957), he stated:

"The basic processes and relations which give verbal behaviour its special characteristics are now fairly well understood. Much of the experimental work responsible for this advance has been carried out on other species, but the results have proved to be surprisingly free of species restrictions. Recent work has shown that the methods can be extended to human behaviour without serious modifications." (cited in Lowe and Graham, 1998, p. 68)

Skinner suggested that a child imitates the language of its parents or carers. Successful attempts are rewarded because an adult who recognises a word spoken by a child will praise the child and/or give it what it is asking for. Successful utterances are therefore reinforced while unsuccessful ones are forgotten.

Limitations of Behaviourism

While there must be some truth in Skinner’s explanation, there are many objections to it.

Language is based on a set of structures or rules, which could not be worked out simply by imitating individual utterances. The mistakes made by children reveal that they are not simply imitating but actively working out and applying rules. For example, a child who says "drinked" instead of "drank" is not copying an adult but rather over-applying a rule. The child has discovered that past tense verbs are formed by adding a /d/ or /t/ sound to the base form. The "mistakes" occur because there are irregular verbs which do not behave in this way. Such forms are often referred to as intelligent mistakes or virtuous errors.

The vast majority of children go through the same stages of language acquisition. There appears to be a definite sequence of steps. We refer to developmental milestones. Apart from certain extreme cases (see the case of Genie), the sequence seems to be largely unaffected by the treatment the child receives or the type of society in which s/he grows up.

Children are often unable to repeat what an adult says, especially if the adult utterance contains a structure the child has not yet started to use. The classic demonstration comes from the American psycholinguist David McNeill. The structure in question here involves negating verbs:

Child: Nobody don’t like me

Mother: No, say, “Nobody likes me.”

Child: Nobody don’t like me. (Eight repetitions of this dialogue)

Mother: No, now listen carefully: say, "Nobody likes me."

Child: Oh! Nobody don't likes me. (McNeill in The Genesis of Language, 1966)

Few children receive much explicit grammatical correction. Parents are more interested in politeness and truthfulness. According to Brown, Cazden and Bellugi (1969): "It seems to be truth value rather than well-formed syntax that chiefly governs explicit verbal reinforcement by parents – which renders mildly paradoxical the fact that the usual product of such a training schedule is an adult whose speech is highly grammatical but not notably truthful." (cited in Lowe and Graham, 1998)

There is evidence for a critical period for language acquisition. Children who have not acquired language by the age of about seven will never entirely catch up. The most famous example is that of Genie, discovered in 1970 at the age of 13. She had been severely neglected, brought up in isolation and deprived of normal human contact. Of course, she was disturbed and underdeveloped in many ways. During subsequent attempts at rehabilitation, her carers tried to teach her to speak. Despite some success, mainly in learning vocabulary, she never became a fluent speaker, failing to acquire the grammatical competence of the average five-year-old.

Innateness

Noam Chomsky published a criticism of the behaviourist theory in 1959. In addition to some of the arguments listed above, he focused particularly on the impoverished language input children receive.

Adults do not typically speak in grammatically complete sentences. In addition, what the child hears is only a small sample of language.

Chomsky concluded that children must have an inborn faculty for language acquisition. According to this theory, the process is biologically determined – the human species has evolved a brain whose neural circuits contain linguistic information at birth. The child's natural predisposition to learn language is triggered by hearing speech, and the child's brain is able to interpret what s/he hears according to the underlying principles or structures it already contains. This natural faculty has become known as the Language Acquisition Device (LAD). Chomsky did not suggest that an English child is born knowing anything specific about English, of course. He stated that all human languages share common principles. (For example, they all have words for things and actions – nouns and verbs.) It is the child's task to establish how the specific language s/he hears expresses these underlying principles.

For example, the LAD already contains the concept of verb tense. By listening to such forms as "worked", "played" and "patted", the child will form the hypothesis that past tense verbs are formed by adding the sound /d/, /t/ or /id/ to the base form. This, in turn, will lead to the "virtuous errors" mentioned above. It hardly needs saying that the process is unconscious. Chomsky does not envisage the small child lying in its cot working out grammatical rules consciously!
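
As a hedged sketch of the regularity the child is said to induce (the spelling-based sound classes below are a simplification standing in for real phonological features), over-applying the same rule to an irregular verb such as "drink" produces exactly the kind of virtuous error described above.

```python
# Simplified sketch of the regular English past-tense rule a child
# might induce: add /id/ after t or d, /t/ after other voiceless
# sounds, /d/ elsewhere. Spelling stands in for phonemes here.

VOICELESS = set("ptkfsh")  # crude stand-in for voiceless final sounds

def regular_past(verb):
    last = verb[-1]
    if last in "td":
        return verb + "-id"   # e.g. "pat" -> "pat-id" (patted)
    if last in VOICELESS:
        return verb + "-t"    # e.g. "work" -> "work-t" (worked)
    return verb + "-d"        # e.g. "play" -> "play-d" (played)

for v in ["work", "play", "pat", "drink"]:
    print(v, "->", regular_past(v))
# "drink" -> "drink-t": the over-regularized form spelled "drinked".
```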

Usage Based Theory

Usage-based linguistics is a linguistic approach within a broader functional/cognitive framework that has emerged since the late 1980s and that assumes a profound relation between linguistic structure and usage.[1] It challenges the dominant focus, in 20th-century linguistics (and in particular in formalism-generativism), on considering language as an isolated system removed from its use in human interaction and human cognition.[1] Rather, usage-based models posit that linguistic information is expressed via context-sensitive mental processing and mental representations, which have the cognitive ability to succinctly account for the complexity of actual language use at all levels (phonetics and phonology, morphology and syntax, pragmatics and semantics). Broadly speaking, a usage-based model of language accounts for language acquisition and processing, synchronic and diachronic patterns, and both low-level and high-level structure in language, by looking at actual language use.

 

The term usage-based was coined by Ronald Langacker in 1987.[2] Usage-based models of language have become a significant new trend in linguistics since the early 2000s.[1] Influential proponents of usage-based linguistics include Michael Tomasello, Joan Bybee and Morten Christiansen.

Together with related approaches, such as construction grammar, emergent grammar, and language as a complex adaptive system, usage-based linguistics belongs to the wider framework of evolutionary linguistics. It studies the lifespan of linguistic units (e.g. words, suffixes), arguing that they can survive language change through frequent usage or by participating in usage-based generalizations if their syntactic, semantic or pragmatic features overlap with other similar constructions.[3] There is disagreement about whether the approach is different from memetics or essentially the same.

Optimality Theory

In linguistics, Optimality Theory is the theory that surface forms of language reflect resolutions of conflicts between competing constraints (i.e., specific restrictions on the form[s] of a structure). Optimality Theory was introduced in the 1990s by linguists Alan Prince and Paul Smolensky (Optimality Theory: Constraint Interaction in Generative Grammar, 1993/2004). Though originally developed from generative phonology, the principles of Optimality Theory have also been applied in studies of syntax, morphology, pragmatics, language change, and other areas.

Optimality Theory relies on a conceptually simple but surprisingly rich notion of constraint interaction whereby the satisfaction of one constraint can be designated to take absolute priority over the satisfaction of another. The means that a grammar uses to resolve conflicts is to rank constraints in a strict domination hierarchy: each constraint has absolute priority over all the constraints lower in the hierarchy. As the authors put it, "[O]nce the notion of constraint-precedence is brought in from the periphery and foregrounded, it reveals itself to be of remarkably wide generality, the formal engine driving many grammatical interactions. It will follow that much that has been attributed to narrowly specific constructional rules or to highly particularized conditions is actually the responsibility of very general well-formedness constraints. In addition, a diversity of effects, previously understood in terms of the triggering or blocking of rules by constraints (or merely by special conditions), will be seen to emerge from constraint interaction." (Alan Prince and Paul Smolensky, Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell, 2004)
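
To make strict domination concrete, here is a minimal sketch; the constraints, candidate parses, and ranking below are stock textbook-style examples, not taken from Prince and Smolensky's own tableaux. Ranking constraints amounts to comparing candidates' violation counts lexicographically, so a single violation of a higher-ranked constraint outweighs any number of violations lower down.

```python
# Minimal Optimality Theory evaluator (illustrative sketch).
# Strict domination = lexicographic comparison of violation vectors:
# the candidate with the fewest violations of the highest-ranked
# constraint wins, regardless of lower-ranked constraints.

VOWELS = set("aeiou")

def onset(cand, inp):    # ONSET: syllables should begin with a consonant
    return sum(1 for s in cand.split(".") if s and s[0] in VOWELS)

def no_coda(cand, inp):  # NOCODA: syllables should not end in a consonant
    return sum(1 for s in cand.split(".") if s and s[-1] not in VOWELS)

def max_io(cand, inp):   # MAX: do not delete input segments
    return max(0, len(inp) - len(cand.replace(".", "")))

def dep(cand, inp):      # DEP: do not insert segments absent from the input
    return max(0, len(cand.replace(".", "")) - len(inp))

def optimal(inp, candidates, ranking):
    return min(candidates, key=lambda c: tuple(con(c, inp) for con in ranking))

# Toy candidate parses of a hypothetical input /apto/.
candidates = ["ap.to", "a.pa.to", "a.to"]
ranking = [max_io, onset, no_coda, dep]       # one possible ranking
print(optimal("apto", candidates, ranking))   # -> "a.pa.to" (epenthesis wins)
```

Re-ranking the same constraints (for example, putting dep above no_coda) changes which candidate is optimal, which is how Optimality Theory models cross-linguistic variation.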

Native language Magnet Model

The Native Language Magnet Theory (NLM) (Kuhl et al., 2008) holds that infants categorize sound patterns into a "sound map." Research has shown that children lose the ability to hear non-native sound contrasts at around one year of age, and NLM is one of the theories that explains what is happening. By 6 months, an English-learning infant has heard hundreds of thousands of examples of the /i/ as in "daddy" and "mommy," and NLM claims babies develop a sound map in their brains that helps them hear the /i/ sound clearly. Babies create perfect examples, or prototypes, of sounds with a target area around each sound. These prototypes "tune" the child's brain to the native language.

This shift from a language-general to a language-specific pattern of perception makes learning a second language more difficult. Once a sound category exists in memory, "it functions like a magnet for other sounds" (Kuhl, 2000, p. 11853). That is, the prototype attracts sounds that are similar so that they sound like the prototype itself. This is why Japanese listeners, who do not have the prototype of the vowel of "bit" mapped in memory, tend to hear it as the vowel in "beat", which they do have mapped. This neural commitment to a learned structure interferes with the processing of information, so "initial learning can alter future learning" (Kuhl, 2000, p. 11855).

Importantly, the sound map can be modified.
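
A minimal sketch of the "magnet" idea, assuming a toy sound map with made-up formant values rather than Kuhl's actual measurements: an incoming vowel is assimilated to whichever native-language prototype it falls closest to, so a listener whose map lacks the "bit" prototype hears a "bit"-like token as "beat".

```python
# Sketch of prototype-based vowel categorization in the spirit of NLM.
# Each native-language prototype "attracts" nearby sounds; a new token
# is heard as whichever prototype it falls closest to. The formant
# values (F1, F2 in Hz) are rough illustrative figures, not Kuhl's data.

import math

prototypes = {
    "/i/ as in 'beat'": (280, 2250),
    "/I/ as in 'bit'":  (400, 1900),
}

def categorize(token, sound_map):
    """Return the prototype nearest to an incoming (F1, F2) token."""
    return min(sound_map, key=lambda p: math.dist(token, sound_map[p]))

# A listener whose sound map lacks the /I/ prototype (like the Japanese
# listener in the example above) assimilates 'bit'-like tokens to /i/.
reduced_map = {"/i/ as in 'beat'": prototypes["/i/ as in 'beat'"]}

token = (390, 1950)                    # an English 'bit'-like vowel
print(categorize(token, prototypes))   # full English map -> /I/
print(categorize(token, reduced_map))  # reduced map      -> /i/
```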

 

References

Kuhl, P. K. (2000). A new view of language acquisition. Proceedings of the National Academy of Sciences, 97, 11850-11857. http://dx.doi.org/10.1073/pnas.97.22.11850

Kuhl, P. K., Conboy, B. T., Coffey-Corina, S., Padden, D., Rivera-Gaxiola, M., & Nelson, T. (2008). Phonetic learning as a pathway to language: New data and native language magnet theory expanded (NLM-e). Philosophical Transactions of the Royal Society B, 363, 979-1000.

Kuhl, P. K., Tsao, F.-M., Liu, H.-M., Zhang, Y., & de Boer, B. (2001). Language/Culture/Mind/Brain: Progress at the margins between disciplines. In A. Damasio et al. (Eds.), Unity of knowledge: The convergence of natural and human science (pp. 136-174). New York: The New York Academy of Sciences.

Question 4..

Each individual has his/her own learning weaknesses and strengths, resulting in personalized learning preferences, i.e., learning styles and strategies. Explain some of these learning styles and strategies.

 

One of the most accepted understandings of learning styles is that student learning styles fall into three categories: Visual Learners, Auditory Learners and Kinesthetic Learners. These learning styles are found within educational theorist Neil Fleming’s VARK model of Student Learning. VARK is an acronym that refers to the four types of learning styles: Visual, Auditory, Reading/Writing Preference, and Kinesthetic. (The VARK model is also referred to as the VAK model, eliminating Reading/Writing as a category of preferential learning.) The VARK model acknowledges that students have different approaches to how they process information, referred to as “preferred learning modes.” The main ideas of VARK are outlined in Learning Styles Again: VARKing up the right tree! (Fleming & Baume, 2006)

Students’ preferred learning modes have significant influence on their behavior and learning.

Students’ preferred learning modes should be matched with appropriate learning strategies.

Information that is accessed through students’ use of their modality preferences shows an increase in their levels of comprehension, motivation, and metacognition.

Identifying your students as visual, auditory, reading/writing, or kinesthetic learners, and aligning your overall curriculum with these learning styles, will prove beneficial for your entire classroom. Keep in mind that sometimes a combination of these sensory modalities may be the best option. Allowing students to access information in terms they are comfortable with will increase their academic confidence.

Visual

Visual learners prefer the use of images, maps, and graphic organizers to access and understand new information.

Auditory

Auditory learners best understand new content through listening and speaking in situations such as lectures and group discussions. Aural learners use repetition as a study technique and benefit from the use of mnemonic devices.

Read & Write

Students with a strong reading/writing preference learn best through words. These students may present themselves as copious note takers or avid readers, and are able to translate abstract concepts into words and essays.

 

Kinesthetic

Students who are kinesthetic learners best understand information through tactile representations of information. These students are hands-on learners and learn best through figuring things out by hand (i.e. understanding how a clock works by putting one together).

By understanding what kind of learner you and/or your students are, you can now gain a better perspective on how to implement these learning styles into your lesson plans and study techniques.

The term “learning styles” speaks to the understanding that every student learns differently. Technically, an individual’s learning style refers to the preferential way in which the student absorbs, processes, comprehends and retains information. For example, when learning how to build a clock, some students understand the process by following verbal instructions, while others have to physically manipulate the clock themselves. This notion of individualized learning styles has gained widespread recognition in education theory and classroom management strategy. Individual learning styles depend on cognitive, emotional and environmental factors, as well as one’s prior experience. In other words: everyone’s different. It is important for educators to understand the differences in their students’ learning styles, so that they can implement best practice strategies into their daily activities, curriculum and assessments. Many degree programs, specifically higher level ones like a doctorate of education, integrate different learning styles and educational obstacles directly into program curriculum.

Strategies

In an approach referred to as SWOT ("Study Without Tears"), Fleming provides advice on how students can use their learning modalities and skills to their advantage when studying for an upcoming test or assignment.

Visual SWOT Strategies

Utilize graphic organizers such as charts, graphs, and diagrams.

Redraw your pages from memory.

Replace important words with symbols or initials.

Highlight important key terms in corresponding colors.

Aural SWOT Strategies

Record your summarized notes and listen to them on tape.

Talk it out. Have a discussion with others to expand upon your understanding of a topic.

Reread your notes and/or assignment out loud.

Explain your notes to your peers/fellow “aural” learners.

Read/Write SWOT Strategies

Write, write and rewrite your words and notes.

Reword main ideas and principles to gain a deeper understanding.

Organize diagrams, charts, and graphic organizers into statements.

Kinesthetic SWOT Strategies

Use real life examples, applications and case studies in your summary to help with abstract concepts.

Redo lab experiments or projects.

Utilize pictures and photographs that illustrate your idea.

Question 5..

Define speech perception. Explain the following models of speech perception: Cohort Model, Exemplar Theory.

Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The TRACE model of speech perception was one of the first models developed for perceiving speech, and is one of the better-known models. The TRACE model is a framework whose primary function is to take all of the various sources of information found in speech and integrate them to identify single words. The TRACE model, proposed by McClelland and Elman (1986), is based on the principles of interactive activation[1]. All components of speech (features, phonemes, and words) have their own role in creating intelligible speech, and using TRACE to unite them leads to a complete stream of speech, instead of individual components. The TRACE model is broken into two distinct components: TRACE I deals mainly with short segments of real speech, whereas TRACE II deals with identification of phonemes and words in speech. The model as a whole consists of a very large number of units organized into three separate levels. Each level comprises a bank of detectors for distinguishing the components of that level.

TRACE Model of Human Speech Perception

Feature level – At this level, there are several banks of feature detectors. Each feature has its own place in the speech stream, and they are organized in successive order.

Phoneme level – At this level, there is a bank of detectors for each phoneme present in the speech sounds.

Word level – At this level there is a bank of detectors for each individual word that is spoken by the speaker.

The TRACE model works in two directions. TRACE allows for either words or phonemes to be derived from a spoken message. By segmenting the individual sounds, phonemes can be determined from spoken words. By combining the phonemes, words can be created and perceived by the listener.
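
The sketch below shows only the bottom-up half of this two-way flow, with invented units and connection weights; the real TRACE model also includes top-down feedback and within-level inhibition, so this is an illustration of the three-level organization rather than an implementation of the model.

```python
# Highly simplified, bottom-up-only sketch of TRACE's three levels.
# Real TRACE also has top-down feedback and within-level inhibition;
# the units and connection weights here are invented for illustration.

# Feature level: which acoustic features were detected (0..1 strength).
features = {"voiced": 0.9, "bilabial": 0.8, "vowel_low": 0.7}

# Phoneme level: hypothetical feature-to-phoneme weights.
phoneme_weights = {
    "b": {"voiced": 1.0, "bilabial": 1.0},
    "p": {"bilabial": 1.0},
    "a": {"voiced": 1.0, "vowel_low": 1.0},
}

# Word level: which phonemes each word contains.
word_phonemes = {"bat": ["b", "a"], "pat": ["p", "a"]}

def activate(inputs, weights):
    """Each unit's activation is the weighted sum of its inputs."""
    return {unit: sum(inputs.get(f, 0.0) * w for f, w in conns.items())
            for unit, conns in weights.items()}

phoneme_act = activate(features, phoneme_weights)
word_act = {w: sum(phoneme_act[p] for p in ps)
            for w, ps in word_phonemes.items()}

print(phoneme_act)   # "b" beats "p" because the voicing feature is present
print(word_act)      # so "bat" ends up more active than "pat"
```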

Motor Theory Model

This model was developed in 1967 by Liberman and colleagues. The basic principle of this model lies with the production of speech sounds in the speaker’s vocal tract. The Motor Theory proposes that a listener specifically perceives a speaker’s phonetic gestures while they are speaking. A phonetic gesture, for this model, is a representation of the speaker’s vocal tract constriction while producing a speech sound[2]. Each phonetic gesture is produced uniquely in the vocal tract. The different places of producing gestures permit the speaker to produce salient phonemes for listeners to perceive. The Motor Theory model functions by using separate embedded models within the main model. It is the interaction of these models that makes Motor Theory possible.

Human Vocal Tract: Areas of constriction and relaxation within this tract create various vocal gestures

 

Trading Relations – This is the concept that not every phonetic gesture can be directly translated and defined into acoustic terms. This means that there must be another step for interpreting the vocal gestures. Some gestures can be cognitively switched with others to make interpretation simpler. If the produced gesture is similar enough to another gesture that already has a known articulatory cause, they can be switched. The perceived gesture can be traded with the known gesture and interpretation can be achieved.

Coarticulation – This is the idea that there is variability in the aspect of gesture production. This concept indicates that there are variations in the area of articulation of vocal gestures produced by speakers. The same gesture may be able to be produced in more than one place. The phonemes within the gestures are obtained and perceived by the ability to compensate for all the variations of speech possible due to coarticulation.

Cohort Model           

Proposed in the 1980s by Marslen-Wilson, the Cohort Model is a model of lexical retrieval. An individual's lexicon is his or her mental dictionary or vocabulary of all the words he or she is familiar with. According to one study, the average individual has a lexicon of about 45,000 to 60,000 words[5]. The premise of the Cohort Model is that a listener maps novel auditory information onto words that already exist in his or her lexicon in order to interpret the new word. Each part of an auditory utterance can be broken down into segments. The listener pays attention to the individual segments and maps these onto pre-existing words in the lexicon. As more and more segments of the utterance are perceived, the listener can eliminate words from the lexicon that do not follow the same pattern.

 

Example: Grape

  1. The listener hears the /gr/ sound and begins thinking about which words in his or her lexicon begin with the /gr/ sound, cancelling out all of the others.
  2. /gra/ – all words following this pattern remain under consideration, and all the rest are omitted.
  3. This pattern continues until the listener has run out of speech segments and is left with a single option: grape.

The ideas behind the Cohort Model have also been applied to technology to make internet searches more convenient and faster. Google uses a similar approach to help make searching faster and easier for internet users.
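
A minimal sketch of this narrowing process, assuming a toy lexicon and using spelling to stand in for phoneme segments: each incoming segment removes every candidate that no longer matches the growing prefix, which is the elimination step the grape example walks through.

```python
# Minimal sketch of cohort narrowing: as each segment of the input
# arrives, candidate words that no longer match the prefix drop out.
# The toy lexicon and segmentation are illustrative only.

lexicon = ["grape", "gray", "green", "grab", "great", "drape", "tape"]

def cohort_steps(segments):
    """Yield the shrinking cohort after each incoming segment."""
    prefix = ""
    for seg in segments:
        prefix += seg
        yield prefix, [w for w in lexicon if w.startswith(prefix)]

# Hearing "grape" segment by segment (spelling stands in for phonemes).
for prefix, cohort in cohort_steps(["gr", "a", "p", "e"]):
    print(f"after '{prefix}': {cohort}")
# The cohort narrows step by step until only 'grape' remains.
```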

Exemplar Theory      

The main premise of the Exemplar theory is very similar to the Cohort Model. Exemplar theory is based on the connection between memory and previous experience with words. The Exemplar theory aims to account for the way in which a listener can remember acoustic episodes. An acoustic episode is an experience with spoken words. Evidence has been produced demonstrating that details relating to specific audible episodes are remembered by listeners if the episodes are familiar to the listener[6]. It is believed that listeners may be better at recognizing previously heard words if they are repeated by the same speaker, using the same speaking rate, meaning that the episode is familiar. With the Exemplar theory, it is believed that every word leaves a unique imprint on the listener's memory, and that this imprint is what aids a listener in remembering words. When new words enter memory, the imprint of the new words is matched to previous imprints to determine any similarities[7]. The Exemplar Theory states that as more experience is gained with lexical improvements (new words being learned or heard), the stability of the memory increases. With this lexical plasticity, the Ganong Effect comes into play. The Ganong Effect states that real-word memory traces are perceived much more readily than nonsense-word memory traces[8].

Ganong Effect Example:

Soot, boot, and root will be easier to remember because they are similar to traces already in the memory of the listener.

Snoyb, bnoyb, and rnoyb, having no similar counterparts in the memory of the listener, will be difficult to remember.
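
A hedged sketch of exemplar storage and matching follows, assuming a deliberately crude two-number "acoustic" representation (duration and pitch) that merely stands in for the rich episodic detail the theory assumes.

```python
# Sketch of exemplar-based recognition: every heard token is stored as
# an "imprint" (word label plus some acoustic detail), and a new token
# is recognized by its similarity to stored exemplars. The two-number
# representation is purely illustrative.

import math

# Stored episodes: (word, speaker, duration_ms, pitch_hz)
memory = [
    ("boot", "speaker_A", 450, 120),
    ("boot", "speaker_B", 380, 210),
    ("soot", "speaker_A", 430, 118),
    ("root", "speaker_A", 470, 125),
]

def recognize(token, exemplars):
    """Label a new (duration, pitch) token by its nearest stored exemplar."""
    word, speaker, *trace = min(exemplars,
                                key=lambda e: math.dist(token, e[2:]))
    return word, speaker

# A new token close to speaker_A's earlier "boot" is matched easily,
# consistent with same-speaker repetitions being easier to recognize.
print(recognize((455, 121), memory))   # -> ('boot', 'speaker_A')
```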

Neurocomputational Model           

Kröger and colleagues (2009) worked on a speech perception model based on neurophysiological and neuropsychological facts about speech[9]. The model they developed simulates the neural pathways in the various areas of the brain that are involved when speech is being produced and perceived. Using this model, brain areas involved in speech knowledge are identified by training neural networks to detect speech in the cortical and sub-cortical regions of the brain. Through their research, Kröger and colleagues determined that the neurocomputational model has the capability of embedding in these brain areas important features of speech production and perception to achieve comprehension of speech.

Along with changing the way it was thought that the brain dealt with incoming information, the basic concept of the Dual Stream Model is that acoustic information must interface with conceptual and motor information for the entire message to be perceived. This combining of roles is what makes the Dual Stream Model unique and plausible as a model for speech perception.
