Psycholinguistics
Psycholinguistics is a branch of psychology concerned with how humans acquire language and with the cognitive mechanisms involved in processing linguistic information. To this end, it studies the psychological and neurological factors that enable humans to acquire, use, comprehend and produce language (and that underlie its loss or deterioration), as well as its cognitive and communicative functions.
Psycholinguistics grew out of the work of the French linguist Gustave Guillaume (1883-1960), which is why, at the beginning of the 20th century, it was also known as Guillaumism. Guillaume called his theory the psychosystem, and in it he linked linguistic elements with psychological ones.
Origin of the term
The term psycholinguistics was coined in 1936 by Jacob Robert Kantor in his book An Objective Psychology of Grammar and began to be used within his team at Indiana University. Its use became widespread thanks to the article "Language and psycholinguistics: a review", written in 1946 by his student Nicholas Pronko, in which it was used for the first time to refer to an interdisciplinary field of study "that could be coherent", and to the title of Psycholinguistics: A Survey of Theory and Research Problems, a book published in 1954 by Charles E. Osgood and Thomas A. Sebeok.
Categories of psycholinguistic processes
This discipline analyzes any process involved in human communication through language, whether oral, written or otherwise. Broadly speaking, the most studied psycholinguistic processes fall into two categories: encoding (language production) and decoding (language comprehension). Encoding covers the processes that make it possible to form grammatically correct sentences from vocabulary and grammatical structures.
Psycholinguistics also studies the factors that affect decoding, that is, the psychological structures that enable us to understand expressions, words, sentences, texts, and so on. Human communication can be seen as a continuous perception-comprehension-production cycle. The richness of language means that this sequence can unfold in various ways. Depending on whether the external stimulus is visual or auditory, the sensory stages of perception will differ. There is also variability in language production: we can speak, gesture or express ourselves in writing. Finally, access to meaning varies depending on whether the unit of information considered is a word, a sentence or a discourse.
Generativism and Functionalism
Other areas of psycholinguistics focus on issues such as the origin of language in humans. For example, psycholinguistics deals with questions such as how people learn a second language, as well as with the processes of language acquisition in childhood. According to Noam Chomsky, the leading exponent of the generativist school, humans possess an innate universal grammar (an abstract system that encompasses all human languages). The functionalists, who oppose this thesis, hold that language is learned only through social contact. However, it is well established that every human being who does not suffer from a condition that prevents it has the innate ability to learn languages, provided they are exposed to them for a sufficient period. This ability diminishes considerably after puberty. Hence a child can quickly learn any language, while an adult may need years to learn a second or third language. It also appears that the more languages one knows, the easier it becomes to learn another.
Jean Piaget
Although Jean Piaget (1896-1980) never dealt with writing as such, nor with the processes involved in learning the written language, his theory inspired a line of studies and research, especially on the processes that develop in language learning in general and in writing in particular, starting from the schemas and foundations of psychogenesis. The legacy Piaget left regarding the study of the child is an obligatory point of reference for any psychologist, educator or linguist interested in the development of knowledge in the field of writing, since to "understand a psychological process one must understand its genesis". Consequently, psycholinguistics arises when psychology attempts to analyze the functions of language, mainly the functioning of the word (Ferreiro, 1999).
In this way, the revolutionary changes mentioned above led to the official creation of psycholinguistics, understood as «the scientific discipline whose object of study is the acquisition and use of natural languages (comprehension and production of oral and written statements) from the perspective of the underlying mental processes».
Research techniques in psycholinguistics
The variety of existing theories, the high number of processes involved in language and the development of new technologies in research have led to a wide range of methodological techniques in psycholinguistics.
Offline methods
These are tasks based on memorizing texts or answering questions about them. Among them is rapid serial visual presentation (RSVP).
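As an illustration only, the presentation logic of an RSVP-style procedure can be sketched in plain Python; the word list, the presentation rate and the console output below are illustrative assumptions, not a description of any particular experimental software.

```python
import sys
import time

def rsvp(words, rate_ms=250):
    """Present each word one at a time at a fixed rate (rapid serial visual presentation)."""
    for word in words:
        # Overwrite the previous word so that only one word is visible at a time.
        sys.stdout.write("\r" + " " * 40 + "\r" + word)
        sys.stdout.flush()
        time.sleep(rate_ms / 1000.0)  # exposure duration of each word
    sys.stdout.write("\n")

rsvp(["the", "linguist", "read", "the", "sentence", "quickly"])
```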
Chronometric methods
Ongoing Techniques (Online)
These consist of recording reading times. The underlying assumption is that segments of the text that require more cognitive operations, or greater computational complexity, take more time to process, which results in longer reading times. Another technique of this type is eye-movement tracking, which consists of recording the movements of the reader's eyes while reading the successive sentences of a text.
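One common way of recording reading times is a self-paced reading procedure, a technique not named in the text above but widely used for this purpose. The following is a minimal sketch in plain Python, assuming a console setup in which the reader presses Enter to move from one segment to the next; the segments and the timing code are illustrative, not a specific experimental package.

```python
import time

def self_paced_reading(segments):
    """Show a text segment by segment and record how long the reader spends on each."""
    times = []
    for segment in segments:
        start = time.perf_counter()
        input(segment + "   [press Enter for the next segment]")
        # Longer times are taken to reflect costlier cognitive processing of the segment.
        times.append(time.perf_counter() - start)
    return times

print(self_paced_reading(["The horse", "raced past", "the barn", "fell."]))
```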
Decision Time Techniques
These techniques include the sentence verification technique, the lexical decision task, the naming technique, the detection task and priming techniques.
Lexical Decision Task (LDT)
This is the most widely used task in studies of visual lexical access and has proved very fruitful for exploring the mental processes underlying word reading. It was first used by Rubenstein in 1970. The subject must decide whether the presented stimulus is a word or a non-word; the non-word may be morphologically possible (that is, one that follows the Spanish word-formation rules but happens not to exist, such as planco) or impossible (bludrnt). Nowadays the use of computers has made this task much easier, since it automates the way in which the subject gives the decision (for example, by pressing one key rather than another).
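The logic of a computerized lexical decision trial can be sketched as follows in plain Python, assuming that "w" stands for "word" and "n" for "non-word"; the stimuli, the response keys and the timing code are illustrative assumptions.

```python
import random
import time

# Illustrative stimuli: (letter string, is it a real word?)
STIMULI = [("table", True), ("bread", True), ("planco", False), ("bludrnt", False)]

def lexical_decision(stimuli):
    """Run one block of lexical decision trials, recording response time and accuracy."""
    results = []
    trials = list(stimuli)
    random.shuffle(trials)
    for string, is_word in trials:
        start = time.perf_counter()
        answer = input(string + "   word (w) or non-word (n)? ").strip().lower()
        rt = time.perf_counter() - start
        correct = (answer == "w") == is_word   # the decision is given by pressing one key or another
        results.append((string, round(rt, 3), correct))
    return results

print(lexical_decision(STIMULI))
```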
Word naming task
A word is presented on the computer screen, and the subject must pronounce it aloud as quickly as possible.
Priming techniques
These consist of presenting two stimuli whose onsets are separated by a time interval. The first stimulus (the prime) acts as a context for the second (the target). The underlying theory holds that the first word can influence the comprehension of the second: for example, in a lexical decision task a prime can be presented to activate a semantic field in the subject's mental lexicon, followed by a non-word morphologically similar to a word belonging to that semantic field.
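A minimal sketch of a priming trial combined with lexical decision is given below: the prime is shown first and the target follows after a fixed interval between their onsets (often called the stimulus onset asynchrony). The prime-target pairs, the interval value and the console presentation are illustrative assumptions.

```python
import time

def priming_trial(prime, target, soa_ms=200):
    """Present a prime, wait until the onset interval has elapsed, then present the target."""
    print(prime)                              # first stimulus: the context (prime)
    time.sleep(soa_ms / 1000.0)               # interval between the onsets of prime and target
    start = time.perf_counter()
    answer = input(target + "   word (w) or non-word (n)? ").strip().lower()
    return time.perf_counter() - start, answer

# Related versus unrelated prime for the same target: a semantic priming contrast.
rt_related, _ = priming_trial("doctor", "nurse")
rt_unrelated, _ = priming_trial("carrot", "nurse")
print(rt_related, rt_unrelated)
```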
Observational and descriptive methods
These consist of recording the output of language production. Among them are the analysis of errors in spontaneous speech and the study of speech pauses.
Speech perception
Speech perception research attempts to describe the mechanisms by which the brain translates an acoustic signal that varies continuously along numerous parameters into a discrete and stable linguistic representation. At the theoretical level there are two ways of dealing with the lack of invariance and of segmentation in the speech signal: on the one hand, models that hold that the problem is solved at a prelexical level; on the other, models that hold that the signal is projected more or less directly onto the lexicon.
The prelexical hypothesis
Models based on the prelexical hypothesis assume that the perceptual system uses processing «windows» in which the information is stabilized. That is, prelexical processes transform the acoustic signal into a (prelexical) linguistic representation so that access to the lexicon proper can take place. This group includes models such as the motor theory of speech perception, the invariant features model and the TRACE connectionist model.
The motor theory of speech perception
This theory postulates that speech is not perceived directly from the acoustic signal but by reference to abstract articulatory gestures. It has been the basic point of reference for the vast majority of studies on speech perception.
The TRACE model
According to this model, a set of feature detectors would be responsible for directly identifying information in the speech signal that would correspond fairly closely to what linguists have described as distinctive features.
The Shortcut Hypothesis
Models based on this hypothesis hold that the speech signal is projected continuously onto the mental lexicon, so that speech perception is resolved within a single processing stage. These models usually solve the segmentation problem by assuming that the human perceptual system samples the signal every n milliseconds and that this information is what makes direct contact with the lexicon.
The Cohort Model
According to this model, designed by Marslen-Wilson, selection in the mental lexicon is primarily determined by word onset.
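A minimal sketch of this onset-driven selection is shown below: as the phonemes of a word arrive, the set of lexical candidates compatible with the onset shrinks until only one word remains. The toy lexicon and the use of letters in place of phonemes are simplifying, illustrative assumptions rather than part of Marslen-Wilson's formulation.

```python
# Toy lexicon; letters stand in for phonemes to keep the example simple.
LEXICON = ["cap", "captain", "capital", "capture", "cat", "candle"]

def cohort(word_onset):
    """Return the successive candidate sets as the onset of the word unfolds over time."""
    stages = []
    for i in range(1, len(word_onset) + 1):
        onset = word_onset[:i]
        candidates = [w for w in LEXICON if w.startswith(onset)]
        stages.append((onset, candidates))
        if len(candidates) <= 1:   # only one candidate left: the word can be selected
            break
    return stages

for onset, candidates in cohort("captain"):
    print(onset, candidates)
```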
The theory of direct perception
Proposed by Pisoni, this theory holds that speech perception is carried out not through language-specific mechanisms but through the general mechanisms of auditory perception, and that the mental lexicon forms part of the subject's general episodic memory.
Visual word recognition
Visual word recognition comprises the operations that process the form of the word rather than its meaning.
Theoretical models
Logogen model
The logogen model was developed by John Morton between the 1960s and 1970s and is, broadly speaking, an interactive and direct model. It posits a key role for context in word recognition, which implies faster processing than in earlier models, and allows for incomplete input information.
The basic unit of this model is the logogen: a mechanism that accumulates sensory information from both visual and auditory sources and, when that information is sufficient for a word to become available as a response, crosses the recognition threshold (the amount of evidence required for the response to become available). When the stimulus received is imperfect (for example, it is interfered with by an external sound, or the word has an illegible letter), the probability of a successful response is directly related to the frequency of the stimulus presented: the more frequent the stimulus, the lower its recognition threshold and, consequently, the faster its recognition.
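The threshold mechanism can be illustrated with a minimal sketch in which each logogen accumulates evidence from the input and more frequent words have lower thresholds. The toy lexicon, the frequency values and the scoring rule are illustrative assumptions, not Morton's formalization.

```python
# Toy lexicon with relative word frequencies (higher means more frequent).
LEXICON = {"house": 900, "horse": 300, "hoist": 20}

def evidence(stimulus, word):
    """Crude evidence score: the number of letter positions on which stimulus and word agree."""
    return sum(a == b for a, b in zip(stimulus, word))

def logogen_recognition(stimulus, base_threshold=5.0):
    """Return a word whose logogen has accumulated enough evidence to cross its threshold.

    The threshold is lowered for frequent words, so a degraded stimulus is more
    likely to be recognized as a high-frequency word.
    """
    best = None
    for word, freq in LEXICON.items():
        threshold = base_threshold - 2.0 * (freq / 1000.0)   # frequent words need less evidence
        activation = evidence(stimulus, word)
        if activation >= threshold and (best is None or activation > best[1]):
            best = (word, activation)
    return best[0] if best else None

# A degraded stimulus ("hou_e", one letter illegible) is still recognized as "house".
print(logogen_recognition("hou_e"))
```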
Morton distinguishes between the processing of an isolated word and that of a word in context: in the first case recognition depends only on sensory information, while in the second the context can compensate for missing or degraded information. The data coming from the context and the data coming from the stimulus act in a combined, direct and additive way.
Search model
This model is based on a librarian metaphor: words must be searched for in memory serially, as when we look up a word in a dictionary. It is an indirect, serial, autonomous model, made up of discrete stages through which information flows unidirectionally.
The best-known search model is Forster's (1976). It proposes two-stage access for word recognition, with a main file and peripheral access files. The former is a lexicon where all the information about words is stored; the latter are modules organized according to the type of sensory information received: there is an orthographic peripheral access file, a phonological one and a semantic-syntactic one. Within these peripheral files, words are grouped by frequency.
The connection between the main file and the peripheral access files is given by a pointer indicating the entry in the main file that corresponds to the properties of the stimulus. Forster also proposes a system of cross references, which makes it possible to move from the entry for one word to the entry for a semantically related one without going back through the main file.
The search then proceeds by successively comparing the stimulus, on the basis of one of its properties, with the internal list of the peripheral file. If the stimulus is a word, the search is self-terminating: it stops as soon as the match is found; if it is a non-word, the search is exhaustive.
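The serial search just described can be illustrated with a minimal sketch in which the entries of a peripheral file are ordered by frequency and compared with the stimulus one by one; the file contents and their ordering are illustrative assumptions.

```python
# Illustrative orthographic peripheral access file, ordered by descending frequency.
ORTHOGRAPHIC_FILE = ["the", "house", "table", "horse", "candle", "hoist"]

def serial_search(stimulus):
    """Compare the stimulus with each entry in turn, in frequency order.

    For a word the search is self-terminating (it stops at the first match);
    for a non-word every entry must be checked before a "no" response is possible,
    which is one way of accounting for the lexicality effect described below.
    """
    comparisons = 0
    for entry in ORTHOGRAPHIC_FILE:
        comparisons += 1
        if entry == stimulus:
            return True, comparisons    # match found: stop searching
    return False, comparisons           # exhaustive search: no entry matched

print(serial_search("horse"))    # (True, 4)  - stops at the match
print(serial_search("planco"))   # (False, 6) - had to check every entry
```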
The other important search model is Butterworth's, which proposes two-stage access for production: in the first stage the semantic lexicon is accessed, and in the second the phonological lexicon, where the phoneme strings known to the speaker are assembled.
Connectionist models
They are the most influential models today.
Other models
There are other models, such as the activation-verification model, a hybrid that combines serial and interactive assumptions; the cohort model (an adaptation of the model of the same name from the auditory modality); and the bimodal interactive activation model.
Lexical effects found
Lexical frequency effect
Words that appear more often in texts, and with which we therefore have more experience, are recognized faster than low-frequency words.
Lexicality effect
It takes longer to reject a letter string that is not a word than to respond "yes" to a word.
Length effect
Longer words are read more slowly.
Priming effect
The recognition time for a word changes when it is preceded by a context or prime (a word, phrase or text) that bears some kind of relationship to it.
Other language sciences
- Linguistics
- Neuropsychology
- Cognitive science