Bimodal Bilingualism And The Frequency-Lag Hypothesis Statement


Most of us have experienced difficulties at one level or another when speaking in a second language. Accordingly, it has been widely documented across several measures that bilinguals are less efficient when speaking in their L2 than in their L1. Compared to L1 speech, bilinguals speaking in their L2 are slower and less accurate in retrieving object names, it takes them longer to articulate complete words and phrases, and they often speak with a more or less perceptible foreign accent (e.g., Kohnert et al., 1998; Gollan and Silverberg, 2001; Roberts et al., 2002; Gollan et al., 2007, 2008; Ivanova and Costa, 2008). Much research has been devoted to assessing the presence of these phenomena in a broad range of tasks using both behavioral and neuroscientific measures, leading to the development of more or less detailed theoretical accounts of the origin of the L2 speech production disadvantage. The aim of the present article is to examine these proposals in the light of the available evidence, to see whether it is possible to establish which mechanism is mainly responsible for the L2 disadvantage and at what point in time it starts to have an impact. It should be made clear that the focus of the current review is on what factors differentiate L2 speech from L1 speech independently of other variables. Therefore, we will start from the assumption that at least part of the bilingual speech production disadvantages across speakers (e.g., low and high proficient, early and late) and phenomena (e.g., slower naming speed, decreased naming accuracy, decreased verbal fluency, etc.) have a common source, which also seems to be an implicit assumption of the theoretical accounts that we will contrast in the present review. As a consequence of this approach, we will also evaluate the theoretical proposals on their ability to account for the totality of L2 disadvantages observed in speech production.
However, it should be noted that such an approach does not preclude differences across speakers or measures, nor does it imply that such differences cannot be important from other points of view.

Current Accounts of L2 Speech Production Disadvantages

To date, three different theoretical accounts have been put forward to explain L2 speech production difficulties. The first explanation relies on the general principles of frequency effects in speech production and assumes that the L2 disadvantage is a frequency effect in disguise (e.g., Gollan et al., 2008). In a seminal paper by Oldfield and Wingfield (1965), it was shown that the speed of lexical retrieval is negatively correlated with word frequency, with comparatively longer naming times for low-frequency words. Gollan et al. (2008) argue that since most bilinguals use their non-dominant language less frequently than their dominant language, the links between representations of the semantic and the lexical system in L2 are weaker than in L1 (hence the use of “weaker links” to refer to this account), making lexical representations in L2 less accessible than those of L1. Note that, while this explanation is obvious for low-proficient bilinguals, it can also account for disadvantages in highly proficient bilinguals who use both their languages on a daily basis (e.g., Spanish–Catalan bilinguals in Catalonia or English–Spanish bilinguals in the USA). That is, even though in these communities both languages are often used, there nevertheless remain some proportional differences in the amount that each language is spoken. The weaker links account assumes that even these subtle differences in frequency of use are able to create an imbalance between the dominant and non-dominant language, reflected in a processing disadvantage for the latter.

The second account is what we will refer to as the executive control account. According to such a view, the L2 disadvantage is the consequence of applying language control during speech production (e.g., Abutalebi and Green, 2007; Bialystok et al., 2008). The rationale behind this assumption is that since words from both languages of a bilingual become activated during language processing (e.g., Colomé, 2001; Thierry and Wu, 2007; Wu and Thierry, 2011), a powerful control mechanism is necessary in order to select the correct word for production while preventing interference from the non-target language (e.g., Green, 1998; Costa et al., 1999), and additional executive control resources are assumed to slow down language production. Importantly, since speaking in a weak language should require more control compared to a strong language (that is, the stronger language should always become more active), the disadvantages caused by these control mechanisms are expected to be greater in L2 compared to L1.

A final and rather specific account, which we will descriptively label the post-lexical account, attributes any differences in the speed of naming between L1 and L2 to stages posterior to lexical access, such as syllabification (e.g., Indefrey, 2006; Hanulová et al., 2011). Indeed, phenomena such as “foreign accent” (e.g., Flege and Eefting, 1987; Flege et al., 2003) provide good reasons to assume difficulties in phonological and phonetic encoding, syllabification, and even motor planning and articulation. Hanulová et al. (2011) highlight several reasons why this may be the case: (a) phonological encoding might be more effortful in a second language if the phonotactic constraints on syllable structure of L1 are carried over to L2 production (e.g., syllable structures that are phonotactically illegal in L1 might not be so in L2); (b) syllables that do not overlap across languages might be constructed on-line when speaking in L2, while in L1 syllables might be stored in a mental syllabary and thus be easily available to the speaker; (c) compatible with the two other accounts, though restricted to post-lexical processes, the mechanisms responsible for the more effortful L2 processing may be explained in terms of frequency (e.g., syllable frequency, motor-program frequency, etc.) and/or the need to apply language control to avoid interference from the non-target language.

It is important to note that although these three accounts differ considerably in the loci and/or sources they attribute to the L2 disadvantage, they are not mutually exclusive. It is evident that few bilinguals use their two languages equally often, that their two languages frequently have different post-lexical properties, and that they need to control which language to speak in. Thus, most probably all three explanations introduced above are involved (at least to some extent) in producing the L2 speech disadvantage. Nevertheless, an important endeavor is to determine whether any of the three has a more prominent role in doing so, and which processing stage or stages are affected by the bilingual disadvantage. Both the weaker links and the executive control accounts could be implemented at any or all stages of processing, since the mechanisms they rely on are not necessarily bound to a particular process. Conversely, while the post-lexical account offers a unique locus where L2 is slowed down, the mechanism responsible is not specified. A better understanding of the extent of the bilingual disadvantage within the system, as well as of its source, will surely help to develop more accurate models of bilingual speech production.

In what follows we will selectively and critically review the available hemodynamic, behavioral, and electrophysiological evidence with the goal of better characterizing the bilingual disadvantages both in terms of predominantly responsible mechanisms as well as the stages where these have their impact.

What Can Hemodynamic Studies Tell us about the Origin of the Bilingual Disadvantage?

Although many hemodynamic studies have reported activation differences between L1 and L2 production, very few of them agree on the cortical regions in which the differential activation is observed. Moreover, whether any differences are observed at all seems to depend on the type of L2 speakers that are tested (i.e., high or low proficient, early or late bilinguals). Currently the picture emerging from the neuroimaging literature is that (a) L2 speech production engages the same brain areas as L1 speech; and (b) the left inferior frontal gyrus (LIFG) is the only region showing reliably stronger activity in L2 compared to L1 speech across all studies, but only for bilingual speakers with either low proficiency, little exposure, or late acquisition of their L2 (e.g., Indefrey, 2006).

Several interpretations have been proposed for this stronger involvement of frontal areas during L2 speech. In support of the executive control account, Abutalebi and Green (2007) argued that, since the same neural structures are used for processing both L1 and L2, areas associated with executive control (which is thought to involve the LIFG) have to be recruited more extensively in L2 to prevent interference from L1. As speakers become more proficient, the engagement of this executive control network would become more or less equal between a bilingual’s two languages; hence only low proficient speakers should display increased hemodynamic brain activity in areas such as the LIFG. On the other hand, in support of the post-lexical account, Indefrey and collaborators interpreted the enhanced LIFG activation for low proficient L2 speakers in terms of non-lexical compositional processes such as syllabification (e.g., Indefrey, 2006; Hanulová et al., 2011). Indefrey and colleagues hypothesized that the LIFG might be particularly tailored for native language speech with its specific post-lexical rules, thus being less efficient for L2 and consequently the prime suspect in causing delays. To support their interpretation the authors refer to the meta-analysis conducted by Indefrey and Levelt (2004), in which the LIFG was found to be the only reliably active area across all overt and covert speech production studies. Since syllabification, but not articulation, is necessary in both overt and covert production, it was argued that the reliable activation of the LIFG in all production tasks could be indicative of syllabification processes (see also Indefrey and Levelt, 2000).

Nevertheless, one must be cautious when assigning such uniform functionality to the LIFG, as both of the accounts just discussed do, given that the LIFG seems to be a multi-functional region. For one, the LIFG appears to be involved in other operations such as syntax, the binding of linguistic information, and the selection of competing words (e.g., Thompson-Schill et al., 1997; Friederici, 2002; Hagoort, 2005; Schnur et al., 2009). Moreover, and crucially here, the LIFG also displays different activation patterns as a function of word frequency, indicating that this brain area may also be associated (be it partially) with the mental lexicon (e.g., Graves et al., 2007). Converging evidence for the latter was provided by Sahin et al. (2009). Using intracranial recordings, these authors found effects of lexical frequency around 200 ms, grammatical effects around 300 ms, and phonological effects around 400 ms after stimulus onset, all in the LIFG. If this area is indeed involved in all these different processes, interpreting the enhanced activity during L2 speech as either executive control or post-lexical syllabification seems premature. For such a claim to be made, it is necessary to demonstrate that the increase in LIFG activity associated with L2 speech is selectively present for an independent variable targeting control or post-lexical stages (but not for other variables). However, the different activation patterns of the LIFG reported for bilingual speech production stem from overall comparisons between L1 and L2 naming. In our opinion this observation can be associated with any language-related operation. In other words, the available neuroimaging data, which have been used to support both the executive control and the post-lexical accounts of the L2 naming disadvantage, cannot be taken as a conclusive argument.
We see no reason why, for instance, the increased activity in the prefrontal cortex for L2 naming could not be an index of reduced frequency of use, thus presumably with a first impact during lexical processing. Put differently, at present the hemodynamic data are compatible with all three accounts of the L2 speech disadvantage.

In addition, interpreting the data stemming from fMRI studies as directly mapping onto the behavioral differences that have been observed in the literature is problematic: no consistent differences in brain activity are found for early and/or high proficient bilinguals, yet differences are found behaviorally. Assuming that any differences between L1 and L2 become smaller with gains in L2 proficiency and exposure, the lack of hemodynamic differences might be due to the limited temporal sensitivity of the technique: while the overall brain response during L1 and L2 speech may be quite similar for highly proficient bilinguals, subtle differences in time might not be detectable with the slow BOLD response. Thus, while this technique might be useful for highlighting the most pronounced differences between L1 and L2 speech, it is not likely to provide us with a complete picture of the mechanisms responsible for, and the loci affected by, the bilingual disadvantage (but see footnote 1). We will now discuss certain behavioral and electrophysiological studies which seem to be in a better position to uncover the origin of processing differences between L2 and L1 speech production.

What Can Behavioral Studies Tell us about the Origin of the Bilingual Disadvantage?

One way of examining the locus of the L2 disadvantage is to take a closer look at its manifestations beyond naming speed. As already mentioned, the L2 disadvantage has been observed in a variety of measures in speech production. Apart from the increased reaction times in picture naming (e.g., Gollan et al., 2008; Ivanova and Costa, 2008; for a summary see Table 1 in Hanulová et al., 2011, and for an example see Figure 1), decreased performance of L2 production compared to L1 has been demonstrated in several tasks. For example, in a timed verbal fluency task, in which bilinguals were asked to generate as many exemplars as possible of a given semantic category (e.g., fruits), bilingual speakers retrieved fewer category members in L2 than in L1 (Sandoval et al., 2010). If the differences between L1 and L2 occur at a post-lexical level but not within the lexicon itself, it is difficult to explain why word accessibility in general is affected. That is, from a post-lexical perspective it is expected that retrieving post-lexical information in L2 speech should be more effortful and slower than in L1 (e.g., the phonetic realization of the /z/ in zebra for a speaker whose L1 lacks a voiced “s”), but it is not predicted that access to the words themselves should become impaired. Of course, the fact that the task was administered under time pressure might invalidate such an argument: the small delay caused by post-lexical processing difficulties might result in participants producing fewer words when time is limited. Nonetheless, this explanation cannot account for another related phenomenon which has been found to be sensitive to processing difficulties in L2, namely the so-called tip-of-the-tongue (ToT) state (i.e., the feeling of knowing an object’s name while being unable to retrieve it immediately). When bilinguals had to retrieve names of low-frequency objects in an untimed picture naming task, they experienced more ToTs in L2 than in L1 (e.g., Gollan and Silverberg, 2001).
In a similar vein to the argument made above for the verbal fluency data, it is not straightforward why post-lexical processing difficulties should result in a reduced accessibility of words. That is, if the L2 disadvantage only stems from less efficient post-lexical processing, then production might be both quantitatively (slower) and qualitatively (less native-like) modulated, but not absent. Finally, in the standardized Boston Naming Test, L2 speakers scored fewer correct responses than L1 speakers (e.g., Kohnert et al., 1998; Gollan et al., 2007), which again might reflect a reduced accessibility of words in L2 that is not easily accommodated by a post-lexical account of L2 disadvantages. It must be pointed out, though, that these findings by themselves are far from conclusive, and alternative interpretations could be entertained. For instance, all three data patterns could be explained in terms of vocabulary size: words we do not know in our second language cannot be retrieved at all. If this is the case, these data say little about the locus of L1–L2 processing differences. Nevertheless, and as we will now see, the fact that similar disadvantages are observed for early and highly proficient bilinguals speaking in their first and dominant language when compared to monolingual speakers makes an interpretation based on vocabulary size implausible.

Figure 1. Figure taken from Ivanova and Costa (2008). Overall mean picture naming latencies for the Spanish Monolinguals (Group 1), the Spanish–Catalan Bilinguals (Group 2), and the Catalan–Spanish Bilinguals (Group 3) tested in Ivanova and Costa (2008), averaged across high-frequency and low-frequency picture names. Error bars represent the SE.

Basically, all the phenomena we have discussed so far with respect to hampered L2 performance are also found in L1 when comparing bilingual and monolingual speakers (e.g., Gollan et al., 2008; Ivanova and Costa, 2008; Sadat et al., in press). Although the L2 disadvantages do not necessarily have to stem from the same source as those in L1, the correspondence between data patterns and the fact that these are modulated by the same variables (e.g., lexical frequency, cognate status; see below) opens up the possibility of a common origin. If so, this poses difficulties for an account placing differences between L1 and L2 speech solely at a post-lexical level. This is because bilinguals speaking in their dominant language do not have a foreign accent, nor are there reasons to suspect that they should experience difficulties in retrieving the language-specific post-lexical rules of their natively acquired language. While it could be argued that at high levels of proficiency (or for reversed language dominance) a bilingual’s native language gets influenced by the L2 and therefore leads to a post-lexical L1 disadvantage compared to monolinguals, this is not the pattern revealed empirically. That is, the hemodynamic differences between L1 and L2 thought to be related to post-lexical processes such as syllabification and phonotactics are only reliably observed for low proficient or late bilinguals; speakers for whom the weak second language should have no or only a minimal impact on L1. In line with the idea that the L1 disadvantage for bilinguals originates prior to post-lexical stages, Pyers et al. (2009) observed that bimodal English–American Sign Language bilinguals showed more ToT states than English monolinguals.
Although this result does not preclude phonological processing differences between two verbal languages, it is nevertheless interesting to see that a disadvantage similar to that reported for unimodal bilinguals is found even though the non-target language cannot compete post-lexically with the target language. This finding suggests that bilingualism also hampers the processing of modality-independent representations (e.g., “shared lemmas” across modalities) and not just modality-specific representations such as phonemes and syllables. This leaves two prime candidates for the locus of the bilingual disadvantage, namely the conceptual and the lexical level. Regarding the former, Gollan et al. (2005) observed that bilinguals named object pictures more slowly than monolinguals, but both groups classified the object pictures equally rapidly into categories. The authors argued that monolingual and bilingual speakers accessed the objects’ semantic information similarly, and that bilingual disadvantages in naming emerged from post-semantic processing. Taken together, the available evidence comparing L1 speech production between bilinguals and monolinguals indicates that the bilingual disadvantage originates somewhere between the semantic and the phonological level. Consequently, if these differences are of the same sort as those revealed between L1 and L2 in bilinguals, a similar lexical account should be entertained for the latter. Such a unitary account of the bilingual disadvantages merits further testing, since it offers a parsimonious way of disambiguating where first and second language production differ.

At this point, it is important to clarify that differences between L1 and L2 could also be expected at later stages. If we assume that effects percolate from early to later processing stages, bilingualism will affect all levels of linguistic processing. This can be illustrated by yet another measure which has been found to be sensitive to differences between L2 and L1 speech in comparisons between bilingual and monolingual speakers, namely the duration of the actual utterance. In tasks requiring single word or noun phrase production, bilingual speakers exhibited longer articulatory durations than monolinguals. For example, Sadat et al. (in press) observed that bilinguals required more time than monolinguals for the articulation of a bare noun (“car”) or a noun phrase (“the red car”) when naming pictures. This finding illustrates that post-lexical processes such as articulatory programming are also less efficient during bilingual speech production (at least when compared to monolingual speech), suggesting that bilingualism affects language processing across the board. The question then is whether this effect should be considered as indexing independently originated post-lexical processing differences, or whether it is a mere consequence of less efficient processing at the lexical stage. Both options are indeed possible since, aside from articulatory programming and other post-lexical processes, effects in articulatory durations have been associated with lexical processes (e.g., Kello et al., 2000; Gahl, 2008; Bell et al., 2009; Hanulová et al., 2011). Future investigations will have to clarify whether such differences are merely due to spill-over effects from processing difficulties at previous stages or whether they constitute a different and independently contributing cause of the L2 disadvantage.

Having argued that the lexicon is the level where bilingualism starts (but does not stop) exerting its influence, let us now turn to the potential mechanisms behind this disadvantage. Two potential accounts remain that are able to explain why lexical processing (as well as that of later stages) will be harder in a second language: the weaker links account and the executive control account. One set of studies that could be informative for differentiating between these two accounts are those manipulating the degree of cross-language interference induced by the task. If such interference and the consequent engagement of executive control resources are responsible for the bilingual disadvantage, inducing stronger competition should result in a greater disadvantage. Contrary to this prediction, several studies have found that the bilingual disadvantage is diminished for words that bilinguals can translate into their non-dominant language compared to words that they only know in their dominant language (e.g., Gollan and Acenas, 2004; Gollan et al., 2005). This finding is the opposite of what would be predicted by an interference-based account, since words that are not known in the non-target language cannot compete for selection and should thus be easier to retrieve in the target language. Another piece of evidence that is hard to accommodate within an interference-based model is the fact that in some studies in which bilinguals are allowed to use both of their languages, their disadvantage relative to monolinguals is attenuated (e.g., Gollan and Silverberg, 2001). Given that the possibility of using both languages should presumably lead to higher activation levels for the non-target language than in a monolingual setting, more interference should be expected.
And last but not least, in language-switching tasks, where competition across languages is arguably at its maximal level, the disadvantage in L2 with respect to L1 not only disappears but is even reversed in some studies (e.g., Costa and Santesteban, 2004; Christoffels et al., 2007; Gollan and Ferreira, 2009).

Another set of studies that have aimed at discriminating between the two mechanisms are those manipulating lexical frequency, since it has been argued that the weaker links and the executive control accounts make different predictions regarding how this variable should modulate the bilingual disadvantage. The weaker links account claims that because bilinguals have used the words in each of their languages less often than monolinguals, all words would have a slightly lower frequency value for bilinguals than for monolinguals. Due to the logarithmic relationship between lexical frequency and naming speed, this frequency lag might not have a big impact on words that are used very frequently (i.e., high-frequency words such as “car”), while words that are used very rarely (i.e., low-frequency words such as “pestle”) might become almost inaccessible. In this way, the weaker links hypothesis predicts that bilinguals should show larger frequency effects than monolinguals (i.e., a greater disadvantage for low-frequency than for high-frequency words) and that these effects should be larger in the non-dominant than in the dominant language. On the contrary, it has been argued that an executive control account of the bilingual disadvantage should predict greater disadvantages for high-frequency words. The argument here is that words that are used often are assumed to reach higher levels of activation when they act as translation competitors and should thus induce more interference and require more executive control resources.
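The weaker links prediction can be made concrete with a small simulation. The sketch below is purely illustrative: it assumes (this functional form and all parameter values are our own, not taken from any of the studies discussed here) that naming latency is a decelerating power function of cumulative frequency of use, so that the frequency effect flattens out for very frequent words, and that a bilingual who splits daily usage evenly between two languages has used every word about half as often as a monolingual.

```python
def naming_rt(frequency, a=500.0, b=300.0, c=0.5):
    """Hypothetical naming latency (ms) as a decelerating function of
    cumulative frequency of use: steep for rare words, nearly flat for
    frequent ones. All parameter values are arbitrary."""
    return a + b * frequency ** (-c)

usage_share = 0.5  # assumed proportion of daily use per language (the "lag")

for label, freq in [("high-frequency ('car')", 100.0),
                    ("low-frequency ('pestle')", 1.0)]:
    mono = naming_rt(freq)                # monolingual baseline
    bili = naming_rt(freq * usage_share)  # bilingual: same word, half the use
    print(f"{label}: disadvantage = {bili - mono:.0f} ms")
```

Under these arbitrary parameters the same 50% frequency lag costs about 12 ms for the frequent word but over 120 ms for the rare one; that is, the simulated bilingual disadvantage interacts with frequency, which is precisely the larger frequency effect for bilinguals predicted by the weaker links hypothesis.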

Several studies have tested these predictions and provided us with an interesting but complex set of results. For example, when comparing bilinguals’ and monolinguals’ performance in picture naming, it has been found that bilinguals’ longer naming latencies are even more pronounced for low-frequency words (e.g., Gollan et al., 2008; Ivanova and Costa, 2008; but see Sadat et al., in press, and Duyck et al., 2008, in word recognition), thus confirming the predictions of the weaker links hypothesis. However, the predictions regarding the non-dominant language have not been borne out in an equally consistent manner: while Gollan et al. (2008, 2011) found the expected larger frequency effect for non-dominant language picture naming, Ivanova and Costa (2008) failed to replicate this result. On the other hand, one study has been taken to support an executive control account of the bilingual disadvantage, namely that of Sandoval et al. (2010). In a verbal fluency task, these authors observed that bilinguals tended to produce more low-frequency words than monolinguals. That is, in contrast to the increased disadvantage for low-frequency words in picture naming, bilinguals spontaneously produced a proportionally higher number of low-frequency words than monolinguals, a finding that in principle seems to support the executive control account and cannot be explained by weaker links. This would imply that frequency of use is the mechanism responsible for the greater frequency effect in bilinguals in picture naming, while executive control is responsible for the proportionally higher number of low-frequency words produced in the verbal fluency task. Although it is possible that the task plays a decisive role in determining the mechanism behind the bilingual disadvantage, the results of Sandoval et al. (2010) require replication before jumping to such dual-mechanism conclusions.
Also, and aside from these contrasting findings, the manipulation of lexical frequency might not be as useful for disentangling the different accounts of the bilingual disadvantage as originally thought. This is because one could easily conceive of an executive control account predicting that low-frequency words should suffer more from lexical competition than high-frequency words. That is, if we assume that there is always competition in the lexical system, weak representations (such as low-frequency words or words in one’s second language) may be more vulnerable in general to the hampering effects of such competition than strong representations, hence requiring more executive control resources. Thus, if we assume that the potential extra amount of interference coming from high-frequency translation words is smaller than the net amount of interference exerted by all competitor words on a given representation, then weak representations (such as low-frequency words and words in the second language) should still suffer the most and call for a greater amount of executive control resources. Nevertheless, even in such a scenario, executive control would not be exclusively responsible for the bilingual disadvantage, since it would be bound to the frequency values of lexical representations. Moreover, the recruitment of such executive control would not be exclusive to bilinguals, since it would not be triggered by interference from translation competitors, but rather by low lexical frequency.
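The intuition that weak representations suffer more from the same amount of competition can be illustrated with an equally schematic sketch. Here we assume (again purely for illustration, not as a model proposed in the literature) a Luce-ratio-style selection rule in which selection time grows with the total lexical activation relative to the target's own activation; all activation values are invented.

```python
def selection_time(target_act, competitor_acts, k=100.0):
    """Toy selection rule: time scales with total activation divided by
    target activation, so weak targets are hurt more by any competition."""
    total = target_act + sum(competitor_acts)
    return k * total / target_act

within_language = [0.2, 0.2, 0.2]  # assumed within-language competitors
translation = 0.5                  # assumed cross-language translation competitor

for label, target in [("strong target (high-frequency / L1)", 1.0),
                      ("weak target (low-frequency / L2)", 0.3)]:
    cost = (selection_time(target, within_language + [translation])
            - selection_time(target, within_language))
    print(f"{label}: extra cost from the same translation competitor = {cost:.0f}")
```

The identical added competitor costs the weak target roughly three times as much as the strong one, showing why, under such assumptions, low-frequency and L2 words would recruit more control even though the interference itself is constant; the demand for executive control would then be bound to lexical frequency rather than to bilingualism per se.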

In sum, most of the behavioral findings at our disposal show that the bilingual disadvantage is likely to originate at some point during lexical processing and that frequency of use seems to play an important role in this phenomenon. Nevertheless, concluding that the engagement of the executive control network has no influence at all, especially in certain tasks, would be premature, since the data do not conclusively argue against such an involvement. More generally, relating the net result stemming from behavioral data to a particular stage of processing in time is not straightforward. Therefore, in what follows we will examine studies that compare L1 versus L2 naming employing the fine-grained temporal technique of event-related brain potentials (ERPs). Doing so might aid our understanding of how these differences arise in time.

What Can the Use of ERPs Tell us about the Origin of the Bilingual Disadvantage?

In this part we will particularly focus on ERP studies which, aside from manipulating response language, also manipulated the linguistic variables of lexical frequency and cognate status. As seen above, frequency is an interesting variable to explore, since it has been shown to modulate the bilingual disadvantage. Therefore, comparing the time-course between a frequency effect and a language effect will be informative to determine (a) the onset of the bilingual disadvantage (locus), and (b) how similar, both in time and in terms of waveform morphology, the frequency and the language effect are (mechanism). Similarly, cognate status (the amount of phonological overlap between translation words) has been found to affect the bilingual disadvantage: Cognate words elicit fewer ToT states and are named faster and more accurately than non-cognate words (e.g., Costa et al., 2000; Gollan and Acenas, 2004; Kohnert, 2004). Thus, just as for lexical frequency, the comparison between the cognate effect and the language effect in time (using ERPs) should provide some useful insights regarding the locus and potentially the mechanisms producing processing differences between a bilingual’s first and second language.

Strijkers et al. (2010) report two overt picture naming experiments in which both lexical frequency and cognate status were manipulated. Early and highly proficient bilinguals named the same set of pictures either in their L1 (Spanish for a group of Spanish–Catalan bilinguals) or in their L2 (Spanish for a group of Catalan–Spanish bilinguals) while the EEG was recorded simultaneously with the overt response. The authors found an early effect that was practically identical for lexical frequency and cognate status: larger amplitudes were found in a positive-going waveform around 200 ms after picture onset for the more difficult conditions (i.e., low-frequency and non-cognate names; see also Costa et al., 2009, for a similar finding related to semantic interference). Crucially for the present purposes, a between-group comparison showed that the effect of response language (i.e., the point in time where L1 and L2 started to diverge) elicited electrical changes identical to those observed for the frequency and cognate effects, and in the same time window. That is, P2 amplitudes during L2 picture naming were increased compared to L1 picture naming. This finding has important implications for the issue of localizing the bilingual disadvantage: while it is possible to argue that lexical frequency correlates with conceptual variables, for cognate status this does not apply. In addition, a time-course of 200 ms after stimulus onset seems early to reflect post-lexical processing and is more likely to reflect initial stages of lexical access or, at best, lexico-phonological encoding (lexeme retrieval). Therefore, the authors concluded that both the lexical frequency and the cognate effect originate during access to the lexicon, a claim which is in line with evidence from behavioral, hemodynamic, and patient data (e.g., Navarrete et al., 2006; Almeida et al., 2007; Graves et al., 2007; Kittridge et al., 2008; Knobel et al., 2008).
Note that the first electrophysiological differences between L1 and L2 naming were measurable at the P2 component, just as the differences between high- and low-frequency words and between cognates and non-cognates. The fact that response language modulated the same ERP component as lexical frequency and cognate status supports the notion that differences between L1 and L2 speech production originate during lexical processes. Convergent evidence can be found in another ERP study in which cognate status was manipulated within participants and response language between participants in an overt picture naming task (e.g., Christoffels et al., 2007, personal communication, but also visible in their Figure 5). Finally, Strijkers et al. (in preparation) manipulated cognate status and lexical frequency in a study where the same participants named different pictures in both their languages (Spanish L1 or L2 and Catalan L1 or L2). Again, the same P2 component was modulated by all three variables (i.e., response language, cognate status, and lexical frequency), alleviating concerns regarding potential variability due to the between-group comparisons in the previously mentioned studies. In other words, the data collected from overt naming ERP experiments demonstrate that L1 and L2 processing start to diverge during the initial phases of lexical access, confirming the inferences derived from the behavioral data.

Regarding the underlying mechanisms, it is interesting to note the similarity in the ERPs between the language effect and the frequency and cognate effects, respectively. Low-frequency L1 words seem to behave in the same manner as high-frequency L2 words, and the same pattern emerges when comparing the electrophysiological signature of cognate status between languages (see Figure 2). At first sight, this pattern of results supports a frequency-based explanation of the relative difficulty of accessing lexical representations in one’s second language. Nevertheless, one should be cautious here. The fact that the ERP expression of the language effect overlaps perfectly with that of the frequency and cognate effects, namely a more positive brain response for the harder condition (low-frequency, non-cognate, L2), is not inconsistent with these effects being driven by the amount of executive control applied to the lexicon. It all depends on what this P2 modulation indexes. Regarding the functional significance of the P2 component, in a recent monolingual study Strijkers et al. (2011) demonstrated that lexical modulations at the P2 seem to be elicited only when there is a conscious intention to speak. The authors argued that this particular P2 (which they labeled descriptively the production P2, pP2) is engendered by the interaction between goal-directed top-down processes, such as attention, and the level of activation of items within the lexicon. If we use this functional characterization of the pP2 to interpret the language effect in the previously discussed ERP results, both weaker links between concepts and words in L2 compared to L1 and a stronger recruitment of executive control (understood here as proactive attention) during L2 speech compared to L1 speech may contribute to the bilingual disadvantage as indexed by the pP2.
That is, processing differences between L1 and L2 during lexicalization would emerge because L2 representations have lower overall levels of activation within the lexicon than L1 representations. At the same time, the lower activation level of L2 words would call for more executive control (understood here as proactive attentional resources) to retrieve words in L2 compared to L1. Thus, bilinguals would enhance a priori the lexical representations related to the target language (see also Wu and Thierry, 2011), and this top-down enhancement would be greater for less accessible representations such as low-frequency words or words in the second language. In such a scenario, the main source of the bilingual disadvantage would be frequency of use, since the additional attentional resources during L2 speech are invoked precisely to compensate for the reduced accessibility of L2 words and thus speed up rather than slow down behavioral performance. It should be noted that the type of executive control involved would be rather different from that portrayed by Abutalebi and Green (2007), according to whom the extra involvement of the executive control network is directly related to bilingualism (i.e., the purpose of executive control is to reactively resolve interference from translation words, representations specific to bilingual speakers). In contrast, here we propose that the crucial factor determining the degree of executive control engagement is the strength of a given representation, a variable that is general to all speakers and not specific to bilingualism.
That is, bilingualism would exert an indirect influence on the extra-linguistic processes: through the division of speech between two languages and the consequent lower overall strength of lexical representations for a bilingual (especially in the non-dominant language), preparing the system for speech will require more, but not different, proactive attentional resources in an L2 than in an L1, or for a bilingual compared to a monolingual speaker. Note that this hypothesis regarding the cognitive source of the pP2 requires further testing and does not preclude the later engagement of additional reactive executive control resources. The main point we wish to make here is that simpler solutions should be thoroughly considered before embracing theories involving qualitative differences between monolingual and bilingual language processing.

Figure 2. Figure taken from Strijkers et al. (2010). (A) Low-frequency and high-frequency ERPs compared with non-cognate and cognate ERPs at Cz in Experiment 1 (right) and Experiment 2 (left). The frequency ERPs are represented by solid gray and black lines; the cognate ERPs by dotted gray and black lines. Negativity is plotted upward. (B) Between-experiments comparison of the low- and high-frequency ERPs (left), the non-cognate and cognate ERPs (right), and overall naming in L1 versus L2 (bottom). Negativity is plotted upward.

In sum, the electrophysiological evidence provides good grounds to believe that the bilingual disadvantage originates during lexicalization and that reduced frequency of use is the direct cause of the hampering effects on linguistic processing associated with bilingualism. We have also seen that an indirect and additional role of executive control is possible, although some of the available ERP evidence opens up the possibility that this executive control consists of a speaker-general proactive enhancement of weak lexical representations.


The overall picture of the origin of the bilingual disadvantage that emerges when combining the different pieces of evidence briefly reviewed in the present article is the following: both the behavioral and the available ERP evidence indicate an early lexical origin of the processing differences between L1 and L2, although these differences seem to persist until the very moment of articulation. Furthermore, the simplest explanation for the bilingual disadvantage relates to reduced frequency of use, whereas the engagement of additional executive control resources is likely to attenuate rather than increase lexical retrieval difficulties. Because frequency is a variable that affects all speakers, this conclusion entails that speech production differences between L1 and L2 and between monolinguals and bilinguals are essentially a matter of quantity: in this case, “the more the better.”

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


This work was supported by grants from the Spanish government (PSI2008-01191, Consolider Ingenio 2010 CSD2007-00012) and the Catalan government (Consolidado SGR 2009-1521), a predoctoral grant from the Catalan government (FI) to Elin Runnqvist and predoctoral grants from the Spanish government (FPU) to Kristof Strijkers and Jasmin Sadat.


References
Abutalebi, J., Annoni, J. M., Zimine, I., Pegna, A. J., Seghier, M. L., Lee-Jahnke, H., Lazeyras, F., Cappa, S. F., and Khateb, A. (2008). Language control and lexical competition in bilinguals: an event-related fMRI study. Cereb. Cortex 18, 1496–1505.

Abutalebi, J., and Green, D. (2007). Bilingual language production: the neurocognition of language representation and control. J. Neurolinguistics 20, 242–275.

Almeida, J., Knobel, M., Finkbeiner, M., and Caramazza, A. (2007). The locus of the frequency effect in picture naming: when recognizing is not enough. Psychon. Bull. Rev. 14, 1177–1182.

Bell, A., Brenier, J., Gregory, M., Girand, C., and Jurafsky, D. (2009). Predictability effects on durations of content and function words in conversational English. J. Mem. Lang. 60, 92–111.

Colomé, A. (2001). Lexical activation in bilinguals’ speech production: language-specific or language-independent? J. Mem. Lang. 45, 721–736.

Costa, A., Caramazza, A., and Sebastian-Galles, N. (2000). The cognate facilitation effect: implications for models of lexical access. J. Exp. Psychol. Learn. Mem. Cogn. 26, 1283–1296.

Costa, A., Miozzo, M., and Caramazza, A. (1999). Lexical selection in bilinguals: do words in the bilingual’s two lexicons compete for selection? J. Mem. Lang. 41, 365–397.

Costa, A., and Santesteban, M. (2004). Lexical access in bilingual speech production: evidence from language switching in highly-proficient bilinguals and L2 learners. J. Mem. Lang. 50, 491–511.

Costa, A., Strijkers, K., Martin, C., and Thierry, G. (2009). The time course of word retrieval revealed by event-related brain potentials during overt speech. Proc. Natl. Acad. Sci. U.S.A. 106, 21442–21446.

Duyck, W., Vanderelst, D., Desmet, T., and Hartsuiker, R. J. (2008). The frequency effect in second-language visual word recognition. Psychon. Bull. Rev. 15, 850–855.

Flege, J. E., and Eefting, W. (1987). Cross-language switching in stop consonant perception and production by Dutch speakers of English. Speech Commun. 6, 185–202.

Flege, J. E., Schirru, C., and MacKay, I. R. A. (2003). Interaction between the native and second language phonetic subsystems. Speech Commun. 40, 467–491.

Gahl, S. (2008). Time and thyme are not homophones: the effect of lemma frequency on word durations in spontaneous speech. Language 84, 474–496.

Gollan, T. H., and Acenas, L. A. (2004). What is a TOT? Cognate and translation effects on tip-of-the-tongue states in Spanish–English and Tagalog–English bilinguals. J. Exp. Psychol. Learn. Mem. Cogn. 30, 246–269.

More than a decade of research has established that spoken language bilingualism entails subtle disadvantages in lexical retrieval. Specifically, when compared with their monolingual peers, bilinguals who know two spoken languages have more tip-of-the-tongue (TOT) retrieval failures (Gollan & Acenas, 2004; Gollan & Silverberg, 2001), have reduced category fluency (Gollan, Montoya, & Werner, 2002; Portocarrero, Burright, & Donovick, 2007; Rosselli et al., 2000), name pictures more slowly (Gollan, Montoya, Fennema-Notestine, & Morris, 2005; Gollan, Montoya, Cera, & Sandoval, 2008; Ivanova & Costa, 2008), and name fewer pictures correctly on standardized naming tests such as the Boston Naming Test (Kohnert, Hernandez, & Bates, 1998; Roberts, Garcia, Desrochers, & Hernandez, 2002; Gollan, Fennema-Notestine, Montoya, & Jernigan, 2007). Crucially, bilingual naming disadvantages are observed even when bilinguals are tested exclusively in their dominant language (e.g., Gollan & Acenas, 2004; Gollan, Montoya et al., 2005) or first-learned language (Ivanova & Costa, 2008). However, bilinguals are not disadvantaged on all tasks. For example, bilinguals classify pictures (as either human-made or natural kinds) as quickly as monolinguals (Gollan, Montoya et al., 2005), and bilingual disadvantages also do not generalize to all language tasks (e.g., bilinguals are not disadvantaged in the production of proper names; Gollan, Bonanni, & Montoya, 2005). On some nonlinguistic tasks, bilinguals exhibit significant processing advantages; for example, spoken language bilinguals are faster to resolve conflict between competing responses (see Bialystok, Craik, Green, & Gollan, 2009, for review).

Gollan and colleagues propose a frequency-lag account of slowed naming in bilinguals (also known as the “weaker links” account), which draws on the observation that by virtue of speaking each of the languages they know only part of the time, bilinguals necessarily speak each of their languages less often than do monolinguals (Gollan et al., 2008, 2011). Because bilinguals use words in each language less frequently, lexical representations in both languages will have accumulated less practice relative to monolinguals. Therefore, bilinguals are hypothesized to exhibit slower lexical retrieval times because of the same mechanism that leads to frequency effects in monolinguals, that is, words that are used frequently are more accessible than words that are used infrequently (e.g., Forster & Chambers, 1973; Oldfield & Wingfield, 1965). Because small differences in frequency of use can have profound effects on lexical accessibility at the lower end of the frequency range (Murray & Forster, 2004), the frequency-lag hypothesis predicts that the bilingual disadvantage should be particularly large for low-frequency words (and relatively small for high-frequency words). Stated differently, bilinguals should show larger frequency effects than monolinguals. Similarly, within bilinguals’ two languages, because the nondominant language is used less frequently than the dominant language, the frequency-lag hypothesis also predicts that bilinguals should exhibit larger frequency effects in their nondominant than in their dominant language. Several studies have confirmed this prediction for visual word recognition (e.g., Duyck, Vanderelst, Desmet, & Hartsuiker, 2008) and picture naming (Gollan et al., 2008, 2011; Ivanova & Costa, 2008).
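The arithmetic behind this prediction can be sketched with a toy accessibility function. The hyperbolic form and all numbers below are purely our illustrative assumptions (not a model proposed by Gollan and colleagues), but they capture the key property that a fixed proportional reduction in use costs far more at the low end of the frequency range:

```python
# Illustrative-only sketch of the frequency-lag logic. The functional form
# (RT = base + scale / frequency-of-use) and the parameter values are
# assumptions made for exposition.
def naming_rt(freq_per_million, base=500.0, scale=2000.0):
    """Hypothetical naming latency (ms) as a function of frequency of use."""
    return base + scale / freq_per_million

def frequency_effect(f_low, f_high):
    """Low- minus high-frequency naming latency (ms)."""
    return naming_rt(f_low) - naming_rt(f_high)

# A monolingual with full usage counts vs. a bilingual whose use of this
# language is split in half (illustrative frequencies, per million words):
monolingual_effect = frequency_effect(20, 200)   # 600 - 510 = 90 ms
bilingual_effect = frequency_effect(10, 100)     # 700 - 520 = 180 ms
```

Under this assumed form, halving every word's usage count doubles the frequency effect: the low-frequency word is slowed much more than the high-frequency word, which is exactly the pattern of larger frequency effects in bilinguals (and in the nondominant language) that the hypothesis predicts.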

In this study, we investigated whether the frequency-lag hypothesis holds for bimodal bilinguals who have acquired a spoken language, English, and a signed language, American Sign Language (ASL). Although deaf ASL signers are bilingual in English (to varying degrees), we reserve the term “bimodal bilingual” for hearing ASL–English bilinguals who acquired spoken English primarily through audition and without special training. Using a picture-naming task, we explored whether bimodal bilinguals exhibit slower lexical retrieval times for spoken words and a larger frequency effect as compared with English-speaking monolinguals. Unlike spoken language bilinguals (i.e., unimodal bilinguals), bimodal bilinguals do not necessarily divide their language use between two languages because they can—and often do—code-blend, that is, produce ASL signs and English words at the same time (Bishop, 2006; Emmorey, Borinstein, Thompson, & Gollan, 2008; Petitto et al., 2001). Code-blending is a form of language mixing in which, typically, one or more ASL signs accompany an English utterance (in this case, English is the Matrix language; see Emmorey et al., 2008, for discussion). Recently, Pyers, Gollan, and Emmorey (2009) found that bimodal bilinguals exhibited more lexical retrieval failures (TOTs) than monolingual English speakers and the same TOT rate as Spanish–English bilinguals, suggesting that bimodal bilinguals are affected by frequency lag. However, bimodal bilinguals also exhibited slightly better lexical retrieval success than the unimodal bilinguals on other measures (e.g., they produced more correct responses and reported fewer negative or “false” TOTs). Pyers et al. (2009) attributed this in-between pattern of lexical retrieval success for bimodal bilinguals to more frequent use of English, possibly due to the unique ability to code-blend. Thus, the predictions of the frequency-lag hypothesis may not hold for English for bimodal bilinguals.

In addition, we investigated whether bimodal bilinguals exhibit slower lexical retrieval times for ASL signs and a larger ASL frequency effect as compared with deaf signers who use ASL as their primary language. Following Emmorey et al. (2008), we suggest that ASL is the nondominant language for the great majority of bimodal bilinguals, even for Children of Deaf Adults (CODAs) who acquire ASL from birth within deaf signing families. Although CODAs may be ASL dominant as young children, English rapidly becomes the dominant language due to immersion in an English-speaking environment outside the home. Such switched dominance also occurs for many spoken language bilinguals living in the United States (e.g., for Spanish–English bilinguals; Kohnert, Bates, & Hernandez, 1999). In contrast, ASL can be argued to be the dominant language for deaf signers, who sign ASL more often than they speak English. If these assumptions about language dominance are correct, the frequency-lag hypothesis makes the following predictions:

  1. Bimodal bilinguals will exhibit slower ASL-naming times than deaf signers and slower English naming times than monolingual English speakers.

  2. Bimodal bilinguals will exhibit a larger ASL frequency effect than deaf ASL signers and a larger English frequency effect than monolingual English speakers.

  3. Bimodal bilinguals will exhibit a larger frequency effect for ASL than for English.

  4. Late bilinguals (those who acquired ASL in adulthood) will exhibit the largest ASL frequency effect.

Unfortunately, no large-scale sign frequency corpora (i.e., with millions of tokens in the corpus) are currently available for ASL or, to our knowledge, for any sign language. Psycholinguistic research has relied on familiarity ratings by signers to estimate lexical frequency (e.g., Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008; Emmorey, 1991). For spoken language, familiarity ratings are highly correlated with corpus-based frequency counts (Gilhooly & Logie, 1980), are consistent across different groups of subjects (Balota, Pilotti, & Cortese, 2001), and are sometimes better predictors of lexical decision latencies than objective frequency measures (Gernsbacher, 1984; Gordon, 1985). Therefore, we relied on familiarity ratings as a measure of lexical frequency in ASL.

Recently, Johnston (2012) questioned whether familiarity ratings accurately reflect the frequency of use or occurrence of lexical signs. He found only a partial overlap between sign-familiarity ratings for British Sign Language (BSL; from Vinson, Cormier, Denmark, & Schembri, 2008) and the frequency ranking in the Australian Sign Language (Auslan) Archive and Corpus (Johnston, 2008; note that BSL and Auslan are argued to be dialects of the same language; Johnston, 2003). After adjusting for glossing differences between BSL and Auslan, Johnston (2012) found that of the 83 BSL signs with a very high-familiarity rating (6 or 7 on a 7-point scale), only 38 (12.5%) appeared in the 300 most frequent lexical signs in the Auslan corpus and only 14 (4.7%) appeared in the top 100 signs. Johnston (2012, p. 187) suggested that a subjective measure of familiarity for a lexical sign may be very high even though it may actually be a low-frequency sign because “it is very citable and ‘memorable’ (e.g., SCISSORS, ELEPHANT, KANGAROO), perhaps because there are relatively few other candidates contesting for recognition, that is, due to apparently modest lexical inventories.” However, if the above predictions of the frequency-lag hypothesis hold for ASL, then it will suggest that familiarity ratings are indeed an accurate reflection of frequency of use for signed languages and that the lack of overlap found by Johnston (2012) may be due to deficiencies in the corpus data—for example, limited tokens (thousands rather than millions), genre and register biases, etc.

To test these predictions, we compared English picture-naming times for bimodal bilinguals with those of English monolingual speakers and ASL picture-naming times with those of deaf ASL signers. In addition to naming pictures in English and in ASL, the bimodal bilinguals were also asked to name pictures with a code-blend (i.e., producing an ASL sign and an English word at the same time). The code-blend comparisons are reported separately in Emmorey, Petrich, and Gollan (2012). Here we report the group comparisons for lexical retrieval times and lexical frequency effects when producing English words or ASL signs.



A total of 40 hearing ASL–English bilinguals (27 female), 28 deaf ASL signers (18 female), and 21 monolingual English speakers (14 female) participated. We included both early ASL–English bilinguals (CODAs) and late bilinguals who learned ASL through instruction and immersion in the Deaf community. Two bilinguals (one early and one late) and three deaf signers were eliminated from the analyses because of high rates of fingerspelled, rather than signed, responses (>2 SD above the group mean).

Table 1 provides participant characteristics obtained from a language history and background questionnaire. The early bimodal bilinguals (N = 18) were exposed to ASL from birth, had at least one deaf signing parent, and eight were professional interpreters. The late bimodal bilinguals (N = 20) learned ASL after age 6 (mean = 16 years; range: 6–26 years), and 13 were professional interpreters. All bimodal bilinguals used ASL and English on a daily basis. Self-ratings of ASL proficiency (1 = “not fluent” and 7 = “very fluent”) were significantly different across participant groups, F(2, 63) = 4.632, p = .014, ηp2 = .14, with lower ratings for the late bilinguals (mean = 5.7, SD = .8) as compared with both the deaf signers (mean = 6.5, SD = 0.7), Tukey’s HSD, p = .011, and the early bilinguals (mean = 6.4, SD = 0.8), Tukey’s HSD, p = .056. In addition, the bimodal bilinguals rated their proficiency in English as higher than in ASL, F(1,26) = 24.451, MSE = .469, p < .001, ηp2 = .48. There was also an interaction between participant group and language rating, F(1,26) = 5.181, MSE = .469, p = .031, ηp2 = .17, such that the early and late bilinguals did not differ in their English proficiency ratings, but the late bilinguals rated their ASL proficiency lower than did the early bilinguals, t(35) = 2.397, p = .022.

Table 1

Means and standard deviations for participant characteristics

Among the deaf signers, 21 were native signers exposed to ASL from birth and four were near-native signers who learned ASL in early childhood (before age 7). The monolinguals were all native English speakers who had not been regularly exposed to more than one language before age six and had not completed more than four semesters of foreign language study (the minimum University requirement). Self-ratings of English proficiency were not collected for these participants.


Participants named 120 line drawings of objects taken from the CRL International Picture Naming Project (Bates et al., 2003; Székely et al., 2003). Bimodal bilinguals named 40 pictures in ASL only, 40 pictures in English only, and 40 pictures with an ASL–English code-blend (data from the code-blend condition are reported in Emmorey et al., 2012). The pictures were counter-balanced across participants, such that all pictures were named in each language condition, but no participant saw the same picture twice. The deaf participants named all 120 pictures in ASL, and the hearing English monolingual speakers named all pictures in English. For English, the pictures all had good name agreement based on Bates et al. (2003): mean percentage of target response = 91% (SD = 13%). For ASL, the pictures were judged by two native deaf signers to be named with lexical signs (English translation equivalents), rather than by fingerspelling, compound signs, or phrasal descriptions, and these signs were also considered unlikely to exhibit a high degree of regional variation. Half of the pictures had low-frequency English names (mean ln-transformed CELEX frequency = 1.79, SD = 0.69) and half had high-frequency names (mean = 4.04, SD = 0.74). Our lab maintains a database of familiarity ratings for ASL signs based on a scale of 1 (very infrequent) to 7 (very frequent), with each sign rated by at least 8 deaf signers (the average number of raters per sign was 15). The mean ASL sign-familiarity rating for the ASL translations of the low-frequency words was 2.93 (SD = 0.97) and 3.87 (SD = 1.23) for the high-frequency words. For ease of exposition, we will refer to these sets as low- and high-frequency signs, rather than as low- and high-familiarity signs.
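The counterbalancing described above can be sketched as a simple rotation. The picture labels and set partition below are hypothetical (the actual item lists are not given here), but the scheme has the stated properties: every picture is named in each language condition across participants, and no participant sees a picture twice.

```python
# Hypothetical sketch of the counterbalancing scheme; item assignments are
# assumed, not taken from the study's actual lists.
PICTURES = [f"pic{i:03d}" for i in range(120)]
CONDITIONS = ["ASL", "English", "code-blend"]

def lists_for(participant_index):
    """Rotate three 40-picture sets through the three naming conditions,
    shifting the assignment by one condition per participant."""
    sets = [PICTURES[0:40], PICTURES[40:80], PICTURES[80:120]]
    rotation = participant_index % 3
    return {CONDITIONS[(i + rotation) % 3]: sets[i] for i in range(3)}
```

Across any three consecutive participants, each picture appears once in each condition, while each individual participant names all 120 pictures exactly once.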


Pictures were presented using Psyscope Build 46 (Cohen, MacWhinney, Flatt, & Provost 1993) on a Macintosh PowerBook G4 computer with a 15-inch screen. English naming times were recorded using a microphone connected to a Psyscope response box. ASL-naming times were recorded using a pressure release key (triggered by lifting the hand) that was also connected to the Psyscope response box. Participants initiated each trial by pressing the space bar. Each trial began with a 1,000-ms presentation of a central fixation point “+” that was immediately replaced by the picture. The picture disappeared when the voice-key (for English) or the release-key (for ASL) triggered. All testing sessions were videotaped.

Participants were instructed to name the pictures as quickly and accurately as possible. Bimodal bilinguals named the pictures in three blocks: English only, ASL only, or ASL and English simultaneously (results from the last condition are presented in Emmorey et al., 2012). The order of language blocks was counter-balanced across participants. Within each block, half of the pictures had low-frequency and half had high-frequency words/signs, randomized within each block. Six practice items preceded each naming condition.


Reaction times (RTs) that were 2 SDs above or below the mean for each participant for each language were eliminated from the RT analyses. This procedure eliminated 5.3% of the data for the early bilinguals, 5.6% for late bilinguals, 4.0% for deaf signers, and 4.5% for English monolinguals.
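A minimal sketch of this trimming step, assuming the trial RTs have already been grouped per participant and per language (the function name and data are illustrative):

```python
from statistics import mean, stdev

def trim_rts(rts, n_sd=2.0):
    """Drop RTs more than n_sd standard deviations above or below this
    participant's mean for the given language, as in the exclusion
    procedure described above."""
    m, sd = mean(rts), stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= n_sd * sd]

# Illustrative RTs (ms) for one participant in one language:
raw = [680, 690, 700, 710, 720, 705, 695, 715, 685, 1500]
kept = trim_rts(raw)   # the 1500 ms outlier falls outside 2 SD and is removed
```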

English responses in which the participant produced a vocal hesitation (e.g., “um”) or in which the voice-key was not initially triggered were eliminated from the RT analysis, but were included in the error analysis. ASL responses in which the participant paused or produced a manual hesitation gesture (e.g., UM in ASL) after lifting their hand from the response key were also eliminated from the RT analysis, but were included in the error analysis. Occasionally, a signer produced a target sign (e.g., SHOULDER) with their nondominant hand after lifting their dominant hand from the response key; such a response was considered correct, but was not included in the RT analysis. These procedures eliminated 2.4% of the English data and 0.5% of the ASL data.

Only correct responses were included in the RT analyses. Responses that were acceptable variants of the intended target name (e.g., Oreo instead of cookie, COAT instead of JACKET, or fingerspelled F-O-O-T instead of the sign FOOT) were considered correct and were included in both the error and RT analyses. Fingerspelled responses were not excluded from the analysis because (a) fingerspelled signs constitute a nontrivial part of the ASL lexicon (Brentari & Padden, 2001; Padden, 1998) and (b) fingerspelled signs are in fact the correct response for some items, because either the participant always fingerspells that name or the lack of context in the picture-naming task promotes use of a fingerspelled name over the ASL sign (e.g., the names of body parts were often fingerspelled). The mean percentage of fingerspelled responses was 15% for the early bilinguals, 15% for the late bilinguals, and 9% for the deaf signers. Nonresponses and “I don’t know” responses were considered errors.

We did not directly compare RTs for ASL and English due to the confounding effects of manual versus vocal articulation. For each language, we conducted a 2×3 analysis of variance (ANOVA) with frequency (high, low) and participant group as the independent variables. Reaction time and error rate were the dependent variables. We report ANOVAs for both participant means (F1; collapsing across items) and item means (F2; collapsing across participants).
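The two aggregations underlying the F1 and F2 analyses can be sketched as follows. The trial rows are hypothetical, and the actual ANOVAs would then be run on these cell means (e.g., with a statistics package):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial-level rows: (participant, item, frequency, rt_ms).
trials = [
    ("p1", "dog", "high", 800), ("p1", "key", "low", 950),
    ("p2", "dog", "high", 900), ("p2", "key", "low", 1100),
]

def cell_means(trials, unit):
    """Collapse trials to one mean RT per analysis unit x frequency cell.
    unit=0 keys on participant (collapsing across items: by-subjects, F1);
    unit=1 keys on item (collapsing across participants: by-items, F2)."""
    cells = defaultdict(list)
    for row in trials:
        cells[(row[unit], row[2])].append(row[3])
    return {cell: mean(rts) for cell, rts in cells.items()}

f1_means = cell_means(trials, unit=0)  # entered into the by-subjects ANOVA
f2_means = cell_means(trials, unit=1)  # entered into the by-items ANOVA
```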


Figure 1 presents the mean RT and error rate data for ASL responses. Reaction times were significantly faster for high- than for low-frequency signs, F1(1,59) = 27.571, MSE = 20,386, p < .001, ηp2 = .32; F2(1,117) = 16.995, MSE = 122,775, p < .001, ηp2 = .13. There was also a main effect of participant group, F1(2,59) = 5.258, MSE = 202,694, p = .008, ηp2 = .15; F2(1,117) = 111.886, MSE = 52,674, p < .001, ηp2 = .49. Post hoc Tukey’s HSD tests showed that the deaf participants named pictures significantly more quickly than both the early and late bilinguals (both p values <.05), but the early and late bilinguals did not differ from each other in RT (p = .974). Crucially, there was also a significant interaction between participant group and sign frequency, such that bilinguals exhibited larger frequency effects than deaf signers, F1(2,59) = 4.891, MSE = 20,386, p = .011, ηp2 = .14; F2(1,117) = 13.799, MSE = 52,674, p = .004, ηp2 = .11.

Figure 1

(A) ASL naming latencies and (B) error rates are greater for bimodal bilinguals than for deaf signers and for low-frequency signs than for high-frequency signs. Error bars indicate standard error of the mean. ASL = American Sign Language; RT = reaction time.

As illustrated in Figure 2A, the frequency effect was smallest for those most proficient in ASL (the ASL-dominant deaf signers) and largest for those least proficient in ASL (the English-dominant late bilinguals), as we predicted. The size of the frequency effect was calculated for each participant as follows: [low-frequency mean RT − high-frequency mean RT]/total mean RT. The proportionally adjusted frequency effects reveal that the ASL frequency effect was more than 3 times as large in early bilinguals (mean = 12.8%) and late bilinguals (mean = 15.6%) as in deaf signers (mean = 3.8%), t(40) = 3.788, p < .001 and t(42) = 3.773, p < .001, respectively. Although numerically in the predicted direction, the ASL frequency effect was not significantly larger for late bilinguals than for early bilinguals, t(36) < 1.
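The proportional adjustment defined above is a one-line computation per participant. A sketch with made-up latencies (not values from the study):

```python
def proportional_frequency_effect(low_rt, high_rt, total_rt):
    """[low-frequency mean RT - high-frequency mean RT] / total mean RT,
    computed per participant as described above."""
    return (low_rt - high_rt) / total_rt

# Illustrative per-participant condition means (ms):
effect = proportional_frequency_effect(low_rt=900.0, high_rt=780.0, total_rt=840.0)
# effect is about 0.143, i.e. a 14.3% frequency effect for this participant
```

Dividing by the participant's overall mean RT expresses the frequency effect as a proportion of naming speed, which makes effect sizes comparable across groups that differ in baseline RT.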

Figure 2

The size of the ASL frequency effect for (A) naming latencies and (B) error rates is larger for bimodal bilinguals than for deaf signers. Error bars indicate standard error of the mean. ASL = American Sign Language; RT = reaction time.

The results for error rates generally mirrored those for reaction time. Error rates were significantly lower for high- than for low-frequency signs, F1(1,59) = 68.773, MSE = .003, p < .001, ηp2 = .54; F2(1,117) = 15.504, MSE = .032, p < .001, ηp2 = .12, and there was a main effect of participant group, F1(2,59) = 4.102, MSE = .004, p = .021, ηp2 = .12; F2(1,117) = 6.242, MSE = .010, p = .014, ηp2 = .05. Post hoc Tukey’s HSD tests revealed that deaf signers had significantly lower error rates than the late bilinguals (p = .016), and the early bilinguals did not differ from either the deaf participants or the late bilinguals (both p values >.285). Again, there was a significant interaction between frequency and participant group, F1(2,59) = 3.449, MSE = .003, p = .038, ηp2 = .10; F2(1,117) = 3.708, MSE = .010, p = .057, ηp2 = .03.

As shown in Figure 2B, the size of the ASL frequency effect for accuracy mirrored that for response time. Frequency effect size was calculated for each participant as follows: [percent correct for low-frequency signs − percent correct for high-frequency signs]/total percent correct. The ASL frequency effect was greater for early bilinguals (mean = 9.1%) and late bilinguals (mean = 10.9%) as compared with the deaf signers (mean = 4.9%), t(40) = 1.919, p = .067 and t(42) = 3.075, p < .005, respectively. The size of the ASL frequency effect did not differ for the early and late bilinguals, t(36) < 1.


Figure 3 presents the mean RTs and error rates for picture naming in English. All speakers named pictures with high-frequency names more quickly than pictures with low-frequency names, F1(1,56) = 20.785, MSE = 3,364, p < .001, ηp2 = .27; F2(1,117) = 3.850, MSE = 77,969, p = .052, ηp2 = .03. Naming times did not differ significantly across groups in the subjects analysis, F1(2,56) = 1.044, MSE = 366,263, p = .359, ηp2 = .04, but the items analysis was significant, F2(1,117) = 41.542, MSE = 22,416, p < .001, ηp2 = .26. By items, late bilinguals were faster than early bilinguals, t(118) = 4.517, p < .001, and monolinguals, t(119) = 6.499, p < .001 (see Note 2). There was no interaction between frequency and participant group for English, F1(2,56) = 1.184, MSE = 3,364, p = .314, ηp2 = .04; F2(1,117) = 0.018, MSE = 22,416, p = .895, ηp2 = .00.

Figure 3

(A) English naming latencies and (B) error rates are greater for low-frequency words than for high-frequency words, but lexical frequency does not interact with participant group. Error bars indicate standard error of the mean. ASL = American Sign Language; RT = reaction time.

For error rate, there was a significant main effect of frequency by subjects, but not by items: error rates were lower for high-frequency than for low-frequency words, F1(1,56) = 17.086, MSE = .002, p < .001, ηp2 = .23; F2(1,117) = 2.212, MSE = .025, p = .140, ηp2 = .02. There were no significant differences in error rate across participant groups, F1(2,56) = 1.938, MSE = .003, p = .154, ηp2 = .07; F2(1,117) = 0.855, MSE = .004, p = .357, ηp2 = .01, and frequency did not interact with participant group, F1(2,56) = 1.169, MSE = .002, p = .318, ηp2 = .04; F2(1,117) = 0.332, MSE = .004, p = .565, ηp2 = .003.

The Size of the Frequency Effect in ASL Versus English

For the bimodal bilinguals, we compared the size of the frequency effect for ASL and for English. As predicted, the size of the frequency effect was larger for ASL than English for RT: 12.8% versus 3.6% for early bilinguals and 15.6% versus 6.0% for late bilinguals, t(35) = 3.021, p = .005, and t(39) = 2.987, p = .005, respectively. Similarly for error rate, the size of the frequency effect was larger for ASL than English: 9.1% versus 1.9% for early bilinguals and 10.9% versus 3.4% for late bilinguals, t(35) = 2.681, p = .011, and t(39) = 2.613, p = .013, respectively.

Error Rates for Deaf ASL Signers Versus Monolingual English Speakers

Although analyses directly comparing ASL and English naming latencies suffer from possible confounds of manual versus vocal articulation (e.g., the hand is a much larger and slower articulator than the tongue and lips), comparative analyses with error rate data do not. Therefore, we conducted a 2×2 ANOVA with participant group (deaf ASL signers, English monolingual speakers) as a between-group language factor and frequency (high-frequency items, low-frequency items) as a within-group factor. ASL-naming errors for deaf signers (mean = 6.2%) did not differ significantly from English naming errors for hearing monolingual speakers (mean = 6.6%), both Fs < 1. As expected, error rates for low-frequency items (mean = 8.6%) were greater than for high-frequency items (mean = 4.2%), F1(1,43) = 42.164, MSE = .001, p < .001, ηp2 = .50; F2(1,118) = 7.883, MSE = .016, p = .006, ηp2 = .06. In contrast to the pattern of results for the bimodal bilinguals, there was no interaction between language group and lexical frequency, indicating that similar-sized frequency effects occur when ASL is the dominant language (for signers) and when English is the dominant language (for speakers), F1(1,43) = .310, MSE = .001, p = .581, ηp2 = .007; F2(1,118) = 0.062, MSE = .007, p = .804, ηp2 = .001.
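Descriptively, the 2×2 interaction tested above is a difference of differences: the frequency effect (low minus high error rate) in each group, and then the gap between the two groups' effects. A near-zero gap is what "no interaction" looks like at the level of cell means. The percentages below are illustrative stand-ins, not the study's cell means.

```python
# Descriptive check of a 2 x 2 (group x frequency) interaction:
# compute each group's frequency effect, then their difference.
# Error rates (%) are hypothetical, not the study's cell means.

def freq_effect(low_err, high_err):
    """Frequency effect on accuracy: low- minus high-frequency error rate."""
    return low_err - high_err

deaf_effect = freq_effect(8.3, 4.1)      # deaf ASL signers
mono_effect = freq_effect(8.9, 4.3)      # English monolingual speakers
interaction = deaf_effect - mono_effect  # near zero -> no interaction
print(round(deaf_effect, 1), round(mono_effect, 1), round(interaction, 1))
```

This is only the descriptive contrast; the inferential test (the F for the interaction term) additionally accounts for the within- and between-subject error variance, as in the ANOVA reported in the text.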

Interpreters Versus Non-interpreters

Finally, to examine whether interpreting experience had an effect on naming latencies, error rates, or the size of the frequency effect, we conducted ANOVAs for English and for ASL comparing the performance of professional ASL interpreters (N = 21) and bimodal bilinguals who are not interpreters (N = 17). The pattern of main effects of frequency and of interactions between language and frequency reported above did not change. In addition, there were no main effects of group, and crucially, interpreting group did not interact with any variable, indicating that interpreting experience does not influence our findings (all p values > .125). For interpreters, the mean ASL and English naming latencies were 1,209 ms (SE = 92) and 974 ms (SE = 45), and their mean ASL and English error rates were 7.9% (SE = 1.2%) and 4.6% (SE = 0.9%), respectively. For non-interpreters, the mean ASL and English naming latencies were 1,122 ms (SE = 83) and 957 ms (SE = 50), and their mean error rates were 10.7% (SE = 1.3%) and 5.9% (SE = 1.0%), respectively.


The results reported here demonstrate that English is the dominant language for both early bimodal bilinguals (CODAs) and late bimodal bilinguals. Both bilingual groups rated their proficiency in English as higher than their proficiency in ASL and made more naming errors in ASL than in English. The results also supported the predictions of the frequency-lag hypothesis for ASL (the nondominant language), but not for English (the dominant language). Specifically, for ASL, bimodal bilinguals were slower and less accurate when naming pictures than native deaf signers, and they exhibited a larger frequency effect (Figures 1 and 2). In addition, bimodal bilinguals exhibited a larger frequency effect in ASL than in English. The larger ASL frequency effect for bimodal bilinguals likely reflects the fact that hearing ASL–English bilinguals use ASL significantly less often than spoken English, and significantly less often than deaf signers for whom ASL is the primary language. As the frequency-lag hypothesis predicts, less frequent ASL use leads to slower lexical retrieval, particularly for low-frequency signs, for bimodal bilinguals.

Unlike previous studies in which spoken language bilinguals exhibited frequency-lag effects in their dominant language relative to monolinguals, bimodal bilinguals exhibited no evidence of frequency lag in English relative to English-speaking monolinguals. They did not name pictures more slowly, and there was no group difference in the size of the frequency effect (Figure 3). These findings support our proposal that code-blending might help bimodal bilinguals avoid the effects of frequency lag on the dominant language (Pyers et al., 2009). That is, unlike unimodal bilinguals who must switch between languages, bimodal bilinguals can (and often do) produce both English words and ASL signs at the same time, which boosts the frequency of usage for both languages. In addition, ASL signs are quite frequently produced with English “mouthings,” in which the mouth movements that accompany a manual sign are derived from the pronunciation of the translation equivalent in English (Boyes-Braem & Sutton-Spence, 2001; Nadolske & Rosenstock, 2007). Recent evidence from picture-naming and word-translation tasks with British Sign Language suggests that the mouthings that accompany signs have separate lexico-semantic representations that are based on the English production system (Vinson, Thompson, Skinner, Fox, & Vigliocco, 2010). The bilingual disadvantage for dominant language production may be reduced for bimodal bilinguals because code-blending and mouthing may allow them to produce words in their dominant language nearly as frequently as monolinguals. Thus, the frequency of accessing English during ASL production might ameliorate the bilingual disadvantage for English for ASL–English bilinguals, in contrast to Spanish–English bilinguals who are also English dominant but for whom mouthing and code-blending are not possible.

The fact that deaf ASL signers and monolingual English speakers had similar error rates when naming the same pictures indicates that the difference observed for bimodal bilinguals is not due to a lack of ASL signs for the target pictures or to some other property of the ASL-naming condition. In addition, the fact that language group did not interact with lexical frequency further supports the frequency-lag hypothesis. Assuming that ASL usage by native deaf signers is roughly parallel to spoken English usage by hearing speakers, the size of the frequency effect should be roughly the same for both groups.

Although self-ratings of ASL proficiency were significantly lower for the late bilinguals, late bilinguals were not significantly slower or less accurate than early bilinguals when naming pictures in ASL. This pattern suggests that the two groups of bilinguals were relatively well matched in ASL skill and that the late bilinguals may have underestimated their proficiency. Baus, Gutiérrez-Sigut, Quer, and Carreiras (2008) also found no difference in naming latencies or accuracy between early and late deaf signers in a picture-naming study with Catalan Sign Language (LSC), and their participants were all highly skilled signers. It is possible that with sufficient proficiency, age of acquisition may have little effect on the speed or accuracy of lexical retrieval in picture naming (even for relatively low-frequency targets).

In addition, we found no significant difference between interpreters and non-interpreters in lexical retrieval performance (naming latencies or error rates). This result is consistent with the findings of Christoffels, de Groot, and Kroll (2006), who reported that Dutch–English interpreters outperformed highly proficient bilinguals (Dutch–English teachers) on working memory tasks (e.g., reading span), but not on lexical retrieval tasks (e.g., picture naming). Christoffels et al. (2006) argue that performance on language tasks is determined by proficiency more than by general cognitive resources. Thus, if the non-interpreters in our study were highly proficient signers, then no group difference between interpreters and non-interpreters in lexical retrieval performance is expected.

Finally, these results validate the use of familiarity ratings as a substitute for corpus-based frequency counts for signed languages in psycholinguistic studies. Currently, there is no available ASL corpus, and one of the largest sign language corpora, the Auslan Corpus, contains only 63,436 signs produced by 109 different signers (as of March 2011; Johnston, 2012). Although a corpus of over 60,000 annotated signs is an impressive achievement, it does not match the size of the corpora available for English and other spoken languages (e.g., nearly 18 million for English CELEX; Baayen, Piepenbrock, & Gulikers, 1995). But perhaps more importantly for the use of such a corpus in psycholinguistic studies, genre biases may be exaggerated in a corpus of this size. For example, the Auslan Corpus contains clips of many signers retelling the same narratives from prepared texts, cartoons, and picture-books (e.g., The Hare and the Tortoise; The Boy who Cried Wolf; Frog, Where are you?). Thus, the signs BOY, DOG, FROG, WOLF, and TORTOISE all appear within the top 50 most frequent signs, and Johnston (2012) recognizes this finding as a clear genre and text bias within the Auslan Corpus. Given such biases, it is not too surprising that little overlap was observed between the familiarity rankings from Vinson et al. (2008) and the frequency rankings from the Auslan Corpus. Nonetheless, as argued by Johnston (2012), familiarity ratings are likely to be problematic for classifier constructions and pointing signs (e.g., pronouns) whose use is dependent upon a discourse context.

In sum, the pattern of frequency effects in English and ASL for bimodal bilinguals, deaf ASL signers, and English monolingual speakers argues for the following: (a) English is the dominant language for bimodal bilinguals, even when ASL is acquired natively from birth; (b) less frequent use of ASL by bimodal bilinguals leads to a larger frequency effect for ASL, supporting the frequency-lag hypothesis; and (c) the frequent use of mouthing and/or code-blending may shield bimodal bilinguals from the lexical slowing that occurs for spoken language bilinguals in their dominant language.


Funding

National Institutes of Health (NIH) (HD047736 to K.E. and San Diego State University and HD050287 to T.G. and the University of California San Diego).

Conflicts of Interest

No conflicts of interest were reported.


Acknowledgments

The authors thank Lucinda O’Grady, Helsa Borinstein, Rachael Colvin, Danielle Lucien, Lindsay Nemeth, Erica Parker, and Jennie Pyers for assistance with stimuli development, data coding, participant recruitment, and testing. We would also like to thank all of the deaf and hearing participants, without whom this research would not be possible.


Notes

1. The pattern of results does not change if fingerspelled responses are eliminated from the analyses.

2. It is not clear why late bilinguals responded more quickly on some items. A possible contributing factor is the slightly higher error rate for late bilinguals (Figure 3B), suggesting a speed–accuracy trade-off (however, the difference in error rate was not significant).


References

  • Baayen R. H., Piepenbrock R., Gulikers L. (1995). The CELEX lexical database (CD-ROM). Philadelphia: Linguistic Data Consortium, University of Pennsylvania.
  • Balota D. A., Pilotti M., Cortese M. J. (2001). Subjective frequency estimates for 2,938 monosyllabic words. Memory & Cognition, 29, 639–647. doi:10.3758/BF03200465
  • Bates E., D’Amico S., Jacobsen T., Szekely A., Andonova E., Devescovi A., … Tzeng O. (2003). Timed picture naming in seven languages. Psychonomic Bulletin & Review, 10, 344–380. doi:10.3758/BF03196494
  • Baus C., Gutiérrez-Sigut E., Quer J., Carreiras M. (2008). Lexical access in Catalan signed language (LSC) production. Cognition, 108, 856–865. doi:10.1016/j.cognition.2008.05.012
  • Bialystok E., Craik F. I. M., Green D. W., Gollan T. H. (2009). Bilingual minds. Psychological Science in the Public Interest, 10(3), 89–129. doi:10.1177/1529100610387084
  • Bishop M. (2006). Bimodal bilingualism in hearing, native users of American Sign Language (Unpublished doctoral dissertation). Gallaudet University, Washington, DC.
  • Boyes-Braem P., Sutton-Spence R. (Eds.) (2001). The hands are the head of the mouth: The mouth as articulator in sign languages. Hamburg, Germany: Signum.
  • Brentari D., Padden C. (2001). A lexicon of multiple origins: Native and foreign vocabulary in American Sign Language. In Brentari D. (Ed.), Foreign vocabulary in sign languages: A cross-linguistic investigation of word formation (pp. 87–119). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Carreiras M., Gutiérrez-Sigut E., Baquero S., Corina D. (2008). Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language, 58(1), 100–122. doi:10.1016/j.jml.2007.05.004
  • Christoffels I. K., de Groot A. M. B., Kroll J. F. (2006). Memory and language skills in simultaneous interpreters: The role of expertise and language proficiency. Journal of Memory and Language, 54, 324–345. doi:10.1016/j.jml.2005.12.004
  • Cohen J. D., MacWhinney B., Flatt M., Provost J. (1993). PsyScope: An interactive graphic system for designing and controlling experiments in the psychology laboratory using Macintosh computers. Behavior Research Methods, Instruments, and Computers, 25(2), 257–271. doi:10.3758/BF03204507
  • Duyck W., Vanderelst D., Desmet T., Hartsuiker R. J. (2008). The frequency effect in second-language visual word recognition. Psychonomic Bulletin & Review, 15, 850–855. doi:10.3758/PBR.15.4.850
  • Emmorey K. (1991). Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research, 20(5), 365–388. doi:10.1007/BF01067970
  • Emmorey K., Borinstein H. B., Thompson R., Gollan T. H. (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11(1), 43–61. doi:10.1017/S1366728907003203
  • Emmorey K., Petrich J. A. F., Gollan T. H. (2012). Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210. doi:10.1016/j.jml.2012.04.005
  • Forster K. I., Chambers S. M. (1973). Lexical access and naming time. Journal of Verbal Learning and Verbal Behavior, 12, 627–635. doi:10.1016/S0022-5371(73)80042-8
  • Gernsbacher M. A. (1984). Resolving 20 years of inconsistent interactions between lexical familiarity and orthography, concreteness, and polysemy. Journal of Experimental Psychology: General, 113, 256–281. doi:10.1037/0096-3445.113.2.256
  • Gilhooly K. J., Logie R. H. (1980). Age of acquisition, imagery, concreteness, familiarity, and ambiguity measures for 1,944 words. Behavior Research Methods & Instrumentation, 12, 395–427. doi:10.3758/BF03201693
  • Gollan T. H., Acenas L. A. (2004). What is a TOT? Cognate and translation effects on tip-of-the-tongue states in Spanish–English and Tagalog–English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 246–269. doi:10.1037/0278-7393.30.1.246
  • Gollan T. H., Bonanni M. P., Montoya R. I. (2005). Proper names get stuck on bilingual and monolingual speakers’ tip-of-the-tongue equally often. Neuropsychology, 19, 278–287. doi:10.1037/0894-4105.19.3.278
  • Gollan T. H., Fennema-Notestine C., Montoya R. I., Jernigan T. L. (2007). The bilingual effect on Boston Naming Test performance. Journal of the International Neuropsychological Society, 13, 197–208. doi:10.1017/S1355617707070038
  • Gollan T. H., Montoya R., Cera C., Sandoval T. (2008). More use almost always means a smaller frequency effect: Aging, bilingualism, and the weaker links hypothesis. Journal of Memory and Language, 58, 787–814. doi:10.1016/j.jml.2007.07.00
  • Gollan T., Montoya R., Fennema-Notestine C., Morris S. (2005). Bilingualism affects picture naming but not picture classification. Memory & Cognition, 33(7), 1220–1234. doi:10.3758/BF0319322
  • Gollan T. H., Montoya R. I., Werner G. A. (2002). Semantic and letter fluency in Spanish–English bilinguals. Neuropsychology, 16, 562–576. doi:10.1037/0894-4105.16.4.562
  • Gollan T. H., Silverberg N. B. (2001). Tip-of-the-tongue states in Hebrew–English bilinguals. Bilingualism: Language and Cognition, 4, 63–83. doi:10.1017/S136672890100013X
  • Gollan T. H., Slattery T. J., Goldenberg D., van Assche E., Duyck W., Rayner K. (2011). Frequency drives lexical access in reading but not in speaking: The frequency-lag hypothesis. Journal of Experimental Psychology: General, 140(2), 186–209. doi:10.1037/a0022256
  • Gordon B. (1985). Subjective frequency and the lexical decision latency function: Implications for mechanisms of lexical access. Journal of Memory and Language, 24, 631–645. doi:10.1016/0749-596X(85)90050-6
  • Ivanova I., Costa A. (2008). Does bilingualism hamper lexical access in speech production? Acta Psychologica, 127, 277–288. doi:10.1016/j.actpsy.2007.06.003
  • Johnston T. (2003). BSL, Auslan, and NZSL: Three signed languages or one? In Baker A., van den Bogaerde B., Crasborn O. (Eds.), Cross-linguistic perspectives in sign language research: Selected papers from TISLR 2000 (pp. 47–69). Hamburg, Germany: Signum.
  • Johnston T. (2008). The Auslan archive and corpus. In Nathan D. (Ed.), The endangered languages archive. London: Hans Rausing Endangered Languages Documentation Project, School of Oriental and African Studies, University of London.
  • Johnston T. (2012). Lexical frequency in sign languages. Journal of Deaf Studies and Deaf Education, 17(2), 163–193. doi:10.1093/deafed/enr036
  • Kohnert K., Bates E., Hernandez A. E. (1999). Balancing bilinguals: Lexical-semantic production and cognitive processing in children learning Spanish and English. Journal of Speech, Language, and Hearing Research, 42(6), 1400–1413.
  • Kohnert K. J., Hernandez A. E., Bates E. (1998). Bilingual performance on the Boston Naming Test: Preliminary norms in Spanish and English. Brain and Language, 65, 422–440. doi:10.1006/brln.1998.2001
  • Murray W. S., Forster K. I. (2004). Serial mechanisms in lexical access: The rank hypothesis. Psychological Review, 111, 721–756. doi:10.1037/0033-295X.111.3.721
  • Nadolske M. A., Rosenstock R. (2007). Occurrence of mouthings in American Sign Language: A preliminary study. In Perniss P. M., Pfau R., Steinbach M. (Eds.), Visible variation (Vol. 188, pp. 35–62). Berlin, Germany: Mouton de Gruyter.
  • Oldfield R. C., Wingfield A. (1965). Response latencies in naming objects. Quarterly Journal of Experimental Psychology, 17, 273–281. doi:10.1080/17470216508416445
  • Padden C. (1998). The ASL lexicon. Sign Language & Linguistics, 1(1), 39–60.
  • Petitto L. A., Katerelos M., Levy B. G., Gauna K., Tetreault K., Ferraro V. (2001). Bilingual signed and spoken language acquisition from birth: Implications for the mechanisms underlying early bilingual language acquisition. Journal of Child Language, 28(2), 453–496. doi:10.1017/S0305000901004718
  • Portocarrero J. S., Burright R. G., Donovick P. J. (2007). Vocabulary and verbal fluency of bilingual and monolingual college students. Archives of Clinical Neuropsychology, 22, 415–422. doi:10.1016/j.acn.2007.01.015
  • Pyers J., Gollan T. H., Emmorey K. (2009). Bimodal bilinguals reveal the source of tip-of-the-tongue states. Cognition, 112, 323–329. doi:10.1016/j.cognition.2009.04.007
  • Roberts P. M., Garcia L. J., Desrochers A., Hernandez D. (2002). English performance of proficient bilingual adults on the Boston Naming Test. Aphasiology, 16, 635–645. doi:10.1080/02687030244000220
  • Rosselli M., Ardila A., Araujo K., Weekes V. A., Caracciolo V., Padilla M., Ostrosky-Solis F. (2000). Verbal fluency and repetition skills in healthy older Spanish–English bilinguals. Applied Neuropsychology, 7, 17–24. doi:10.1207/S15324826AN0701_3
  • Székely A., D’Amico S., Devescovi A., Federmeier K., Herron D., Iyer G., … Bates E. (2003). Timed picture naming: Extended norms and validation against previous studies. Behavior Research Methods, Instruments, & Computers, 35(4), 621–633. doi:10.3758/BF03195542
  • Vinson D. P., Cormier K. A., Denmark T., Schembri A., Vigliocco G. (2008). The British Sign Language (BSL) norms for acquisition, familiarity, and iconicity. Behavior Research Methods, 40, 1079–1087. doi:10.3758/BRM.40.4.1079
  • Vinson D. P., Thompson R. L., Skinner R., Fox N., Vigliocco G. (2010). The hands and mouth do not always slip together in British Sign Language: Dissociating articulatory channels in the lexicon. Psychological Science, 21(8), 1158–1167. doi:10.1177/0956797610377340
