Noname manuscript No.
(will be inserted by the editor)
The advent and fall of a vocabulary learning bias from communicative efficiency

David Carrera-Casado · Ramon Ferrer-i-Cancho
Received: date / Accepted: date
Abstract Biosemiosis is a process of choice-making between simultaneously alternative options. It is well-known that, when sufficiently young children encounter a new word, they tend to interpret it as pointing to a meaning that does not have a word yet in their lexicon rather than to a meaning that already has a word attached. In previous research, the strategy was shown to be optimal from an information theoretic standpoint. In that framework, interpretation is hypothesized to be driven by the minimization of a cost function: the option of least communication cost is chosen. However, the information theoretic model employed in that research neither explains the weakening of that vocabulary learning bias in older children or polylinguals nor reproduces Zipf's meaning-frequency law, namely the non-linear relationship between the number of meanings of a word and its frequency. Here we consider a generalization of the model that is channeled to reproduce that law. The analysis of the new model reveals regions of the phase space where the bias disappears, consistently with the weakening or loss of the bias in older children or polylinguals. The model is abstract enough to support future research on other levels of life that are relevant to biosemiotics. In the deep learning era, the model is a transparent low-dimensional tool for future experimental research and illustrates the predictive power of a theoretical framework originally designed to shed light on the origins of Zipf's rank-frequency law.
Keywords biosemiosis · vocabulary learning · mutual exclusivity · Zipfian laws · information theory · quantitative linguistics
David Carrera-Casado & Ramon Ferrer-i-Cancho
Complexity and Quantitative Linguistics Lab
LARCA Research Group
Departament de Ciències de la Computació
Universitat Politècnica de Catalunya
Campus Nord, Edifici Omega
Jordi Girona Salgado 1-3
08034 Barcelona, Catalonia, Spain
E-mail: [email protected], [email protected]

arXiv:2105.11519v3 [cs.CL] 20 Jul 2021
Contents
1 Introduction
2 The mathematical model
3 Results
4 Discussion
A The mathematical model in detail
B Form degrees and number of links
C Complementary heatmaps for other values of φ
D Complementary figures with discrete degrees
1 Introduction
Biosemiotics can be defined as a science of signs in living systems (Kull, 1999, p. 386). Here we join the effort of developing such a science. Focusing on the problem of "learning" new signs, we hope to contribute (i) to place choice at the core of semiotic theory of learning (Kull, 2018) and (ii) to make biosemiotics compatible with the information theoretic perspective that is regarded as currently dominant in physics, chemistry, and molecular biology (Deacon, 2015).
Languages use words to convey information. From a semantic perspective, words stand for meanings (Fromkin et al., 2014). Correlates of word meaning have been investigated in other species (e.g. Hobaiter and Byrne, 2014; Genty and Zuberbühler, 2014; Moore, 2014). From a neurobiological perspective, words can be seen as the counterparts of cell assemblies with distinct cortical topographies (Pulvermüller, 2001; Pulvermüller, 2013). From a formal standpoint, the essence of that research is some binding between a sign or a form, e.g., a word or an ape gesture, and a counterpart, e.g. a 'meaning' or an assembly of cortical cells. Mathematically, that binding can be formalized as a bipartite graph where vertices are forms and their counterparts (Fig. 1). Such an abstract setting allows for a powerful exploration of natural systems across levels of life, from the mapping of animal vocal or gestural behaviors (Fig. 2 (a)) into their "meanings" down to the mapping from codons into amino acids (Fig. 2 (b)), while allowing for a comparison against "artificial" coding systems such as the Morse code (Fig. 2 (c)) or those emerging in artificial naming games (Hurford, 1989; Steels, 1996). In that setting, almost connectedness has been hypothesized to be the mathematical condition required for the emergence of a rudimentary form of syntax and symbolic reference (Ferrer-i-Cancho et al., 2005; Ferrer-i-Cancho, 2006). By symbolic reference, we mean here Deacon's revision of Peirce's view (Deacon, 1997). The almost connectedness condition is met when it is possible to reach practically any other vertex of the network by starting a walk from any possible vertex (as in Fig. 1 (a)-(b) but not in Figs. 1 (c)-(d)).
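To make the skeleton concrete, here is a minimal sketch (in Python, with a hypothetical toy matrix of our own, not data from the article) that encodes a form-counterpart bipartite graph as a binary matrix and tests the almost connectedness condition by checking whether a walk started at one vertex reaches practically all vertices:

```python
import numpy as np
from collections import deque

# Toy skeleton: 3 forms x 3 counterparts (hypothetical example).
# A[i, j] = 1 iff form s_i is linked to counterpart r_j.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

def is_almost_connected(A, tolerance=0.0):
    """Breadth-first search over the bipartite graph: return True when the
    component reached from the first form spans at least a fraction
    (1 - tolerance) of all vertices (forms plus counterparts)."""
    n, m = A.shape
    visited = set()
    queue = deque([("form", 0)])
    while queue:
        kind, v = queue.popleft()
        if (kind, v) in visited:
            continue
        visited.add((kind, v))
        if kind == "form":
            queue.extend(("counterpart", j) for j in range(m) if A[v, j])
        else:
            queue.extend(("form", i) for i in range(n) if A[i, v])
    return len(visited) >= (1 - tolerance) * (n + m)

print(is_almost_connected(A))  # True for this toy skeleton
```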
Fig. 1 A bipartite graph linking forms (white circles) with their counterparts (black circles). (a) a connected graph (b) an almost connected graph (c) a one-to-one mapping between forms and counterparts (d) a mapping where only one form is linked with counterparts.

Since the pioneering research of G. K. Zipf (1949), statistical laws of language have been interpreted as manifestations of the minimization of cognitive costs (Zipf, 1949; Ellis and Hitchcock, 1986; Ferrer-i-Cancho and Díaz-Guilera, 2007; Gustison et al., 2016; Ferrer-i-Cancho et al., 2019). Zipf argued that the law of abbreviation, the tendency of more frequent words to be shorter, resulted from a minimization of a cost function involving, for every word, its frequency, its "mass" and its "distance", which in turn implies the minimization of the size of words (Zipf, 1949, p. 59). Recently, it has been shown mathematically that the minimization of the average of the length of words (the mean code length in the language of information theory) predicts a correlation between frequency and duration that cannot be positive, extending and generalizing previous results from information theory (Ferrer-i-Cancho et al., 2019). The framework addresses the general problem of assigning codes as short as possible to counterparts represented by distinct numbers while warranting certain constraints, e.g., that every number will receive a distinct code (e.g. non-singular coding in the language of information theory). If the counterparts are word types from a vocabulary, it predicts the law of abbreviation as it occurs in the vast majority of languages (Bentz and Ferrer-i-Cancho, 2016). If these counterparts are meanings, it predicts that more frequent meanings should tend to be assigned smaller codes (e.g., shorter words) as found in real experiments (Kanwal et al., 2017; Brochhagen, 2021). Table 1 summarizes these and other predictions of compression.
Fig. 2 Real bipartite graphs linking forms (white circles) with their counterparts (black circles). (a) Chimpanzee gestures and their meaning (Hobaiter and Byrne, 2014, Table S3). This table was chosen for its broad coverage of gesture types (see other tables satisfying other constraints, e.g. only gesture-meaning associations employed by a sufficiently large number of individuals). (b) Codon translation into amino acids, where forms are 64 codons and counterparts are 20 amino acids. (c) The international Morse code, where forms are strings of dots and dashes and the counterparts are letters of the English alphabet (A, B, ..., Z) and digits (0, 1, ..., 9).
linguistic laws → principles → predictions (Köhler, 1987; Altmann, 1993)

Zipf's law of abbreviation → compression
    → Menzerath's law (Gustison et al., 2016; Ferrer-i-Cancho et al., 2019)
    → Zipf's rank-frequency law (Ferrer-i-Cancho, 2016a)
    → "shorter words" for more frequent "meanings" (Ferrer-i-Cancho et al., 2019; Kanwal et al., 2017; Brochhagen, 2021)

Zipf's rank-frequency law → mutual information maximization + surprisal minimization
    → a vocabulary learning bias (Ferrer-i-Cancho, 2017a)
    → the principle of contrast (Ferrer-i-Cancho, 2017a)
    → range or variation of α (Ferrer-i-Cancho, 2005a, 2006)

Table 1 The application of the scientific method in quantitative linguistics (italics) with various concrete examples (roman). α is the exponent of Zipf's rank-frequency law (Zipf, 1949). The prediction that is the target of the current article, a vocabulary learning bias, is shown in boldface.
1.1 A family of probabilistic models

The bipartite graph of form-counterpart associations is the skeleton (Figs. 1 and 2) on which a family of models of communication has been built (Ferrer-i-Cancho and Díaz-Guilera, 2007; Ferrer-i-Cancho and Vitevitch, 2018). The target of the first of these models (Ferrer-i-Cancho and Sole, 2003) was Zipf's rank-frequency law, which defines the relationship between the frequency f of a word and its rank i, approximately as

    f \propto i^{-\alpha}.
These early models were aimed at shedding light on mainly three questions:
1. The origins of this law (Ferrer-i-Cancho and Sole, 2003; Ferrer-i-Cancho, 2005b).
2. The range of variation of α in human language (Ferrer-i-Cancho, 2005a, 2006).
3. The relationship between α and the syntactic and referential complexity of a communication system (Ferrer-i-Cancho et al., 2005; Ferrer-i-Cancho, 2006).
The main assumption of these models is that word frequency is an epiphenomenon of the structure of the skeleton or the probability of the meanings. Following the metaphor of the skeleton, the models are bodies whose flesh are probabilities that are calculated from the skeleton. The first models defined p(s_i|r_j), the probability that a speaker produces s_i given a counterpart r_j, as the same for all words connected to r_j. In the language of mathematics,

    p(s_i | r_j) = \frac{a_{ij}}{\omega_j},    (1)

where a_{ij} is a boolean (0 or 1) that indicates if s_i and r_j are connected and ω_j is the degree of r_j, namely the number of connections of r_j with forms, i.e.

    \omega_j = \sum_i a_{ij}.
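As a minimal illustration (a Python sketch with a hypothetical toy matrix, not code from the article), Eq. 1 can be computed directly from the adjacency matrix:

```python
import numpy as np

# Hypothetical toy skeleton: rows are forms s_i, columns are counterparts r_j.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

omega = A.sum(axis=0)      # degree omega_j of each counterpart r_j
p_s_given_r = A / omega    # Eq. 1: p(s_i | r_j) = a_ij / omega_j, column-wise

print(p_s_given_r[:, 1])   # distribution over forms given counterpart r_1
# -> [0.5, 0.5, 0.0]: both forms linked to r_1 are equally likely
```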
These models are often portrayed as models of the assignment of meanings to forms (Futrell, 2020; Piantadosi, 2014) but this description falls short because:
- They are indeed models of production, as they define the probability of producing a form given some counterpart (as in Eq. 1) or simply the marginal probability of a form. The claim that theories of language production or discourse do not explain the law (Piantadosi, 2014) has no basis and raises the question of which theories of language production are deemed acceptable.
- They are also models of understanding, as they define symmetric conditional probabilities such as p(r_j|s_i), the probability that a listener interprets r_j when receiving s_i.
- The models are flexible. In addition to "meaning", other counterparts were deemed possible from their birth. See for instance the use of the term "stimuli" (e.g. Ferrer-i-Cancho and Díaz-Guilera, 2007) as a replacement for meaning that was borrowed from neurolinguistics (Pulvermüller, 2001).
- The models fit in the distributional semantics framework (Lund and Burgess, 1996) for two reasons: their flexibility, as counterparts can be dimensions in some hidden space, and also because of representing a form as a vector of its joint or conditional probabilities with "counterparts" that is inferred from the network structure, as we have already explained (Ferrer-i-Cancho and Vitevitch, 2018).
Contrary to the conclusions of Piantadosi (2014), there are derivations of Zipf's law that do account for psychological processes of word production, especially the intentionality of choosing words in order to convey a desired meaning.
The family of models assumes that the skeleton that determines all the probabilities, the bipartite graph, is shaped by a combination of the minimization of the entropy (or surprisal) of words (H) and the maximization of the mutual information between words and meanings (I), two principles that are cognitively motivated and that capture the speaker's and listener's requirements (Ferrer-i-Cancho, 2018). When only the entropy of words is minimized, configurations where only one form is linked, as in Fig. 1 (d), are predicted. When only the mutual information between forms and counterparts is maximized, one-to-one mappings between forms and counterparts are predicted (when the number of forms and counterparts is the same), as in Fig. 1 (c) or Fig. 2 (d). Real language is argued to be in-between these two extreme configurations (Ferrer-i-Cancho and Díaz-Guilera, 2007). Such a trade-off between simplicity (Zipf's unification) and effective communication (Zipf's diversification) is also found in information theoretic models of communication based on the information bottleneck approach (see Zaslavsky et al. (2021) and references therein).
In quantitative linguistics, scientific theory is not possible without taking into consideration language laws (Köhler, 1987; Debowski, 2020). Laws are seen as manifestations of principles (also referred to as "requirements" by Köhler (1987)), which are key components of explanations of linguistic phenomena. As part of the scientific method cycle, novel predictions are a key aim (Altmann, 1993) and key to the validation and refinement of theory (Bunge, 2001). Table 1 synthesizes this general view as chains of the form: laws, principles that are inferred from them, and predictions that are made from those principles, giving concrete examples from previous research.
Although one of the initial goals of the family of models was to shed light on the origins of Zipf's law for word frequencies, a member of the family of models turned out to generate a novel prediction on vocabulary learning in children and the tendency of words to contrast in meaning (Ferrer-i-Cancho, 2017a): when encountering a new word, children tend to infer that it refers to a concept that does not have a word attached to it (Markman and Wachtel, 1988; Merriman and Bowman, 1989; Clark, 1993). The finding is cross-linguistically robust: it has been found in children speaking English (Markman and Wachtel, 1988), Canadian French (Nicoladis and Laurent, 2020), Japanese (Haryu, 1991), Mandarin Chinese (Byers-Heinlein and Werker, 2013; Hung et al., 2015), and Korean (Eun-Nam, 2017). These languages correspond to four distinct linguistic families (Indo-European, Japonic, Sino-Tibetan, Koreanic). Furthermore, the finding has also been replicated in adults (Hendrickson and Perfors, 2019; Yurovsky and Yu, 2008) and other species (Kaminski et al., 2004). This phenomenon is an example of biosemiosis, namely a process of choice-making between simultaneously alternative options (Kull, 2018, p. 454).
As an explanation for vocabulary learning, the information theoretic model suffers from some limitations that motivate the present article. The first one is that the vocabulary learning bias weakens in older children (Kalashnikova et al., 2016; Yildiz, 2020) or in polylinguals (Houston-Price et al., 2010; Kalashnikova et al., 2015), while the current version of the model predicts the vocabulary learning bias
Fig. 3 Strategies for linking a new word to a meaning. Strategy a consists of linking a word to a free meaning, namely an unlinked meaning. Strategy b consists of linking a word to a meaning that is already linked. We assume that the meaning that is already linked is connected to a single word of degree μ_k. Two simplifying assumptions are considered. (a) Counterpart degrees do not exceed one, implying μ_k ≥ 1. (b) Vertex degrees do not exceed one, implying μ_k = 1.
only provided that mutual information maximization is not neglected (Ferrer-i-Cancho, 2017a).
The second limitation is inherited from the family of models, where the definition of the probabilities over the bipartite graph skeleton leads to a linear relationship between the frequency of a form and its number of counterparts (Ferrer-i-Cancho and Vitevitch, 2018). However, this is inconsistent with Zipf's prediction, namely that the number of meanings μ of a word of frequency f should follow (Zipf, 1945)

    \mu \propto f^{\delta},    (2)

with δ = 0.5. Eq. 2 is known as Zipf's meaning-frequency law (Zipf, 1949). To overcome such a limitation, Ferrer-i-Cancho and Vitevitch (2018) proposed different ways of modifying the definition of the probabilities from the skeleton. Here we borrow a proposal of defining the joint probability of a form and its counterpart as

    p(s_i, r_j) \propto a_{ij} (\mu_i \omega_j)^{\phi},    (3)

where φ is a parameter of the model and μ_i and ω_j are, respectively, the degrees (number of connections) of the form s_i and the counterpart r_j. Previous research on vocabulary learning in children with these models (Ferrer-i-Cancho, 2017a) assumed φ = 0, which leads to δ = 1 (Ferrer-i-Cancho, 2016b). When φ = 1, the system is channeled to reproduce Zipf's meaning-frequency law, i.e. Eq. 2 with δ = 0.5 (Ferrer-i-Cancho and Vitevitch, 2018).
1.2 Overview of the present article

It has been argued that there cannot be meaning without interpretation (Eco, 1986). As Kull (2020) puts it, "Interpretation (which is the same as primitive decision-making) assumes that there exists a choice between two or more options. The options can be described as different codes applicable simultaneously in the same situation." The main aim of this article is to shed light on the choice between strategy a, i.e. attaching the new form to a counterpart that is unlinked, and strategy b, i.e. attaching the new form to a counterpart that is already linked (Fig. 3).
The remainder of the article is organized as follows. Section 2 considers a model of a communication system that has three components:
1. A skeleton that is defined by a binary matrix A that indicates the form-counterpart connections.
2. A flesh that is defined over the skeleton with Eq. 3.
3. A cost function Ω, that defines the cost of communication as

    \Omega = -\lambda I + (1 - \lambda) H,    (4)

where λ is a parameter that regulates the weight of mutual information (I) maximization and word entropy (H) minimization such that 0 ≤ λ ≤ 1. I and H are inferred from matrix A and Eq. 3 (further details are given in Section 2).
This section introduces Δ, i.e. the difference in the cost of communication between strategy a and strategy b according to Ω (Fig. 3). Δ < 0 indicates that the cost of communication of strategy a is lower than that of b. Our main hypothesis is that interpretation is driven by the Ω cost function and that a receiver will choose the option that minimizes the resulting Ω. By doing this, we are challenging the longstanding and limiting belief that information theory is dissociated from semiotics and not concerned about meaning (e.g. Deacon, 2015). This article is just one counterexample (see also Zaslavsky et al. (2018)). Information theory, as any abstract powerful mathematical tool, can serve applications that do not assume meaning (or meaning-making processes), as in the original setting of telecommunication where it was developed by Shannon, as well as others that do, although they were not his primary concern for historical and sociological reasons.
In general, the formula of Δ is complex and the analysis of the conditions where a is advantageous (namely Δ < 0) requires making some simplifying assumptions. If φ = 0, then one obtains (Ferrer-i-Cancho, 2017a)

    \Delta = -\lambda \, \frac{(\omega_j + 1) \log(\omega_j + 1) - \omega_j \log \omega_j}{M + 1},    (5)

where M is the number of edges in the skeleton and ω_j is the degree of the already linked counterpart that is selected in strategy b (Fig. 3). Eq. 5 indicates that strategy a will be advantageous provided that mutual information maximization matters (i.e. λ > 0) and its advantage will increase as mutual information maximization becomes more important (i.e. for larger λ), the linked counterpart has more connections (i.e. larger ω_j) or when the skeleton has fewer connections (i.e. smaller M). To be able to analyze the case φ > 0, we will examine two classes of skeleta that are presented next.
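A short numerical check of Eq. 5 (a Python sketch under the stated φ = 0 assumption; the parameter values are illustrative, not from the article) makes the direction of the bias explicit:

```python
import numpy as np

def delta_phi0(lam, omega_j, M):
    """Eq. 5: cost difference between strategy a and b when phi = 0.
    Negative values favour strategy a (linking to an unlinked meaning)."""
    gain = (omega_j + 1) * np.log(omega_j + 1) - omega_j * np.log(omega_j)
    return -lam * gain / (M + 1)

# Illustrative values: the bias holds whenever lambda > 0 ...
print(delta_phi0(lam=0.5, omega_j=1, M=10) < 0)   # True
# ... grows in magnitude with omega_j and lambda, and shrinks with M.
print(delta_phi0(0.5, 3, 10), delta_phi0(0.5, 3, 100))
```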
Counterpart degrees do not exceed one. In this class, the degrees of counterparts are restricted to not exceed one, namely a counterpart can only be disconnected or connected to just one form. If meanings are taken as counterparts, this class matches the view that "no two words ever have exactly the same meaning" (Fromkin et al., 2014, p. 256), based on the notion of absolute synonymy (Dangli and Abazaj, 2009). This class also mirrors the linguistic principle that any two words should contrast in meaning (Clark, 1987). Alternatively, if synonyms are deemed real to some extent, this class may capture early stages of language development in children or early stages in the evolution of languages where synonyms have not been learned or developed. From a theoretical standpoint, this class is required by the maximization of the mutual information between forms and counterparts when the number of forms does not exceed that of counterparts (Ferrer-i-Cancho and Vitevitch, 2018).
We use μ_k to refer to the degree of the word that will be connected to the meaning selected in strategy b (Fig. 3). We will show that, in this class, Δ is determined by λ, φ, μ_k and the degree distribution of forms, namely the vector of form degrees \vec{\mu} = (\mu_1, ..., \mu_i, ..., \mu_n).
Vertex degrees do not exceed one. In this class, the degrees of any vertex are restricted to not exceed one, namely a form (or a meaning) can only be disconnected or connected to just one counterpart (just one form). This class is narrower than the previous one because it imposes that degrees do not exceed one both for forms and counterparts. Words lack homonymy (or polysemy). We believe that this class would correspond to even earlier stages of language development in children (where children have learned at most one meaning of a word) or earlier stages in the evolution of languages (where the communication system has not developed any homonymy). From a theoretical standpoint, that class is a requirement of maximizing mutual information between forms and counterparts when n = m (Ferrer-i-Cancho and Vitevitch, 2018). We will show that Δ is determined just by λ, φ and M, the number of links of the bipartite skeleton.
Notice that meanings with synonyms have been found in chimpanzee gestures (Hobaiter and Byrne, 2014), which suggests that the two classes above do not capture the current state of the development of form-counterpart mappings in adults of other species. Section 2 presents the formulae of Δ for each class. Section 3 uses these formulae to explore the conditions that determine when strategy a is more advantageous, namely Δ < 0, for each of the two classes of skeleta above, which correspond to different stages of the development of language in children.
While the condition φ = 0 implies that strategy a is always advantageous when λ > 0, we find regions of the space of parameters where this is not the case when φ > 0 and λ > 0. In the more restrictive class, where vertex degrees do not exceed one, we find a region where a is not advantageous when λ is sufficiently small and M is sufficiently large. The size of that region increases as φ increases. From a complementary perspective, we find a region where a is not advantageous (Δ ≥ 0) when λ is sufficiently small and φ is sufficiently large; the size of the region increases as M increases. As M is expected to be larger in older children or in polylinguals (if the forms of each language are mixed in the same skeleton), the model predicts the weakening of the bias in older children and polylinguals (Liittschwager and Markman, 1994; Kalashnikova et al., 2016; Yildiz, 2020; Houston-Price et al., 2010; Kalashnikova et al., 2015, 2019). To ease the exploration of the phase space for the class where the degrees of counterparts do not exceed one, we will assume that word frequencies follow Zipf's rank-frequency law. Again, regions where a is not advantageous (Δ ≥ 0) also appear but the conditions for the emergence of these regions are more complex. Our preliminary analyses suggest that the bias should weaken in older children even for this class. Section 4 discusses the findings, suggests future research directions and reviews the research program in light of the scientific method.
2 The mathematical model

Below we give more details about the model that we use to investigate the learning of new words and outline the arguments that lead from Eq. 3 to concrete formulae for Δ. Section 2.1 just presents the concrete formulae for each of the two classes of skeleta. Full details are given in Appendix A. The model has four components that we review next.
Skeleton (A = {a_ij}). A bipartite graph that defines the associations between n forms and m counterparts, defined by an adjacency matrix A = {a_ij}.

Flesh (p(s_i, r_j)). The flesh consists of a definition of p(s_i, r_j), the joint probability of a form (or word) and a counterpart (or meaning), and a series of probability definitions stemming from it. Probabilities depart from previous work (Ferrer-i-Cancho and Sole, 2003; Ferrer-i-Cancho, 2005b) by the addition of the parameter φ. Eq. 3 defines p(s_i, r_j) as proportional to the product of the degrees of the form and the counterpart to the power of φ, which is a parameter of the model. By normalization, namely

    \sum_{i=1}^{n} \sum_{j=1}^{m} p(s_i, r_j) = 1,

Eq. 3 leads to

    p(s_i, r_j) = \frac{1}{M_\phi} a_{ij} (\mu_i \omega_j)^{\phi},    (6)

where

    M_\phi = \sum_{i=1}^{n} \sum_{j=1}^{m} a_{ij} (\mu_i \omega_j)^{\phi}.    (7)

Notice that for φ = 0, M_φ reduces to M, the number of links of the skeleton. From these expressions, the marginal probabilities of a form, p(s_i), and a counterpart, p(r_j), are obtained easily thanks to

    p(s_i) = \sum_{j=1}^{m} p(s_i, r_j)

    p(r_j) = \sum_{i=1}^{n} p(s_i, r_j).
The cost of communication (Ω). The cost function is initially defined in Eq. 4 as in previous research (e.g. Ferrer-i-Cancho and Díaz-Guilera, 2007). In more detail,

    \Omega(\lambda) = -\lambda I(S, R) + (1 - \lambda) H(S),    (8)

where I(S,R) is the mutual information between forms from a repertoire S and counterparts from a repertoire R, and H(S) is the entropy (or surprisal) of forms from a repertoire S. Knowing that I(S,R) = H(S) + H(R) - H(S,R) (Cover and Thomas, 2006), the final expression for the cost function in this article is

    \Omega(\lambda) = (1 - 2\lambda) H(S) - \lambda H(R) + \lambda H(S, R).    (9)

The entropies H(S), H(R) and H(S,R) are easy to calculate applying the definitions of p(s_i), p(r_j) and p(s_i, r_j), respectively.
The difference in the cost of learning a new word (Δ). There are two possible strategies to determine the counterpart with which a new form (a previously unlinked form) should connect (Fig. 3):
a. Connect the new form to a counterpart that is not already connected to any other forms.
b. Connect the new form to a counterpart that is connected to at least one other form.
The question we intend to answer is "when does strategy a result in a smaller cost than strategy b?" Or, in the terminology of child language research, "for which strategy is the assumption of mutual exclusivity more advantageous?" To answer these questions, we define Δ as the difference between the cost of each strategy. More precisely,

    \Delta(\lambda) = \Omega'_a(\lambda) - \Omega'_b(\lambda),    (10)

where Ω'_a(λ) and Ω'_b(λ) are the new values of Ω when a new link is created using strategy a or b, respectively. Then, our research question becomes "When is Δ < 0?".
Formulae for Ω'_a(λ) and Ω'_b(λ) are derived in two steps. First, analyzing a general problem, i.e. Ω', the new value of Ω after producing a single mutation in A (Appendix A.2). Second, deriving expressions for the case where that mutation results from linking a new form (an unlinked form) to a counterpart, which can be linked or unlinked (Appendix A.3).
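Before turning to the closed-form expressions, note that the whole pipeline (skeleton, flesh, cost, Δ) can be checked numerically without any algebra. The following is a minimal Python sketch (our own illustration with a hypothetical toy skeleton; the article proceeds analytically, see Appendix A) that computes Ω(λ) from an adjacency matrix via Eqs. 6-9 and obtains Δ by applying each strategy explicitly:

```python
import numpy as np

def omega_cost(A, lam, phi):
    """Omega(lambda) = -lam*I(S,R) + (1-lam)*H(S) for skeleton A (Eqs. 3-9)."""
    mu = A.sum(axis=1)                            # form degrees mu_i
    w = A.sum(axis=0)                             # counterpart degrees omega_j
    weights = A * np.outer(mu, w) ** float(phi)   # a_ij (mu_i omega_j)^phi
    P = weights / weights.sum()                   # Eq. 6: joint probabilities
    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    HS, HR, HSR = H(P.sum(axis=1)), H(P.sum(axis=0)), H(P.flatten())
    I = HS + HR - HSR
    return -lam * I + (1 - lam) * HS

def delta(A, new_form, j_linked, j_free, lam, phi):
    """Eq. 10: Omega after strategy a (link new_form to the free counterpart
    j_free) minus Omega after strategy b (link it to j_linked)."""
    Aa, Ab = A.copy(), A.copy()
    Aa[new_form, j_free] = 1
    Ab[new_form, j_linked] = 1
    return omega_cost(Aa, lam, phi) - omega_cost(Ab, lam, phi)

# Toy skeleton: three one-to-one pairs plus an unlinked form (row 3) and an
# unlinked counterpart (column 3); hypothetical numbers for illustration.
A = np.zeros((4, 4), dtype=int)
A[0, 0] = A[1, 1] = A[2, 2] = 1
# For phi = 0 this matches Eq. 5 with omega_j = 1, M = 3: -2*lam*log(2)/(M+1).
print(delta(A, new_form=3, j_linked=0, j_free=3, lam=0.5, phi=0))  # ~ -0.173
```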
2.1 Δ in two classes of skeleta

In previous work, the value of Δ was already calculated for φ = 0, obtaining expressions equivalent to Eq. 5 (see Appendix A.3.1 for a derivation). The next sections just summarize the more complex formulae that are obtained for each class of skeleta for φ ≥ 0 (see Appendix A for details on the derivation).
2.1.1 Vertex degrees do not exceed one

Here forms and counterparts both either have a single connection or are disconnected. Mathematically, this can be expressed as

    μ_i ∈ {0, 1} for each i such that 1 ≤ i ≤ n
    ω_j ∈ {0, 1} for each j such that 1 ≤ j ≤ m.

Fig. 3 (b) offers a visual representation of a bipartite graph of this class. In case b, the counterpart we connect the new form to is connected to only one form (ω_j = 1) and that form is connected to only one counterpart (μ_k = 1). Under this class, Δ becomes

    \Delta(\lambda) = (1 - 2\lambda) \left[ -\log\left(1 + \frac{2(2^{\phi} - 1)}{M + 1}\right) + \frac{\phi \, 2^{\phi+1} \log 2}{M + 2^{\phi+1} - 1} \right] - \lambda \, \frac{2^{\phi+1} \log 2}{M + 2^{\phi+1} - 1},    (11)

which can be rewritten as a linear function of λ, i.e.

    \Delta(\lambda) = a\lambda + b,

with

    a = 2 \log\left(1 + \frac{2(2^{\phi} - 1)}{M + 1}\right) - \frac{(2\phi + 1) \, 2^{\phi+1} \log 2}{M + 2^{\phi+1} - 1}

    b = -\log\left(1 + \frac{2(2^{\phi} - 1)}{M + 1}\right) + \frac{\phi \, 2^{\phi+1} \log 2}{M + 2^{\phi+1} - 1}.

Importantly, notice that this expression of Δ is determined only by λ, φ and M (the total number of links in the model). See Appendix A.3.3 for thorough derivations.
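As a sanity check, Eq. 11 can be evaluated directly and compared against the brute-force Δ of the earlier sketch (again a Python sketch with illustrative parameter values, not code from the article):

```python
import numpy as np

def delta_one_to_one(lam, phi, M):
    """Eq. 11: Delta(lambda) when vertex degrees do not exceed one,
    for a skeleton with M one-to-one links."""
    D = M + 2 ** (phi + 1) - 1      # normalization after strategy b
    log_ratio = np.log(1 + 2 * (2 ** phi - 1) / (M + 1))
    return ((1 - 2 * lam) * (-log_ratio + phi * 2 ** (phi + 1) * np.log(2) / D)
            - lam * 2 ** (phi + 1) * np.log(2) / D)

# With phi = 0 this reduces to Eq. 5 with omega_j = 1: -2*lam*log(2)/(M+1).
print(np.isclose(delta_one_to_one(0.5, 0, 3), -np.log(2) / 4))   # True
# For phi > 0 and small lambda, Delta can turn positive (strategy b wins).
print(delta_one_to_one(0.05, 2.5, 100))
```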
2.1.2 Counterpart degrees do not exceed one

This class of skeleta is a relaxation of the previous class. Counterparts are either connected to a single form or disconnected. Mathematically,

    ω_j ∈ {0, 1} for each j such that 1 ≤ j ≤ m.

Fig. 3 (a) offers a visual representation of a bipartite graph of this class. The number of forms the counterpart in case b is connected to is still 1 (ω_j = 1) but this form may be connected to any number of counterparts; μ_k has to satisfy 1 ≤ μ_k ≤ m. Under this class, Δ becomes

    \Delta(\lambda) = (1 - 2\lambda) \Biggl\{ \log\left(\frac{M_\phi + 1}{M_\phi + (2^{\phi} - 1)\mu_k^{\phi} + 2^{\phi}}\right)
        + \frac{1}{M_\phi + (2^{\phi} - 1)\mu_k^{\phi} + 2^{\phi}} \Bigl[ -\frac{(\phi + 1) \, X(S,R) \, (2^{\phi} - 1)(\mu_k^{\phi} + 1)}{M_\phi + 1}
        + \phi \, 2^{\phi} \log 2
        + \mu_k^{\phi} \Bigl( \bigl(\phi(2^{\phi} - 1) - \mu_k\bigr) \log \mu_k + (\mu_k - 1 + 2^{\phi}) \log(\mu_k - 1 + 2^{\phi}) \Bigr) \Bigr] \Biggr\}
        - \lambda \, \frac{2^{\phi}}{M_\phi + (2^{\phi} - 1)\mu_k^{\phi} + 2^{\phi}} \Bigl[ (\mu_k^{\phi} + 1) \log(\mu_k^{\phi} + 1) - \phi \, \mu_k^{\phi} \log \mu_k \Bigr],    (12)

where

    X(S, R) = \sum_{i=1}^{n} \mu_i^{\phi+1} \log \mu_i    (13)

    M_\phi = \sum_{i=1}^{n} \mu_i^{\phi+1}.    (14)

Eq. 12 has the structure Δ(λ) = (1 - 2λ)·b - λ·c, where b is the expression between curly brackets and c is the factor that multiplies λ in the last term. It can therefore also be expressed as a linear function of λ,

    \Delta(\lambda) = a\lambda + b,

with a = -(2b + c). Being a relaxation of the previous class, the resulting expressions of Δ are more complex than those of the previous class, which are in turn more complex than those of the case φ = 0 (Eq. 5). See Appendix A.3.2 for further details on the derivation of Δ.
Notice that X(S,R) (Eq. 13) and M_φ (Eq. 14) are determined by the degrees of the forms (the μ_i's). To explore the phase space with a realistic distribution of μ_i's, we assume, without any loss of generality, that the μ_i's are sorted decreasingly, i.e. μ_1 ≥ μ_2 ≥ ... ≥ μ_i ≥ μ_{i+1} ≥ ... ≥ μ_n. In addition, we assume
1. μ_n = 0, because we are investigating the problem of linking an unlinked form with counterparts.
2. μ_{n-1} = 1.
3. Form degrees are continuous.
4. The relationship between μ_i and its frequency rank is a right-truncated power law, i.e.

    \mu_i = c \, i^{-\alpha/(\phi+1)}    (15)

for 1 ≤ i ≤ n - 1.
Appendix B shows that forms then follow Zipf's rank-frequency law, i.e.

    p(s_i) = c' i^{-\alpha}

with

    c' = \frac{(n-1)^{\alpha}}{M_\phi}.

The value of Δ is determined by λ, φ, μ_k and the sequence of degrees of the forms, which we have parameterized with α and n. When α/(φ+1) = 0, namely when α = 0 or when φ → ∞, we recover the class where vertex degrees do not exceed one but with just one form that is unlinked.
A continuous approximation to the number of edges gives (Appendix B)

    M = (n-1)^{\alpha/(\phi+1)} \sum_{i=1}^{n-1} i^{-\alpha/(\phi+1)}.    (16)

We aim to shed some light on the possible trajectory that children will describe on Fig. 4 as they become older. One expects that M tends to increase as children become older, due to word learning. It is easy to see that Eq. 16 predicts that, if α and φ remain constant, M is expected to increase as n increases (Fig. 4). Besides, when n remains constant, a reduction of α implies a reduction of M when φ = 0 but that effect vanishes for φ > 0 (Fig. 4). Obviously, n tends to increase as a child becomes older (Saxton, 2010) and thus children's trajectory will be from left to right in Fig. 4. As for the temporal evolution of α, there are two possibilities. Zipf's pioneering investigations suggest that α remains close to 1 over time in English children (Zipf, 1949, Chapter IV). In contrast, a wider study reported a tendency of α to decrease over time in sufficiently old children of different languages (Baixeries et al., 2013) but the study did not determine the actual number of children where that trend was statistically significant after controlling for multiple comparisons. Then children, as they become older, are likely to move either from left to right, keeping α constant, or from the left-upper corner (high α, low n) to the bottom-right corner (low α, high n) within each panel of Fig. 4. When φ is sufficiently large, the actual evolution of some children (decrease of α jointly with an increase of n) is dominated by the increase of M that the growth of n implies in the long run (Fig. 4).
When exploring the space of parameters, we must warrant that μ_k does not exceed the maximum degree that n, α and φ yield, namely μ_k ≤ μ_1, where μ_1 is defined according to Eq. 15 with i = 1, i.e.

    \mu_k \leq \mu_1 = c = (n-1)^{\alpha/(\phi+1)}.    (17)
Fig. 4 log10 M, the logarithm of the number of links M, as a function of n (x-axis) and α (y-axis) according to Eq. 16. log10 M is used instead of M to capture changes in the order of magnitude of M. (a) φ = 0, (b) φ = 0.5, (c) φ = 1, (d) φ = 1.5, (e) φ = 2 and (f) φ = 2.5.
3 Results

Here we will analyze Δ, which takes a negative value when strategy a (linking a new form to a new counterpart) is more advantageous than strategy b (linking a new form to an already connected counterpart), and a positive value otherwise. |Δ| indicates the strength of the bias towards strategy a if Δ < 0 and towards strategy b if Δ > 0. Therefore, when Δ < 0, the smaller the value of Δ, the higher the bias for strategy a, whereas when Δ > 0, the greater the value of Δ, the higher the bias for strategy b. Each class of skeleta is analyzed separately, beginning with the most restrictive class.
3.1 Vertex degrees do not exceed one

In this class of skeleta, corresponding to younger children, Δ depends only on λ, M and φ. We will explore the phase space with the help of two-dimensional heatmaps of Δ where the x-axis is always λ and the y-axis is M or φ.
Figs. 5 and 6 reveal regions where strategy a is more advantageous (red) and regions where b is more advantageous (blue) according to Δ. The extreme situation is found when φ = 0, where a single red region covers practically all the space except for λ = 0 (Fig. 5, top-left) as expected from previous work (Ferrer-i-Cancho, 2017a) and Eq. 5. Figs. 7 and 8 summarize these findings, displaying the curve that defines the boundary between strategies a and b (Δ = 0).
Figs. 7 and 8 show that strategy b is optimal only if λ is sufficiently low, namely when the weight of entropy minimization is sufficiently high compared to that of mutual information maximization. Fig. 7 shows that the larger the value of λ, the larger the number of links (M) that is required for strategy b to be optimal. Fig. 7 also indicates that the larger the value of φ, the broader the blue region where b is optimal. From a symmetric perspective, Fig. 8 shows that the larger the value of λ, the larger the value of φ that is required for strategy b to be optimal, and also that the larger the number of links (M), the broader the blue region where b is optimal.
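Because Δ(λ) = aλ + b is linear in λ (Section 2.1.1), the boundary Δ = 0 in Figs. 7 and 8 can be traced by solving λ* = -b/a. A Python sketch of that scan (illustrative, with a parameter grid of our own choosing) could look like:

```python
import numpy as np

def boundary_lambda(phi, M):
    """lambda* where Delta = 0 for the vertex-degrees-<=-one class (Eq. 11),
    using the linear form Delta(lambda) = a*lambda + b."""
    D = M + 2 ** (phi + 1) - 1
    log_ratio = np.log(1 + 2 * (2 ** phi - 1) / (M + 1))
    b = -log_ratio + phi * 2 ** (phi + 1) * np.log(2) / D
    a = 2 * log_ratio - (2 * phi + 1) * 2 ** (phi + 1) * np.log(2) / D
    return -b / a   # when b > 0, strategy b is optimal below this lambda

# Trace the frontier for one value of phi over an illustrative grid of M.
for M in (2, 10, 50, 150):
    print(M, round(boundary_lambda(phi=2.0, M=M), 3))
```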
3.2 Counterpart degrees do not exceed one

For this class of skeleta, corresponding to older children, we have assumed that word frequencies follow Zipf's rank-frequency law, namely that the relationship between the probability of a form (the number of counterparts connected to each form) and its frequency rank follows a right-truncated power law with exponent α (Section 2). Then Δ depends only on α (the exponent of the right-truncated power law), n (the number of forms), μ_k (the degree of the form linked to the counterpart in strategy b as shown in Fig. 3), λ and φ. We will explore the phase space with the help of two-dimensional heatmaps of Δ where the x-axis is always λ and the y-axis is μ_k, α or n. While in the class where vertex degrees do not exceed one we have found only one blue region (a region where Δ > 0, meaning that b is more advantageous), this class yields up to two distinct blue regions located in opposite corners of the heatmap while always keeping a red region, as shown in Figs. 10, 12 and 14 for φ = 1 from different perspectives. For the sake of brevity, this section only presents heatmaps of Δ for φ = 0 or φ = 1 (see Appendix C for the remainder). A summary of the exploration of the parameter space follows.

Heatmaps of Δ as a function of λ and μ_k. The heatmaps of Δ for different combinations of parameters in Figs. 9, 10, 16, 17, 18 and 19 are summarized in Fig. 11, showing the frontiers between regions where Δ = 0.
Fig. 5 Δ, the difference between the cost of strategy a and strategy b, as a function of M, the number of links, and λ, the parameter that controls the balance between mutual information maximization and entropy minimization, when vertex degrees do not exceed one (Eq. 11). Red indicates that strategy a is more advantageous while blue indicates that b is more advantageous. The lighter the red, the stronger the bias for strategy a. The lighter the blue, the stronger the bias for strategy b. (a) φ = 0, (b) φ = 0.5, (c) φ = 1, (d) φ = 1.5, (e) φ = 2 and (f) φ = 2.5.
Fig. 6 Δ, the difference between the cost of strategy a and strategy b, as a function of φ, the parameter that defines how the flesh of the model is obtained from the skeleton, and λ, the parameter that controls the balance between mutual information maximization and entropy minimization (Eq. 11). Red indicates that strategy a is more advantageous while blue indicates that b is more advantageous. The lighter the red, the stronger the bias for strategy a. The lighter the blue, the stronger the bias for strategy b. (a) M = 2, (b) M = 3, (c) M = 5, (d) M = 10, (e) M = 50 and (f) M = 150.
Fig. 7 Summary of the boundaries between positive and negative values of Δ when vertex degrees do not exceed one (Fig. 5). Each curve shows the points where Δ = 0 (Eq. 11) as a function of λ and M for distinct values of φ.
Notice how, for φ = 0, strategy a is optimal for all values of λ > 0, as one would expect from Eq. 5. The remainder of the figures show how the shape of the two regions changes with each of the parameters. For small n and α, a single blue region indicates that strategy b is more advantageous than a when λ is closer to 0 and μ_k is higher. For higher n or α, an additional blue region appears indicating that strategy b is also optimal for high values of λ and low values of μ_k.

Heatmaps of Δ as a function of λ and α. The heatmaps of Δ for different combinations of parameters in Figs. 12, 20, 21, 22 and 23 are summarized in Fig. 13, showing the frontiers between regions. There is a single region where strategy b is optimal for small values of μ_k and α, but for larger values a second blue region appears.

Heatmaps of Δ as a function of λ and n. The heatmaps of Δ for different combinations of parameters in Figs. 14, 24, 25, 26 and 27 are summarized in Fig. 15. Again, one or two blue regions appear depending on the combination of parameters.

See Appendix D for the impact of using discrete form degrees on the results presented in this section.
presented in this section.The advent and fall of a vocabulary learning bias from communicative eciency 21
0.02.55.07.510.0
0.00 0.25 0.50 0.75 1.00
λφM
2
3
5
10
50
150
Fig. 8 Summary of the boundaries between positive and negative values of when vertex
degrees do not exceed one (Fig. 6). Each curve shows the points where = 0 (Eq. 12) as a
function of andfor distinct values of M.22 David Carrera-Casado, Ramon Ferrer-i-Cancho
Fig. 9 Δ, the difference between the cost of strategy a and strategy b, as a function of μ_k, the degree of the form linked to the counterpart in strategy b as shown in Fig. 3, and λ, the parameter that controls the balance between mutual information maximization and entropy minimization, when the degrees of counterparts do not exceed one (Eq. 12) and φ = 0. Red indicates that strategy a is more advantageous while blue indicates that b is more advantageous. The lighter the red, the stronger the bias for strategy a. The lighter the blue, the stronger the bias for strategy b. Each heatmap corresponds to a distinct combination of n and α. The heatmaps are arranged, from left to right, with α = 0.5, 1, 1.5 and, from top to bottom, with n = 10, 100, 1000. (a) α = 0.5 and n = 10, (b) α = 1 and n = 10, (c) α = 1.5 and n = 10, (d) α = 0.5 and n = 100, (e) α = 1 and n = 100, (f) α = 1.5 and n = 100, (g) α = 0.5 and n = 1000, (h) α = 1 and n = 1000, (i) α = 1.5 and n = 1000.
Fig. 10 Same as in Fig. 9 but with φ = 1.
Fig. 11 Summary of the boundaries between positive and negative values of Δ when the degrees of counterparts do not exceed one (Figs. 9, 10, 16, 17, 18 and 19). Each curve shows the points where Δ = 0 (Eq. 12) as a function of λ and μ_k for distinct values of φ. (a) α = 0.5 and n = 10, (b) α = 1 and n = 10, (c) α = 1.5 and n = 10, (d) α = 0.5 and n = 100, (e) α = 1 and n = 100, (f) α = 1.5 and n = 100, (g) α = 0.5 and n = 1000, (h) α = 1 and n = 1000, (i) α = 1.5 and n = 1000.
Fig. 12 Δ, the difference between the cost of strategy a and strategy b, as a function of α, the exponent of the rank-frequency law, and λ, the parameter that controls the balance between mutual information maximization and entropy minimization, when the degrees of counterparts do not exceed one (Eq. 12) and φ = 1. Red indicates that strategy a is more advantageous while blue indicates that b is more advantageous. The lighter the red, the stronger the bias for strategy a. The lighter the blue, the stronger the bias for strategy b. Each heatmap corresponds to a distinct combination of n and μ_k. The heatmaps are arranged, from left to right, with n = 10, 100, 1000 and, from top to bottom, with μ_k = 1, 2, 4, 8. Gray indicates regions where μ_k exceeds the maximum degree according to the other parameters (Eq. 17). (a) μ_k = 1 and n = 10, (b) μ_k = 1 and n = 100, (c) μ_k = 1 and n = 1000, (d) μ_k = 2 and n = 10, (e) μ_k = 2 and n = 100, (f) μ_k = 2 and n = 1000, (g) μ_k = 4 and n = 10, (h) μ_k = 4 and n = 100, (i) μ_k = 4 and n = 1000, (j) μ_k = 8 and n = 10, (k) μ_k = 8 and n = 100, (l) μ_k = 8 and n = 1000.
Fig. 13 Summary of the boundaries between positive and negative values of Δ when the degrees of counterparts do not exceed one (Figs. 12, 20, 21, 22 and 23). Each curve shows the points where Δ = 0 (Eq. 12) as a function of λ and α for distinct values of φ. Points are restricted to combinations of parameters where μ_k does not exceed the maximum (Eq. 17). Each distinct heatmap corresponds to a distinct combination of μ_k and n. (a) μ_k = 1 and n = 10, (b) μ_k = 1 and n = 100, (c) μ_k = 1 and n = 1000, (d) μ_k = 2 and n = 10, (e) μ_k = 2 and n = 100, (f) μ_k = 2 and n = 1000, (g) μ_k = 4 and n = 10, (h) μ_k = 4 and n = 100, (i) μ_k = 4 and n = 1000, (j) μ_k = 8 and n = 10, (k) μ_k = 8 and n = 100, (l) μ_k = 8 and n = 1000.
Fig. 14 Δ, the difference between the cost of strategy a and strategy b, as a function of n, the number of forms, and λ, the parameter that controls the balance between mutual information maximization and entropy minimization, when the degrees of counterparts do not exceed one (Eq. 12) and φ = 1. We take values of n from 10 onwards (instead of one onwards) to see more clearly the light regions that are reflected on the color scales. Red indicates that strategy a is more advantageous while blue indicates that b is more advantageous. The lighter the red, the stronger the bias for strategy a. The lighter the blue, the stronger the bias for strategy b. Each heatmap corresponds to a distinct combination of μ_k and α. The heatmaps are arranged, from left to right, with α = 0.5, 1, 1.5 and, from top to bottom, with μ_k = 1, 2, 4, 8. Gray indicates regions where μ_k exceeds the maximum degree according to the other parameters (Eq. 17). (a) μ_k = 1 and α = 0.5, (b) μ_k = 1 and α = 1, (c) μ_k = 1 and α = 1.5, (d) μ_k = 2 and α = 0.5, (e) μ_k = 2 and α = 1, (f) μ_k = 2 and α = 1.5, (g) μ_k = 4 and α = 0.5, (h) μ_k = 4 and α = 1, (i) μ_k = 4 and α = 1.5, (j) μ_k = 8 and α = 0.5, (k) μ_k = 8 and α = 1, (l) μ_k = 8 and α = 1.5.
Fig. 15 Summary of the boundaries between positive and negative values of Δ when the degrees of counterparts do not exceed one (Figs. 14, 24, 25, 26 and 27). Each curve shows the points where Δ = 0 (Eq. 12) as a function of λ and n for distinct values of φ. Points are restricted to combinations of parameters where μ_k does not exceed the maximum (Eq. 17). Each distinct heatmap corresponds to a distinct combination of μ_k and α. (a) μ_k = 1 and α = 0.5, (b) μ_k = 1 and α = 1, (c) μ_k = 1 and α = 1.5, (d) μ_k = 2 and α = 0.5, (e) μ_k = 2 and α = 1, (f) μ_k = 2 and α = 1.5, (g) μ_k = 4 and α = 0.5, (h) μ_k = 4 and α = 1, (i) μ_k = 4 and α = 1.5, (j) μ_k = 8 and α = 0.5, (k) μ_k = 8 and α = 1, (l) μ_k = 8 and α = 1.5.
4 Discussion

4.1 Vocabulary learning

In previous research with φ = 0, we predicted that the vocabulary learning bias (strategy a) would be present provided that mutual information maximization is not disabled (λ > 0) (Ferrer-i-Cancho, 2017a), as shown in Eq. 5. However, the "decision" on whether to assign a new label to a linked or to an unlinked object is influenced by the age of a child and his/her degree of polylingualism. As for the effect of the latter, polylingual children tend to pick familiar objects more often than monolingual children, violating mutual exclusivity. This has been found for younger children below two years of age (17-22 months old in one study, 17-18 in another) (Houston-Price et al., 2010; Byers-Heinlein and Werker, 2013). From three years onward, the difference between polylinguals and monolinguals either vanishes, namely both violate mutual exclusivity similarly (Nicoladis and Laurent, 2020; Frank and Poulin-Dubois, 2002), or polylingual children are still more willing to accept lexical overlap (Kalashnikova et al., 2015). One possible explanation for this phenomenon is the lexicon structure hypothesis (Byers-Heinlein and Werker, 2013), which suggests that children that already have many multiple-word-to-single-object mappings may be more willing to suspend mutual exclusivity.
As for the effect of age on monolingual children, the so-called mutual exclusivity bias has been shown to appear at an early age and, as time goes on, to be more easily suspended. Starting at 17 months old, children tend to look at a novel object rather than a familiar one when presented with a new word, while 16-month-olds do not show a preference (Halberda, 2003). Interestingly, in the same study, 14-month-olds systematically look at a familiar object instead of a newer one. Reliance on mutual exclusivity is shown to improve between 18 and 30 months (Bion et al., 2013). Starting at least at 24 months of age, children may suspend mutual exclusivity to learn a second label for an object (Liittschwager and Markman, 1994). In a more recent study, it has been shown that three-year-old children will suspend mutual exclusivity if there are enough social cues present (Yildiz, 2020). Four- to five-year-old children continue to apply mutual exclusivity to learn new words but are able to apply it flexibly, suspending it when given appropriate contextual information (Kalashnikova et al., 2016) in order to associate multiple labels to the same familiar object. As seen before, at 3 years of age both monolingual and polylingual children have a similar willingness to suspend mutual exclusivity (Nicoladis and Laurent, 2020; Frank and Poulin-Dubois, 2002), although polylinguals may still have a greater tendency to accept multiple labels for the same object (Kalashnikova et al., 2015).
Here we have made an important contribution with respect to the precursor
of the current model (Ferrer-i-Cancho, 2017a): we have shown that the bias is not
theoretically inevitable (even when λ > 0) according to a more realistic model. In
a more complex setting, research on deep neural networks has shed light on the
architectures, learning biases and pragmatic strategies that are required for the
vocabulary learning bias to emerge (e.g. Gandhi and Lake, 2020; Gulordava et al.,
2020). In Section 3, we have discovered regions of the space of parameters where
strategy a is not advantageous for two classes of skeleta. In the restrictive class,
the one where vertex degrees do not exceed one, as expected in the earliest stages
of vocabulary learning in children, we have unveiled the existence of a region of the
phase space where strategy a is not advantageous (Figs. 7 and 6). In the broader
class of skeleta, where the degree of counterparts does not exceed one, we have
found up to two distinct regions where a is not advantageous (Figs. 11 and 13).
Crucially, our model predicts that the bias should be lost in older children.
The argument is as follows. Suppose a child that has not learned a word yet.
Then his/her skeleton belongs to the class where vertex degrees do not exceed one.
Then suppose that the child learns a new word. It could be that he/she learns
it following strategy a or b. If he/she applies b then the bias is gone, at least for this
word. Let us suppose that the child learns words adhering to strategy a for as long
as possible. By doing this, he/she will increase the number of links (M) of the
skeleton while keeping as invariant a one-to-one mapping between words and meanings
(Figs. 1 (c) and 2 (d)), which satisfies that vertex degrees do not exceed one. Then
Figs. 7 and 8 predict that the longer the time strategy a is kept (when φ > 0), the
larger the region of the phase space where a is not advantageous. Namely, as time
goes on, it will become increasingly more difficult to keep a as the best option.
Then it is not surprising that the bias weakens either in older children (e.g., Yildiz,
2020; Kalashnikova et al., 2016), as they are expected to have more links (larger M)
because of their continued accretion of new words (Saxton, 2010), or in polylinguals
(e.g., Nicoladis and Secco, 2000; Greene et al., 2013), where the mapping of words
into meanings, combining all their languages, is expected to yield more links than
in monolinguals. Polylinguals make use of code-mixing to compensate for lexical
gaps, as reported from one-year-olds onward (Nicoladis and Secco, 2000) as
well as in older children (five-year-olds) (Greene et al., 2013). As a result, the
bipartite skeleton of a polylingual integrates the words and associations of all the
languages spoken, and thus polylinguals are expected to have a larger value of M.
Children who know more translation equivalents (words from different languages
but with the same meaning) adhere to mutual exclusivity less than other children
(Byers-Heinlein and Werker, 2013). Therefore, our theoretical framework provides
an explanation for the lexicon structure hypothesis (Byers-Heinlein and Werker,
2013), while shedding light on the possible origin of the mechanism, which is not the
fact that there are already synonyms but rather the large number of links (Fig.
8) as well as the capacity of words of higher degree to attract more meanings, a
consequence of Eq. 3 with φ > 0 in the vocabulary learning process (Fig. 3). Recall
the stark contrast between Fig. 10 for φ = 1 and Fig. 9 with φ = 0, where
such an attraction effect is missing. Our models offer a transparent theoretical tool
to understand the failure of deep neural networks to reproduce the vocabulary
learning bias (Gandhi and Lake, 2020): in its simpler form (vertex degrees do not
exceed one), whether it is due to an excessive φ (Fig. 7) or an excessive M (Fig.
8).
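To make this prediction concrete, here is a minimal sketch in Python, assuming the closed forms of Δ(0) and Δ(1) reconstructed in Appendix A.3.3 and the linearity of Δ in λ implied by Eq. 38, and reading Δ ≥ 0 as the region where strategy a is not advantageous; the parameter grid is purely illustrative.

    import math

    def delta(lam, phi, M):
        # Vertex degrees <= 1 (Appendix A.3.3). Delta is affine in lambda
        # (Eq. 38), so interpolate the closed forms of Delta(0) and Delta(1).
        Mb = M + 2 ** (phi + 1) - 1                        # M'_b (Eq. 73)
        L = math.log(1 + 2 * (2 ** phi - 1) / (M + 1))     # log(M'_b / M'_a)
        C = 2 ** (phi + 1) * math.log(2) / Mb
        d0, d1 = -L + phi * C, L - (phi + 1) * C           # Delta(0), Delta(1)
        return (1 - lam) * d0 + lam * d1

    # Share of a (lambda, phi) grid with Delta >= 0, i.e. where strategy a
    # stops being advantageous, as the number of links M grows.
    for M in (1, 5, 20, 100):
        grid = [(l / 40, 0.1 + p / 20) for l in range(41) for p in range(41)]
        share = sum(delta(l, p, M) >= 0 for l, p in grid) / len(grid)
        print(f"M = {M:3d}  share with Delta >= 0: {share:.2f}")

Under these assumptions, the printed share grows with M, mirroring the widening of the region where a is not advantageous as links accumulate.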
We have focused on the loss of the bias in older children. However, there is
evidence that the bias is missing initially in children, by the age of 14 months
(Halberda, 2003). We speculate that this could be related to very young children
having lower values of λ or larger values of φ, as suggested by Figs. 7 and 6. This
issue should be the subject of future research. Methods to estimate λ and φ in real
speakers should be investigated.
Now we turn our attention to skeleta where only the degree of the counterparts
does not exceed one, which we believe to be more appropriate for older children.
Whereas λ, φ and M sufficed for the exploration of the phase space when vertex
degrees do not exceed one, the exploration of that kind of skeleta involved many
parameters: λ, φ, n, μk and α. The more general class exhibits behaviors that we
have already seen in the more restrictive class. While an increase in M implies a
widening of the region where a is not advantageous in the more restrictive class,
the more general class experiences an increase of M when n is increased but α and
φ remain constant (Section 2.1.2). Consistently with the more restrictive class,
such an increase of M leads to a growth of the regions where a is not advantageous, as
can be seen in Figs. 16, 10, 17, 18 and 19 when selecting a column (thus fixing φ
and α) and moving from the top to the bottom, increasing n. The challenge is
that α may not remain constant in real children as they become older, and how to
involve the remainder of the parameters in the argument. In fact, some of these
parameters are known to be correlated with child's age:
- n tends to increase over time in children, as children are learning new words
over time (Saxton, 2010). We assume that the loss of words can be neglected
in children.
- M tends to increase over time in children. In this class of skeleta, the growth
of M has two sources: the learning of new words as well as the learning of new
meanings for existing words. We assume that the loss of connections can be
neglected in children.
- The ambiguity of the words that children learn tends to increase over
time (Casas et al., 2018). This does not imply that children are learning all
the meanings of the word according to some online dictionary but rather that,
as time goes on, children are able to handle words that have more meanings
according to adult standards.
- α remains stable over time or tends to decrease over time in children depending
on the individual (Baixeries et al., 2013; Zipf, 1949, Chapter IV).
For other parameters, we can just speculate on their evolution with child's age.
The growth of M and the increase in the learning of ambiguous words over time
lead one to expect that the maximum value of μk will be larger in older children. It
is hard to tell if older children will have a chance to encounter larger values of
μk. We do not know the value of λ in real language but the higher diversity of
vocabulary in older children and adults (Baixeries et al., 2013) suggests that λ
may tend to increase over time, because the lower the value of λ, the higher the
pressure to minimize the entropy of words (Eq. 4), namely the higher the force
towards unification in Zipf's view (Zipf, 1949). We do not know the real value of φ
but a reasonable choice for adult language is φ = 1 (Ferrer-i-Cancho and Vitevitch,
2018).
Given the complexity of the space of parameters in the more general class
of skeleta, where only the degrees of counterparts cannot exceed one, we cannot
make predictions that are as strong as those stemming from the class where vertex
degrees cannot exceed one. However, we wish to make some remarks suggesting
that a weakening of the vocabulary learning bias is also expected in older children
for this class (provided that φ > 0). The combination of increasing n and a value
of α that is stable over time suggests a weakening of the strategy a over time from
different perspectives:
- Children evolve on a column of panels (constant α) of the matrix of panels
in Figs. 16, 10, 17, 18 and 19, moving from the top (low n) to the bottom (large
n). That trajectory implies an increase of the size of the blue region, where
strategy a is not advantageous.
- We do not know the temporal evolution of μk but once μk is fixed, namely a
row of panels is selected in Figs. 20, 12, 21, 22 and 23, children evolve from
left (lower n) to right (higher n), which implies an increase of the size of the
blue region where strategy a is not advantageous as children become older.
- Within each panel in Figs. 24, 14, 25, 26 and 27, an increase of n, as a result
of vocabulary learning over time, implies a widening of the blue region.
In the preceding analysis we have assumed that α remains stable over time. We
wish to speculate on the combination of increasing n and decreasing α as time goes
on in certain children. In that case, children would evolve close to the diagonal of
the matrix of panels, starting from the upper-right corner (low n, high α, panel
(c)) towards the lower-left corner (high n, low α, panel (g)) in Figs. 16, 10, 17,
18 and 19, which implies an increase of the size of the blue region where strategy
a is not advantageous. Recall that we have argued that a combined increase of n
and decrease of α is likely to lead in the long run to an increase of M (Fig. 4). We
suggest that the behavior "along the diagonal" of the matrix is an extension of
the weakening of the bias when M is increased in the more restrictive class (Fig.
8).
In our exploration of the phase space for the class of the skeleta where the
degrees of counterparts do not exceed one, we assumed a right-truncated power law
with two parameters, α and n, as a model for Zipf's rank-frequency law. However,
distributions giving a better fit have been considered (Li et al., 2010), and functions
(distributions) capturing the shape of the law in what Piotrowski called saturated
samples (Piotrowski and Spivak, 2007) should be considered in future research. Our
exploration of the phase space was limited by a brute-force approach neglecting
the negative correlation between n and α that is expected in children, where α
and time are negatively correlated: as children become older, n increases as a
result of word learning (Saxton, 2010) but α decreases (Baixeries et al., 2013). A
more powerful exploration of the phase space could be performed with a realistic
mathematical relationship for the expected correlation between n and α, which
invites empirical research. Finally, there might be deeper and better ways of
parameterizing the class of skeleta.
4.2 Biosemiotics
Biosemiotics is concerned with building bridges between biology, philosophy, lin-
guistics, and the communication sciences, as announced on the front page of this
journal ( https://www.springer.com/journal/12304 ). As far as we know, there is lit-
tle research on the vocabulary learning bias in other species. Its confirmation in
a domestic dog suggests that "the perceptual and cognitive mechanisms that may
mediate the comprehension of speech were already in place before early humans began
to talk" (Kaminski et al., 2004). We hypothesize that the cost function Ω cap-
tures the essence of these mechanisms. A promising target for future research is
ape gestures, where there has been significant progress recently on their meaning
(Hobaiter and Byrne, 2014). As far as we know, there is no research on that bias
in other domains that also fall into the scope of biosemiotics, e.g., in unicellu-
lar organisms such as bacteria. Our research has established some mathematical
foundations for research on the accretion and interpretation of signs across the
living world, not only among great apes, a key problem in the research program of
biosemiotics (Kull, 2018).
The remainder of the discussion section is devoted to examining general chal-
lenges that are shared by biosemiotics and quantitative linguistics, a field that, like
biosemiotics, aspires to contribute to developing a science beyond human communi-
cation.
4.3 Science and its method
It has been argued that a problem of research on the rank-frequency law is "the
absence of novel predictions..., which has led to a very peculiar situation in the
cognitive sciences, where we have a profusion of theories to explain an empirical phe-
nomenon, yet very little attempt to distinguish those theories using scientific methods"
(Piantadosi, 2014). As we have already shown the predictive power of a model
whose original target was the rank-frequency law, here and in previous research
(Ferrer-i-Cancho, 2017a), we take this criticism as an invitation to reflect on sci-
ence and its method (Altmann, 1993; Bunge, 2001).
4.3.1 The generality of the patterns for theory construction
While in psycholinguistics and the cognitive sciences a major source of evidence is
often experiments involving restricted tasks or sophisticated statistical analyses
covering a handful of languages (typically English and a few other Indo-European
languages), quantitative linguistics aims to build theory departing from statistical
laws holding in a typologically wide range of languages (Köhler, 1987; Debowski,
2020), as reflected in Fig. 1. In addition, here we have investigated a specific vocab-
ulary learning phenomenon that is, however, supported cross-linguistically (recall
Section 1). A recent review on the efficiency of languages only pays attention to
the law of abbreviation (Gibson et al., 2019), in contrast with the body of work
that has been developed in the last decades linking laws with optimization princi-
ples (Fig. 1), suggesting that this law is the only general pattern of languages that
is shaped by efficiency or that linguistic laws are secondary for deep theorizing
on efficiency. In other domains of the cognitive sciences, the importance of scaling
laws has been recognized (Chater and Brown, 1999; Kello et al., 2010; Baronchelli
et al., 2013).
4.3.2 Novel predictions
In Section 4.1, we have checked predictions of our information theoretic framework
that match knowledge on the vocabulary learning bias from past research. Our
theoretical framework allows the researcher to play the game of science in another
direction: use the relevant parameters to guide the design of new experiments with
children or adults where more detailed predictions of the theoretical framework
can be tested. For children who have about the same n and α, and φ = 1, our
model predicts that strategy a will be discarded if (Fig. 10)
(1) λ is low and μk (Fig. 3) is large enough.
(2) λ is high and μk is sufficiently low.
Interestingly, there is a red horizontal band in Fig. 10, and even for other values of
φ such that φ ≠ 1 but keeping φ > 0 (Figs. 16, 17, 18, 19), indicating the existence
of some value of μk or a range of μk where strategy a is always advantageous (notice,
however, that when φ > 1, the band may become too narrow for an integer μk to
fit, as suggested by Figs. 31, 32, 33 in Appendix D). Therefore the 1st concrete
prediction is that, for a given child, there is likely to be some range or value of μk
where the bias (strategy a) will be observed. The 2nd concrete prediction that can
be made is on the conditions where the bias will not be observed. Although the
true value of λ is not known yet, previous theoretical research with φ = 0 suggests
that λ ≈ 1/2 in real language (Ferrer-i-Cancho and Solé, 2003; Ferrer-i-Cancho,
2005b, 2006, 2005a), which would imply that real speakers should satisfy only (1).
Child or adult language researchers may design experiments where μk is varied.
If successful, that would confirm the lexicon structure hypothesis (Byers-Heinlein
and Werker, 2013) while providing a deeper understanding. These are just examples
of experiments that could be carried out.
4.3.3 Towards a mathematical theory of language efficiency
Our past and current research on efficiency is supported by a cost function
and an (analytical or numerical) mathematical procedure that links the minimiza-
tion of the cost function with the target phenomena, e.g., vocabulary learning,
as in research on how pressure for efficiency gives rise to Zipf's rank-frequency
law, the law of abbreviation or Menzerath's law (Ferrer-i-Cancho, 2005b; Gusti-
son et al., 2016; Ferrer-i-Cancho et al., 2019). In the cognitive sciences, such a
cost function and the mathematical linking argument are sometimes missing (e.g.,
Piantadosi et al., 2011) and neglected when reviewing how languages are shaped
by efficiency (Gibson et al., 2019). A truly quantitative approach in the context
of language efficiency is two-fold: it has to comprise both a quantitative descrip-
tion of the data and quantitative theorizing, i.e. it has to employ both statistical
methods of analysis and mathematical methods to define the cost and how cost
minimization leads to the expected phenomena. Our framework relies on standard
information theory (Cover and Thomas, 2006) and its extensions (Ferrer-i-Cancho
et al., 2019; Debowski, 2020). The psychological foundations of the information
theoretic principles postulated in that framework and the relationships between
them have already been reviewed (Ferrer-i-Cancho, 2018). How the so-called noisy-
channel "theory" or noisy-channel hypothesis explains the results in (Piantadosi
et al., 2011), others reviewed recently (Gibson et al., 2019), or language laws in a
broad sense has not yet been shown, to our knowledge, with detailed enough information
theoretic arguments. Furthermore, the major conclusions of the statistical analysis
of (Piantadosi et al., 2011) have recently been shown to change substantially after
improving the methods: effects attributable to plain compression are stronger than
previously reported (Meylan and Griffiths, 2021). Theory is crucial to reduce false
positives and replication failures (Stewart and Plotkin, 2021). In addition, higher
order compression can explain more parsimoniously phenomena that are central
in noisy-channel "theorizing" (Ferrer-i-Cancho, 2017b).
4.3.4 The trade-off between parsimony and perfect fit
Our emphasis is on generality and parsimony over perfect fit. Piantadosi (2014)
puts the emphasis on what models of Zipf's rank-frequency law apparently do not
explain, while our emphasis is on what the models do explain and the many predic-
tions they make (Table 1), in spite of their simple design. It is worth recalling a
big lesson from machine learning, i.e. that a perfect fit can be obtained simply by over-
fitting the data, and another big lesson from the philosophy of science to machine
learning and AI: sophisticated models (especially deep learning ones) are in most
cases black boxes that imitate complex behavior but neither explain nor yield un-
derstanding. In our theoretical framework, the principle of contrast (Clark, 1987)
or the mutual exclusivity bias (Markman and Wachtel, 1988; Merriman and Bow-
man, 1989) are not principles per se (or core principles) but predictions of the prin-
ciple of mutual information maximization involved in explaining the emergence of
Zipf's rank-frequency law (Ferrer-i-Cancho and Solé, 2003; Ferrer-i-Cancho, 2005b)
and word order patterns (Ferrer-i-Cancho, 2017b). Although there are computa-
tional models that are able to account for that vocabulary learning bias and other
phenomena (Frank et al., 2009; Gulordava et al., 2020), ours is much simpler,
transparent (in opposition to black-box modeling) and, to the best of our knowledge,
the first to predict that the bias will weaken over time, providing a preliminary
understanding of why this could happen.
Acknowledgements We are grateful to two anonymous reviewers for their valuable feedback
and recommendations to improve the article. We are also grateful to A. Hernández-Fernández
and G. Boleda for their revision of the article and many recommendations to improve it. The
article has benefited from discussions with T. Brochhagen, S. Semple and M. Gustison. Finally,
we thank C. Hobaiter for her advice and inspiration for future research. DCC and RFC are
supported by the grant TIN2017-89244-R from MINECO (Ministerio de Economía, Industria
y Competitividad). RFC is also supported by the recognition 2017SGR-856 (MACDA) from
AGAUR (Generalitat de Catalunya).
References
Altmann G (1993) Science and linguistics. In: Köhler R, Rieger B (eds) Contributions to Quantitative Linguistics, Kluwer, Dordrecht, Boston, London, pp 3–10
Baixeries J, Elvevåg B, Ferrer-i-Cancho R (2013) The evolution of the exponent of Zipf's law in language ontogeny. PLoS ONE 8(3):e53227
Baronchelli A, Ferrer-i-Cancho R, Pastor-Satorras R, Chater N, Christiansen M (2013) Networks in cognitive science. Trends in Cognitive Sciences 17:348–360
Bentz C, Ferrer-i-Cancho R (2016) Zipf's law of abbreviation as a language universal. In: Bentz C, Jäger G, Yanovich I (eds) Proceedings of the Leiden Workshop on Capturing Phylogenetic Algorithms for Linguistics, University of Tübingen
Bion RA, Borovsky A, Fernald A (2013) Fast mapping, slow learning: Disambiguation of novel word-object mappings in relation to vocabulary learning at 18, 24, and 30 months. Cognition 126(1):39–53, DOI 10.1016/j.cognition.2012.08.008
Brochhagen T (2021) Brief at the risk of being misunderstood: Consolidating population- and individual-level tendencies. Computational Brain & Behavior DOI 10.1007/s42113-021-00099-x
Bunge M (2001) La science, sa méthode et sa philosophie. Vigdor
Byers-Heinlein K, Werker JF (2013) Lexicon structure and the disambiguation of novel words: Evidence from bilingual infants. Cognition 128(3):407–416, DOI 10.1016/j.cognition.2013.05.010
Casas B, Català N, Ferrer-i-Cancho R, Hernández-Fernández A, Baixeries J (2018) The polysemy of the words that children learn over time. Interaction Studies 19(3):389–426
Chater N, Brown GDA (1999) Scale invariance as a unifying psychological principle. Cognition 69:1999
Clark E (1987) The principle of contrast: A constraint on language acquisition. In: MacWhinney B (ed) Mechanisms of language acquisition, Lawrence Erlbaum Associates, Hillsdale, NJ
Clark E (1993) The lexicon in acquisition. Cambridge University Press
Cormen TH, Leiserson CE, Rivest RL (1990) Introduction to algorithms, The MIT Press, Cambridge, MA, chap 4. Summations
Cover TM, Thomas JA (2006) Elements of information theory. Wiley, New York, 2nd edition
Dangli L, Abazaj G (2009) Absolute versus relative synonymy. Linguistic and Communicative Performance Journal 2:64–68
Deacon TW (1997) The Symbolic Species: the Co-evolution of Language and the Brain. W. W. Norton & Company, New York
Deacon TW (2015) Steps to a science of biosemiotics. Green Letters 19(3):293–311, DOI 10.1080/14688417.2015.1072948
Debowski L (2020) Information Theory Meets Power Laws: Stochastic Processes and Language Models. Wiley, Hoboken, NJ
Eco U (1986) Semiotics and the philosophy of language. Indiana University Press, Bloomington
Ellis SR, Hitchcock RJ (1986) The emergence of Zipf's law: spontaneous encoding by users of a command language. IEEE Trans Syst Man Cyber 16(3):423–427
Eun-Nam S (2017) Word learning characteristics of 3- to 6-year-olds: Focused on the mutual exclusivity assumption. Journal of speech-language & hearing disorders 26(4):33–40
Ferrer-i-Cancho R (2005a) The variation of Zipf's law in human language. European Physical Journal B 44:249–257
Ferrer-i-Cancho R (2005b) Zipf's law from a communicative phase transition. European Physical Journal B 47:449–457, DOI 10.1140/epjb/e2005-00340-y
Ferrer-i-Cancho R (2006) When language breaks into pieces. A conflict between communication through isolated signals and language. Biosystems 84:242–253
Ferrer-i-Cancho R (2016a) Compression and the origins of Zipf's law for word frequencies. Complexity 21:409–411
Ferrer-i-Cancho R (2016b) The meaning-frequency law in Zipfian optimization models of communication. Glottometrics 35:28–37
Ferrer-i-Cancho R (2017a) The optimality of attaching unlinked labels to unlinked meanings. Glottometrics 36:1–16
Ferrer-i-Cancho R (2017b) The placement of the head that maximizes predictability. An information theoretic approach. Glottometrics 39:38–71
Ferrer-i-Cancho R (2018) Optimization models of natural communication. Journal of Quantitative Linguistics 25(3):207–237
Ferrer-i-Cancho R, Díaz-Guilera A (2007) The global minima of the communicative energy of natural communication systems. Journal of Statistical Mechanics: Theory and Experiment 06009(6), DOI 10.1088/1742-5468/2007/06/P06009
Ferrer-i-Cancho R, Solé RV (2003) Least effort and the origins of scaling in human language. Proceedings of the National Academy of Sciences of the United States of America 100(3):788–791, DOI 10.1073/pnas.0335980100
Ferrer-i-Cancho R, Vitevitch M (2018) The origins of Zipf's meaning-frequency law. Journal of the American Association for Information Science and Technology 69(11):1369–1379
Ferrer-i-Cancho R, Riordan O, Bollobás B (2005) The consequences of Zipf's law for syntax and symbolic reference. Proceedings of the Royal Society of London B 272:561–565
Ferrer-i-Cancho R, Bentz C, Seguin C (2019) Optimal coding and the origins of Zipfian laws. Journal of Quantitative Linguistics, in press, DOI 10.1080/09296174.2020.1778387
Frank I, Poulin-Dubois D (2002) Young monolingual and bilingual children's responses to violation of the mutual exclusivity principle. International Journal of Bilingualism 6(2):125–146, DOI 10.1177/13670069020060020201
Frank MC, Goodman ND, Tenenbaum JB (2009) Using speakers' referential intentions to model early cross-situational word learning. Psychological Science 20(5):578–585, DOI 10.1111/j.1467-9280.2009.02335.x
Fromkin V, Rodman R, Hyams N (2014) An Introduction to Language, 10th edn. Wadsworth Publishing, Boston, MA
Futrell R (2020) https://twitter.com/rljfutrell/status/1275834876055351297
Gandhi K, Lake B (2020) Mutual exclusivity as a challenge for deep neural networks. In: Advances in Neural Information Processing Systems (NeurIPS), 33
Genty E, Zuberbühler K (2014) Spatial reference in a bonobo gesture. Current Biology 24(14):1601–1605, DOI 10.1016/j.cub.2014.05.065
Gibson E, Futrell R, Piantadosi S, Dautriche I, Mahowald K, Bergen L, Levy R (2019) How efficiency shapes human language. Trends in Cognitive Sciences 23:389–407
Greene KJ, Peña ED, Bedore LM (2013) Lexical choice and language selection in bilingual preschoolers. Child Language Teaching and Therapy 29(1):27–39, DOI 10.1177/0265659012459743
Gulordava K, Brochhagen T, Boleda G (2020) Deep daxes: Mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks. In: Proceedings of CogSci 2020, pp 2089–2095
Gustison ML, Semple S, Ferrer-i-Cancho R, Bergman T (2016) Gelada vocal sequences follow Menzerath's linguistic law. Proceedings of the National Academy of Sciences USA 13(19):E2750–E2758, DOI 10.1073/pnas.1522072113
Halberda J (2003) The development of a word-learning strategy. Cognition 87(1):23–34, DOI 10.1016/S0010-0277(02)00186-5
Haryu E (1991) A developmental study of children's use of mutual exclusivity and context to interpret novel words. The Japanese Journal of Educational Psychology 39(1):11–20, DOI 10.5926/jjep1953.39.1_11
Hendrickson AT, Perfors A (2019) Cross-situational learning in a Zipfian environment. Cognition 189:11–22, DOI 10.1016/j.cognition.2019.03.005
Hobaiter C, Byrne RW (2014) The meanings of chimpanzee gestures. Current Biology 24:1596–1600
Houston-Price C, Caloghiris Z, Raviglione E (2010) Language experience shapes the development of the mutual exclusivity bias. Infancy 15(2):125–150, DOI 10.1111/j.1532-7078.2009.00009.x
Hung WY, Patrycia F, Yow WQ (2015) Bilingual children weigh speaker's referential cues and word-learning heuristics differently in different language contexts when interpreting a speaker's intent. Frontiers in Psychology 6:1–9, DOI 10.3389/fpsyg.2015.00796
Hurford J (1989) Biological evolution of the Saussurean sign as a component of the language acquisition device. Lingua 77:187–222, DOI 10.1016/0024-3481(89)90015-6
Kalashnikova M, Mattock K, Monaghan P (2015) The effects of linguistic experience on the flexible use of mutual exclusivity in word learning. Bilingualism 18(4):626–638, DOI 10.1017/S1366728914000364
Kalashnikova M, Mattock K, Monaghan P (2016) Flexible use of mutual exclusivity in word learning. Language Learning and Development 12(1):79–91, DOI 10.1080/15475441.2015.1023443
Kalashnikova M, Oliveri A, Mattock K (2019) Acceptance of lexical overlap by monolingual and bilingual toddlers. International Journal of Bilingualism 23(6):1517–1530, DOI 10.1177/1367006918808041
Kaminski J, Call J, Fischer J (2004) Word learning in a domestic dog: Evidence for "fast mapping". Science 304(5677):1682–1683, DOI 10.1126/science.1097859
Kanwal J, Smith K, Culbertson J, Kirby S (2017) Zipf's law of abbreviation and the principle of least effort: Language users optimise a miniature lexicon for efficient communication. Cognition 165:45–52
Kello CT, Brown GDA, Ferrer-i-Cancho R, Holden JG, Linkenkaer-Hansen K, Rhodes T, Orden GCV (2010) Scaling laws in cognitive sciences. Trends in Cognitive Sciences 14(5):223–232, DOI 10.1016/j.tics.2010.02.005
Köhler R (1987) System theoretical linguistics. Theor Linguist 14(2-3):241–257
Kull K (1999) Biosemiotics in the twentieth century: A view from biology. Semiotica 127(1/4):385–414
Kull K (2018) Choosing and learning: Semiosis means choice. Sign Systems Studies 46(4):452–466
Kull K (2020) Codes: Necessary, but not sufficient for meaning-making. Constructivist Foundations 15(2):137–139
Li W, Miramontes P, Cocho G (2010) Fitting ranked linguistic data with two-parameter functions. Entropy 12(7):1743–1764
Liittschwager JC, Markman EM (1994) Sixteen- and 24-month-olds' use of mutual exclusivity as a default assumption in second-label learning. Developmental Psychology 30(6):955–968, DOI 10.1037/0012-1649.30.6.955
Lund K, Burgess C (1996) Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers 28(2):203–208
Markman E, Wachtel G (1988) Children's use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology 20:121–157
Merriman WW, Bowman LL (1989) The mutual exclusivity bias in children's word learning. Monographs of the Society for Research in Child Development 54:1–129
Meylan S, Griffiths T (2021) The challenges of large-scale, web-based language datasets: Word length and predictability revisited. PsyArXiv DOI 10.31234/osf.io/6832r, URL psyarxiv.com/6832r
Moore R (2014) Ape gestures: Interpreting chimpanzee and bonobo minds. Current Biology 24(14):R645–R647, DOI 10.1016/j.cub.2014.05.072
Nicoladis E, Laurent A (2020) When knowing only one word for "car" leads to weak application of mutual exclusivity. Cognition 196:104087, DOI 10.1016/j.cognition.2019.104087
Nicoladis E, Secco G (2000) The role of a child's productive vocabulary in the language choice of a bilingual family. First Language 20(58):003–28, DOI 10.1177/014272370002005801
Piantadosi S (2014) Zipf's law in natural language: a critical review and future directions. Psychonomic Bulletin and Review 21:1112–1130
Piantadosi ST, Tily H, Gibson E (2011) Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences 108(9):3526–3529
Piotrowski RG, Spivak DL (2007) Linguistic disorders and pathologies: synergetic aspects. In: Grzybek P, Köhler R (eds) Exact methods in the study of language and text. To honor Gabriel Altmann, Gruyter, Berlin, pp 545–554
Pulvermüller F (2001) Brain reflections of words and their meaning. Trends in Cognitive Sciences 5(12):517–524
Pulvermüller F (2013) How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences 17(9):458–470, DOI 10.1016/j.tics.2013.06.004
Saxton M (2010) Child language. Acquisition and development, SAGE, Los Angeles, chap 6. The developing lexicon: what's in a name?, pp 133–158
Steels L (1996) The spontaneous self-organization of an adaptive language. Machine Intelligence 15:205–224
Stewart AJ, Plotkin JB (2021) The natural selection of good science. Nature Human Behaviour DOI 10.1038/s41562-021-01111-x
Yildiz M (2020) Conflicting nature of social-pragmatic cues with mutual exclusivity regarding three-year-olds' label-referent mappings. Psychology of Language and Communication 24(1):124–141, DOI 10.2478/plc-2020-0008
Yurovsky D, Yu C (2008) Mutual exclusivity in cross-situational statistical learning. Proceedings of the Annual Meeting of the Cognitive Science Society, pp 715–720
Zaslavsky N, Kemp C, Regier T, Tishby N (2018) Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences 115(31):7937–7942, DOI 10.1073/pnas.1800521115
Zaslavsky N, Maldonado M, Culbertson J (2021) Let's talk (efficiently) about us: Person systems achieve near-optimal compression. PsyArXiv DOI 10.31234/osf.io/kcu27, URL psyarxiv.com/kcu27
Zipf GK (1945) The meaning-frequency relationship of words. Journal of General Psychology 33:251–266
Zipf GK (1949) Human behaviour and the principle of least effort. Addison-Wesley, Cambridge (MA), USA
A The mathematical model in detail

This appendix is organized as follows. Section A.1 details the expressions for probabilities and
entropies introduced in Section 2. Section A.2 addresses the general problem of the dynamic
calculation of Ω (Eq. 8) when a cell of the adjacency matrix is mutated, deriving the formulae
to update these entropies once a single mutation has taken place. Finally, Section A.3 applies
these formulae to derive the expressions for Δ presented in Section 2.1.
A.1 Probabilities and entropies

In Section 2, we obtained an expression for the joint probability of a form and a counterpart (Eq. 6) and the corresponding normalization factor, M_φ (Eq. 7). Notice that M_0 is the number of edges of the bipartite graph, i.e. M = M_0. To ease the derivation of the marginal probabilities, we define

    μ_{φ,i} = ∑_{j=1}^{m} a_{ij} ω_j^φ                                  (18)
    ω_{φ,j} = ∑_{i=1}^{n} a_{ij} μ_i^φ.                                 (19)

Notice that μ_{φ,i} and ω_{φ,j} should not be confused with μ_i and ω_j (the degree of the form i and of the counterpart j respectively). Indeed, μ_i = μ_{0,i} and ω_j = ω_{0,j}. From the joint probability (Eq. 6), we obtain the marginal probabilities

    p(s_i) = ∑_{j=1}^{m} p(s_i, r_j) = μ_i^φ μ_{φ,i} / M_φ              (20)
    p(r_j) = ∑_{i=1}^{n} p(s_i, r_j) = ω_j^φ ω_{φ,j} / M_φ.             (21)

To obtain expressions for the entropies, we use the rule

    −∑_i (x_i/T) log(x_i/T) = log T − (1/T) ∑_i x_i log x_i,            (22)

which holds when ∑_i x_i = T.

We can now derive the entropies H(S,R), H(S) and H(R). Applying Eq. 6 to

    H(S,R) = −∑_{i=1}^{n} ∑_{j=1}^{m} p(s_i, r_j) log p(s_i, r_j),

we obtain

    H(S,R) = log M_φ − (φ/M_φ) ∑_{i=1}^{n} ∑_{j=1}^{m} a_{ij} (μ_i ω_j)^φ log(μ_i ω_j).

Applying Eq. 20 and the rule in Eq. 22,

    H(S) = −∑_{i=1}^{n} p(s_i) log p(s_i)

becomes

    H(S) = log M_φ − (1/M_φ) ∑_{i=1}^{n} μ_i^φ μ_{φ,i} log(μ_i^φ μ_{φ,i}).

By symmetry, an equivalent formula for H(R) can be derived easily using Eq. 21, obtaining

    H(R) = log M_φ − (1/M_φ) ∑_{j=1}^{m} ω_j^φ ω_{φ,j} log(ω_j^φ ω_{φ,j}).

Interestingly, when φ = 0, the entropies simplify as

    H(S,R) = log M_0
    H(S) = log M_0 − (1/M_0) ∑_{i=1}^{n} μ_i log μ_i
    H(R) = log M_0 − (1/M_0) ∑_{j=1}^{m} ω_j log ω_j

as expected from previous work (Ferrer-i-Cancho, 2005b). Given the formulae for H(S,R), H(S) and H(R) above, the calculation of Ω(λ) (Eq. 9) is straightforward.
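As a sanity check on these formulae, the following Python sketch (ours; the equation numbers refer to the reconstruction above) computes M_φ, the joint and marginal distributions, and the three entropies directly from a binary adjacency matrix:

    import math

    def entropies(A, phi):
        """H(S,R), H(S), H(R) from a binary adjacency matrix (Eqs. 6, 7, 20, 21)."""
        n, m = len(A), len(A[0])
        mu = [sum(row) for row in A]                                # form degrees
        om = [sum(A[i][j] for i in range(n)) for j in range(m)]     # counterpart degrees
        edges = [(i, j) for i in range(n) for j in range(m) if A[i][j]]
        M = sum((mu[i] * om[j]) ** phi for i, j in edges)           # M_phi (Eq. 7)
        p = {(i, j): (mu[i] * om[j]) ** phi / M for i, j in edges}  # joint (Eq. 6)
        ps = [sum(p.get((i, j), 0.0) for j in range(m)) for i in range(n)]
        pr = [sum(p.get((i, j), 0.0) for i in range(n)) for j in range(m)]
        def H(dist):
            return -sum(q * math.log(q) for q in dist if q > 0)
        return H(p.values()), H(ps), H(pr)

    # Tiny example: two linked forms, one of them ambiguous. At phi = 0 the
    # joint entropy must reduce to log M_0 (here log 3), as stated above.
    A = [[1, 0, 0],
         [0, 1, 1]]
    print(entropies(A, phi=0.0))
    print(entropies(A, phi=1.0))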
A.2 Change in entropies after a single mutation in the adjacency matrix

Here we investigate a general problem: the change in the entropies needed to calculate Δ when there is a single mutation in the cell (i,j) of the adjacency matrix, i.e. when a link between a form s_i and a counterpart r_j is added (a_{ij} becomes 1) or deleted (a_{ij} becomes 0). The goal of this analysis is to provide the mathematical foundations for research on the evolution of communication and, in particular, the problem of learning a new word, i.e. linking a form that was previously unlinked (Appendix A.3), which is a particular case of mutation where a_{ij} = 0 and μ_i = 0 before the mutation (a_{ij} = 1 and μ_i = 1 after the mutation).

Firstly, we express the entropies compactly as

    H(S,R) = log M_φ − (φ/M_φ) X(S,R)            (23)
    H(S) = log M_φ − (1/M_φ) X(S)                (24)
    H(R) = log M_φ − (1/M_φ) X(R)                (25)

with

    X(S,R) = ∑_{(i,j)∈E} x(s_i, r_j)
    X(S) = ∑_{i=1}^{n} x(s_i)
    X(R) = ∑_{j=1}^{m} x(r_j)                    (26)
    x(s_i, r_j) = (μ_i ω_j)^φ log(μ_i ω_j)       (27)
    x(s_i) = μ_i^φ μ_{φ,i} log(μ_i^φ μ_{φ,i})    (28)
    x(r_j) = ω_j^φ ω_{φ,j} log(ω_j^φ ω_{φ,j}).   (29)

We will use a prime mark to indicate the new value of a certain measure once a mutation has been produced in the adjacency matrix. Suppose that a_{ij} mutates. Then

    a'_{ij} = 1 − a_{ij}
    μ'_i = μ_i + (−1)^{a_{ij}}                   (30)
    ω'_j = ω_j + (−1)^{a_{ij}}.                  (31)

We define S(i) as the set of neighbors of s_i in the graph and, similarly, R(j) as the set of neighbors of r_j in the graph. Then μ'_{φ,k} can only change if k = i or k ∈ R(j) (recall Eq. 18) and ω'_{φ,l} can only change if l = j or l ∈ S(i) (Eq. 19). Then, for any k such that 1 ≤ k ≤ n, we have that

    μ'_{φ,k} = μ_{φ,k} − a_{ij} ω_j^φ + (1 − a_{ij}) (ω'_j)^φ   if k = i
             = μ_{φ,k} − ω_j^φ + (ω'_j)^φ                       if k ∈ R(j)
             = μ_{φ,k}                                          otherwise.   (32)

Likewise, for any l such that 1 ≤ l ≤ m, we have that

    ω'_{φ,l} = ω_{φ,l} − a_{ij} μ_i^φ + (1 − a_{ij}) (μ'_i)^φ   if l = j
             = ω_{φ,l} − μ_i^φ + (μ'_i)^φ                       if l ∈ S(i)
             = ω_{φ,l}                                          otherwise.   (33)

We then aim to calculate M'_φ and X'(S,R) from M_φ and X(S,R) (Eq. 7 and Eq. 23) respectively. Accordingly, we focus on the pairs (s_k, r_l), shortly (k,l), such that μ'_k ω'_l = μ_k ω_l may not hold. These pairs belong to E(i,j) ∪ {(i,j)}, where E(i,j) is the set of edges having s_i or r_j at one of the ends. That is, E(i,j) is the set of edges of the form (i,l) where l ∈ S(i) or (k,j) where k ∈ R(j). Then the new value of M_φ will be

    M'_φ = M_φ − [∑_{(k,l)∈E(i,j)} (μ_k ω_l)^φ] − a_{ij} (μ_i ω_j)^φ
               + [∑_{(k,l)∈E(i,j)} (μ'_k ω'_l)^φ] + (1 − a_{ij}) (μ'_i ω'_j)^φ.   (34)

Similarly, the new value of X(S,R) will be

    X'(S,R) = X(S,R) − [∑_{(k,l)∈E(i,j)} x(s_k, r_l)] − a_{ij} x(s_i, r_j)
                     + [∑_{(k,l)∈E(i,j)} x'(s_k, r_l)] + (1 − a_{ij}) x'(s_i, r_j).   (35)

x'(s_i, r_j) can be obtained by applying μ'_i and ω'_j (Eqs. 30 and 31) to x(s_i, r_j) (Eq. 27). The value of H'(S,R) is then obtained applying M'_φ (Eq. 34) and X'(S,R) (Eq. 35) to H(S,R) (Eq. 23).

As for H'(S), notice that x'(s_k) can only differ from x(s_k) if μ'_k and μ'_{φ,k} change, namely when k = i or k ∈ R(j). Therefore

    X'(S) = X(S) − [∑_{k∈R(j)} x(s_k)] − a_{ij} x(s_i) + [∑_{k∈R(j)} x'(s_k)] + (1 − a_{ij}) x'(s_i).   (36)

Similarly, x'(s_i) can be obtained by applying μ'_i (Eq. 30) and μ'_{φ,i} (Eq. 32) to x(s_i) (Eq. 28). Then H'(S) is obtained by applying M'_φ and X'(S) (Eqs. 34 and 36) to H(S) (Eq. 24). By symmetry,

    X'(R) = X(R) − [∑_{l∈S(i)} x(r_l)] − a_{ij} x(r_j) + [∑_{l∈S(i)} x'(r_l)] + (1 − a_{ij}) x'(r_j),   (37)

where x'(r_j) and H'(R) are obtained similarly, applying ω'_j (Eq. 31) and ω'_{φ,j} (Eq. 33) to x(r_j) (Eq. 29) and finally H(R) (Eq. 25).
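The update rule for M_φ is easy to test numerically. A minimal Python sketch, assuming the reconstruction of Eqs. 30-34 above, flips one cell of a random adjacency matrix and compares the incremental value with a recomputation from scratch:

    import random

    def M_phi(A, phi):
        # Direct evaluation of Eq. 7: sum of (mu_i * omega_j)^phi over the edges.
        n, m = len(A), len(A[0])
        mu = [sum(row) for row in A]
        om = [sum(A[i][j] for i in range(n)) for j in range(m)]
        return sum((mu[i] * om[j]) ** phi
                   for i in range(n) for j in range(m) if A[i][j])

    random.seed(1)
    phi, n, m = 0.7, 6, 5
    A = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]
    i, j = 2, 3                                         # the cell to mutate
    mu = [sum(row) for row in A]
    om = [sum(A[k][l] for k in range(n)) for l in range(m)]
    a = A[i][j]
    mu_i, om_j = mu[i] + (-1) ** a, om[j] + (-1) ** a   # Eqs. 30 and 31
    # E(i,j): edges touching s_i or r_j; the mutated cell itself is handled
    # separately by the a_ij terms of Eq. 34.
    E = ([(k, j) for k in range(n) if A[k][j] and k != i]
         + [(i, l) for l in range(m) if A[i][l] and l != j])
    before = sum((mu[k] * om[l]) ** phi for k, l in E) + a * (mu[i] * om[j]) ** phi
    after = (sum(((mu_i if k == i else mu[k]) * (om_j if l == j else om[l])) ** phi
                 for k, l in E) + (1 - a) * (mu_i * om_j) ** phi)
    M_new = M_phi(A, phi) - before + after              # Eq. 34
    A[i][j] = 1 - a                                     # apply the mutation
    print(abs(M_new - M_phi(A, phi)) < 1e-9)            # True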
A.3 Derivation of Δ

Following from the previous sections, we set off to obtain expressions for Δ for each of the skeleton classes we set out to study. As before, we denote the value of a variable after applying either strategy with a prime mark, meaning that it is a modified value after a mutation in the adjacency matrix. We also use a subindex a or b to indicate the vocabulary learning strategy corresponding to the mutation. A value without prime mark then denotes the state of that variable before applying either strategy.

Firstly, we aim to obtain an expression for Δ that depends on the new values of the entropies after either strategy a or b has been chosen. Combining Δ(λ) (Eq. 10) with Ω(λ) (Eq. 9), one obtains

    Δ(λ) = (1 − 2λ)(H'_a(S) − H'_b(S)) − λ(H'_a(R) − H'_b(R)) + λ(H'_a(S,R) − H'_b(S,R)).

The application of H(S,R) (Eq. 23), H(S) (Eq. 24) and H(R) (Eq. 25) yields

    Δ(λ) = (1 − 2λ) log(M'_a/M'_b) − (1/(M'_a M'_b)) [(1 − 2λ) Δ_X(S) − λ Δ_X(R) + λφ Δ_X(S,R)]   (38)

with

    Δ_X(S) = M'_b X'_a(S) − M'_a X'_b(S)
    Δ_X(R) = M'_b X'_a(R) − M'_a X'_b(R)
    Δ_X(S,R) = M'_b X'_a(S,R) − M'_a X'_b(S,R).

Now we find expressions for M'_a, X'_a(S,R), X'_a(S), X'_a(R), M'_b, X'_b(S,R), X'_b(S), X'_b(R). To obtain generic expressions for M'_φ, X'(S,R), X'(S) and X'(R) via Eqs. 34, 35, 36 and 37, we define mathematically the state of the bipartite matrix before and after applying either strategy a or b with the following restrictions:

- a_{ij,a} = a_{ij,b} = 0. Form i and counterpart j are initially unconnected.
- μ_{i,a} = μ_{i,b} = 0. Form i has initially no connections.
- μ'_{i,a} = μ'_{i,b} = 1. Form i will have one connection afterwards.
- ω_{j,a} = 0. In case a, counterpart j is initially disconnected.
- ω_{j,b} = ω_j > 0. In case b, counterpart j has initially at least one connection.
- ω'_{j,a} = 1. In case a, counterpart j will have one connection afterwards.
- ω'_{j,b} = ω_j + 1. In case b, counterpart j will have one more connection afterwards.
- S_a(i) = S_b(i) = ∅. Form i has initially no neighbors.
- R_a(j) = ∅. In case a, counterpart j has initially no neighbors.
- R_b(j) ≠ ∅. In case b, counterpart j has initially some neighbors.
- E_a(i,j) = ∅. In case a, there are no links with i or j at one of their ends.
- E_b(i,j) = {(k,j) | k ∈ R(j)}. In case b, there are no links with i at one of their ends, only with j.
We can apply these restrictions to x(s_i, r_j), x(s_i) and x(r_j) (Eqs. 27, 28 and 29) to obtain expressions of x'_a(s_i), x'_b(s_i), x'_b(r_j) and x'_b(s_i, r_j) that depend only on the initial values of ω_j and ω_{φ,j}:

    x'(s_i) = (ω'_j)^φ log (ω'_j)^φ
    x'_a(s_i) = 0                                                          (39)
    x'_a(r_j) = 0                                                          (40)
    x'_b(s_i) = φ(ω_j + 1)^φ log(ω_j + 1)                                  (41)
    x'_b(r_j) = (ω_j + 1)^φ (ω_{φ,j} + 1) log[(ω_j + 1)^φ (ω_{φ,j} + 1)]   (42)
    x'_a(s_i, r_j) = 0                                                     (43)
    x'_b(s_i, r_j) = (ω_j + 1)^φ log(ω_j + 1).                             (44)

Additionally, for any form s_k such that k ∈ R_b(j) (that is, for every form that counterpart j is connected to), we can also obtain expressions that depend only on the initial values of ω_j, ω_{φ,j}, μ_k and μ_{φ,k} using the same restrictions and equations:

    x_b(s_k, r_j) = ω_j^φ (μ_k^φ log μ_k) + (ω_j^φ log ω_j) μ_k^φ                          (45)
    x'_b(s_k, r_j) = (ω_j + 1)^φ (μ_k^φ log μ_k) + [(ω_j + 1)^φ log(ω_j + 1)] μ_k^φ        (46)
    x'_b(s_k) = {μ_k^φ μ_{φ,k} + μ_k^φ [−ω_j^φ + (ω_j + 1)^φ]}
                × log[ (μ_k^φ μ_{φ,k}) (μ_{φ,k} − ω_j^φ + (ω_j + 1)^φ) / μ_{φ,k} ]
              = x_b(s_k) + [(ω_j + 1)^φ − ω_j^φ] μ_k^φ log{μ_k^φ [μ_{φ,k} − ω_j^φ + (ω_j + 1)^φ]}
                + μ_k^φ μ_{φ,k} log( (μ_{φ,k} − ω_j^φ + (ω_j + 1)^φ) / μ_{φ,k} ).          (47)

Applying the restrictions to M'_φ (Eq. 34), we can also obtain an expression that depends only on some initial values:

    M'_a = M_φ + 1                                                         (48)
    M'_b = M_φ + [(ω_j + 1)^φ − ω_j^φ] ω_{φ,j} + (ω_j + 1)^φ.              (49)

Applying now the expressions for x'_a(s_i, r_j) (Eq. 43), x'_b(s_i, r_j) (Eq. 44), x_b(s_k, r_j) (Eq. 45) and x'_b(s_k, r_j) (Eq. 46) to X'(S,R) (Eq. 35), along with the restrictions, we obtain

    X'_a(S,R) = X(S,R)                                                     (50)
    X'_b(S,R) = X(S,R) + [(ω_j + 1)^φ − ω_j^φ] ∑_{k∈R(j)} μ_k^φ log μ_k
                + ω_{φ,j} [(ω_j + 1)^φ log(ω_j + 1) − ω_j^φ log ω_j]
                + (ω_j + 1)^φ log(ω_j + 1).                                (51)

Similarly, we apply x'_a(s_i) (Eq. 39), x'_b(s_i) (Eq. 41) and x'_b(s_k) (Eq. 47) to X'(S) (Eq. 36) as well as the restrictions and obtain

    X'_a(S) = X(S)                                                         (52)
    X'_b(S) = X(S) + φ(ω_j + 1)^φ log(ω_j + 1)
              + [(ω_j + 1)^φ − ω_j^φ] ∑_{k∈R(j)} μ_k^φ log{μ_k^φ [μ_{φ,k} − ω_j^φ + (ω_j + 1)^φ]}
              + ∑_{k∈R(j)} μ_k^φ μ_{φ,k} log( (μ_{φ,k} − ω_j^φ + (ω_j + 1)^φ) / μ_{φ,k} ).   (53)

We apply x'_a(r_j) (Eq. 40) and x'_b(r_j) (Eq. 42) to X'(R) (Eq. 37) along with the restrictions and obtain

    X'_a(R) = X(R)                                                         (54)
    X'_b(R) = X(R) − ω_j^φ ω_{φ,j} log(ω_j^φ ω_{φ,j})
              + (ω_j + 1)^φ (ω_{φ,j} + 1) log[(ω_j + 1)^φ (ω_{φ,j} + 1)].  (55)

At this point we could attempt to build an expression for Δ for the most general case. However, this expression would be extremely complex. Instead, we study the expression of Δ under three simplifying conditions: the case φ = 0 and the two classes of skeleta.
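For numerical exploration, Eq. 38 can nevertheless be evaluated directly once the strategy-specific primed quantities are known. A minimal Python sketch, under our reconstruction of Eq. 38 and its sign conventions:

    import math

    def delta(lam, phi, Ma, Mb, Xa, Xb):
        # Eq. 38. Xa and Xb are triples (X'(S), X'(R), X'(S,R)) for
        # strategies a and b; Ma and Mb are M'_a and M'_b.
        dXS = Mb * Xa[0] - Ma * Xb[0]
        dXR = Mb * Xa[1] - Ma * Xb[1]
        dXSR = Mb * Xa[2] - Ma * Xb[2]
        return ((1 - 2 * lam) * math.log(Ma / Mb)
                - ((1 - 2 * lam) * dXS - lam * dXR + lam * phi * dXSR) / (Ma * Mb))

Feeding it the values in Eqs. 48-55 should reproduce the Δ(λ) surfaces behind the heatmaps of Section 3.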
A.3.1 The case φ = 0

The condition φ = 0 corresponds to a model that is a precursor of the current model (Ferrer-i-Cancho, 2017a), and that we use to ensure that our general expressions are correct. We apply φ = 0 to the expressions in Section A.3. M'_a and M'_b (Eqs. 48 and 49) both simplify as

    M'_a = M'_b = M + 1.                                           (56)

X'_a(S,R) and X'_b(S,R) (Eqs. 50 and 51) simplify as

    X'_a(S,R) = X(S,R)                                             (57)
    X'_b(S,R) = X(S,R) + (ω_j + 1) log(ω_j + 1) − ω_j log(ω_j).    (58)

X'_a(S) and X'_b(S) (Eqs. 52 and 53) both simplify as

    X'_a(S) = X'_b(S) = X(S).                                      (59)

X'_a(R) and X'_b(R) (Eqs. 54 and 55) simplify as

    X'_a(R) = X(R)                                                 (60)
    X'_b(R) = X(R) − ω_j log(ω_j) + (ω_j + 1) log(ω_j + 1).        (61)

The application of Eqs. 56, 57, 58, 59, 60 and 61 into the expression of Δ (Eq. 38) results in the expression for Δ (Eq. 5) presented in Section 1.
A.3.2 Counterpart degrees do not exceed one

In this case we assume that ω_j ∈ {0,1} for every r_j and further simplify the expressions from A.3 under this assumption. This is the most relaxed of the conditions and so these expressions remain fairly complex.

M'_a and M'_b (Eqs. 48 and 49) simplify as

    M'_a = M_φ + 1                                                 (62)
    M'_b = M_φ + (2^φ − 1) μ_k^φ + 2^φ                             (63)

with

    M_φ = ∑_{i=1}^{n} μ_i^{φ+1}.

Here s_k denotes the unique form that counterpart r_j is connected to under strategy b (so that R(j) = {k}); note that in this class μ_{φ,k} = μ_k, since all the counterparts linked to s_k have degree one. X'_a(S,R) and X'_b(S,R) (Eqs. 50 and 51) simplify as

    X'_a(S,R) = X(S,R)                                             (64)
    X'_b(S,R) = X(S,R) + (2^φ − 1) μ_k^φ log μ_k + (μ_k^φ + 1) 2^φ log 2   (65)

with

    X(S,R) = ∑_{i=1}^{n} μ_i^{φ+1} log μ_i.                        (66)

X'_a(S) and X'_b(S) (Eqs. 52 and 53) simplify as

    X'_a(S) = X(S)                                                 (67)
    X'_b(S) = X(S) + (2^φ − 1) μ_k^φ log[μ_k^φ (μ_k − 1 + 2^φ)]
              + μ_k^{φ+1} log( (μ_k − 1 + 2^φ) / μ_k ) + φ 2^φ log(2)      (68)

with

    X(S) = ∑_{i=1}^{n} μ_i^φ μ_{φ,i} log(μ_i^φ μ_{φ,i})
         = ∑_{i=1}^{n} μ_i^φ μ_i log(μ_i^φ μ_i)
         = (φ + 1) ∑_{i=1}^{n} μ_i^{φ+1} log μ_i
         = (φ + 1) X(S,R).

X'_a(R) and X'_b(R) (Eqs. 54 and 55) simplify as

    X'_a(R) = X(R)                                                 (69)
    X'_b(R) = X(R) − μ_k^φ log(μ_k^φ) + 2^φ (μ_k^φ + 1) log[2^φ (μ_k^φ + 1)]   (70)

with

    X(R) = φ X(S,R).                                               (71)

The previous result on X(R) deserves a brief explanation, as it is not straightforward. Firstly, we apply the definition of x(r_j) (Eq. 29) to that of X(R) (Eq. 26):

    X(R) = ∑_{j=1}^{m} ω_j^φ ω_{φ,j} log(ω_j^φ ω_{φ,j}).

As counterpart degrees are one, ω_j = 1 and ω_{φ,j} = μ_{i_j}^φ, where i_j is used to indicate the form i that the counterpart j is connected to (see Eq. 19). That leads to

    X(R) = φ ∑_{j=1}^{m} μ_{i_j}^φ log(μ_{i_j}).

In order to change the summation over each j (every counterpart) into a summation over each i (every form), we must take into account that, when summing over j, we accounted for each form i a total of μ_i times. Therefore we need to multiply by μ_i in order for the summations to be equivalent, as otherwise we would be accounting for each form i only once. This leads to

    X(R) = φ ∑_{i=1}^{n} μ_i^{φ+1} log μ_i

and eventually Eq. 71 thanks to Eq. 66.

The application of Eqs. 62, 63, 64, 65, 67, 68, 69 and 70 into the expression of Δ (Eq. 38) results in the expression for Δ (Eq. 12) presented in Section 2.1. If we apply the two extreme values of λ, i.e. λ = 0 and λ = 1, to that equation, we obtain the following expressions:

    Δ(0) = log( (M_φ + 1) / (M_φ + (2^φ − 1) μ_k^φ + 2^φ) )
           + (1 / (M_φ + (2^φ − 1) μ_k^φ + 2^φ)) { φ 2^φ μ_k^φ log(μ_k)
             − [ (φ + 1) X(S,R) (2^φ − 1)(μ_k^φ + 1) / (M_φ + 1) − φ 2^φ log(2) ]
             + μ_k^φ [ (μ_k − 1 + 2^φ) log(μ_k − 1 + 2^φ) − (μ_k + φ) log(μ_k) ] }

    Δ(1) = −log( (M_φ + 1) / (M_φ + (2^φ − 1) μ_k^φ + 2^φ) )
           − (1 / (M_φ + (2^φ − 1) μ_k^φ + 2^φ)) { (μ_k^φ + 1) 2^φ log(μ_k^φ + 1)
             − [ (φ + 1) X(S,R) (2^φ − 1)(μ_k^φ + 1) / (M_φ + 1) − φ 2^φ log(2) ]
             + μ_k^φ [ (μ_k − 1 + 2^φ) log(μ_k − 1 + 2^φ) − (μ_k + φ) log(μ_k) ] }.
A.3.3 Vertex degrees do not exceed one

As seen in Section 2.1, for this class we are working under the two conditions that ω_j ∈ {0,1} for every r_j and μ_i ∈ {0,1} for every s_i. We can simplify the expressions from A.3. M'_a and M'_b (Eqs. 62 and 63) simplify as

    M'_a = M + 1                                                   (72)
    M'_b = M + 2^{φ+1} − 1,                                        (73)

where M_φ = M_0 = M, the number of edges in the bipartite graph. X'_a(S,R) and X'_b(S,R) (Eqs. 64 and 65) simplify as

    X'_a(S,R) = 0                                                  (74)
    X'_b(S,R) = 2^{φ+1} log 2.                                     (75)

X'_a(S) and X'_b(S) (Eqs. 67 and 68) simplify as

    X'_a(S) = 0                                                    (76)
    X'_b(S) = φ 2^{φ+1} log 2.                                     (77)

X'_a(R) and X'_b(R) (Eqs. 69 and 70) simplify as

    X'_a(R) = 0                                                    (78)
    X'_b(R) = (φ + 1) 2^{φ+1} log 2.                               (79)

Combining Eqs. 72, 73, 74, 75, 76, 77, 78 and 79 into the equation for Δ (Eq. 38) results in the expression for Δ (Eq. 11) presented in Section 2.1. When the extreme values, i.e. λ = 0 and λ = 1, are applied to this equation, we obtain the following expressions:

    Δ(0) = −log( 1 + 2(2^φ − 1)/(M + 1) ) + φ 2^{φ+1} log(2) / (M + 2^{φ+1} − 1)

    Δ(1) = log( 1 + 2(2^φ − 1)/(M + 1) ) − 2^{φ+1} (φ + 1) log(2) / (M + 2^{φ+1} − 1).
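These simplifications can be verified on a concrete skeleton. The Python sketch below (ours, for illustration) builds a one-to-one skeleton with M links, applies strategy b, and checks the value of M'_b in Eq. 73 against a direct recomputation of M_φ from its definition (Eq. 7):

    phi, M = 1.3, 5

    def M_phi(edges, mu, om):
        # Direct evaluation of Eq. 7 on an explicit edge list.
        return sum((mu[i] * om[j]) ** phi for i, j in edges)

    # One-to-one skeleton: form i linked to counterpart i, for i = 0..M-1.
    edges = [(i, i) for i in range(M)]
    mu = [1] * M + [0]            # form M is the new, initially unlinked form
    om = [1] * M

    # Strategy b: link the new form to counterpart 0, which is already linked.
    edges.append((M, 0))
    mu[M] += 1
    om[0] += 1

    direct = M_phi(edges, mu, om)
    closed = M + 2 ** (phi + 1) - 1        # Eq. 73
    print(abs(direct - closed) < 1e-9)     # True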
B Form degrees and number of links

Here we develop the implications of Eq. 15 with μ_{n−1} = 1 and μ_n = 0. Imposing μ_{n−1} = 1, we get

    c = (n − 1)^α.

Inserting the previous results into the definition of p(s_i) when ω_j ≤ 1, we have that

    p(s_i) = (1/M_φ) μ_i^{φ+1} = c_0 i^{−β},

with

    β = α(φ + 1)
    c_0 = (n − 1)^β / M_φ.

A continuous approximation to vertex degrees and the number of edges gives

    M = ∑_{i=1}^{n} μ_i = c ∑_{i=1}^{n−1} i^{−α} = (n − 1)^α ∑_{i=1}^{n−1} i^{−α}.

Thanks to well-known integral bounds (Cormen et al., 1990, pp. 50-51), we have that

    ∫_1^n i^{−α} di ≤ ∑_{i=1}^{n−1} i^{−α} ≤ 1 + ∫_1^{n−1} i^{−α} di

as α ≥ 0 by definition. When α = 1, one obtains

    log n ≤ ∑_{i=1}^{n−1} i^{−1} ≤ 1 + log(n − 1).

When α ≠ 1, one obtains

    (1/(1 − α)) (n^{1−α} − 1) ≤ ∑_{i=1}^{n−1} i^{−α} ≤ 1 + (1/(1 − α)) ((n − 1)^{1−α} − 1).

Combining the results above, one obtains

    (n − 1) log n ≤ M ≤ (n − 1)[1 + log(n − 1)]

for α = 1 and

    (n − 1)^α (1/(1 − α)) (n^{1−α} − 1) ≤ M ≤ (n − 1)^α [ 1 + (1/(1 − α)) ((n − 1)^{1−α} − 1) ]

for α ≠ 1.
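The bounds are easy to check numerically; the Python sketch below compares the truncated power sum with the integral bounds above for a few values of α and n (an illustration of the inequalities, not part of the original derivation):

    import math

    def bounds_hold(alpha, n):
        s = sum(i ** -alpha for i in range(1, n))   # sum_{i=1}^{n-1} i^(-alpha)
        if alpha == 1:
            lo, hi = math.log(n), 1 + math.log(n - 1)
        else:
            lo = (n ** (1 - alpha) - 1) / (1 - alpha)
            hi = 1 + ((n - 1) ** (1 - alpha) - 1) / (1 - alpha)
        return lo <= s <= hi

    print(all(bounds_hold(a, n)
              for a in (0.5, 1, 1.5) for n in (10, 100, 1000)))  # True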
C Complementary heatmaps for other values of φ

In Section 3, heatmaps were used to analyze the values that Δ takes for distinct sets of parameters. For the class of skeleta where counterpart degrees do not exceed one, only heatmaps corresponding to φ = 0 (Fig. 9) and φ = 1 (Figs. 10, 12 and 14) were presented. The summary figures presented in that same section (Figs. 11, 13 and 15) already displayed the boundaries between positive and negative values of Δ for the whole range of values of φ. Heatmaps for the remaining values of φ are presented next.

Heatmaps of Δ as a function of λ and μk. Figures 16, 17, 18 and 19 vary μk on the y-axis (while keeping λ on the x-axis, as with all others) and correspond to values of φ = 0.5, φ = 1.5, φ = 2 and φ = 2.5 respectively.

Heatmaps of Δ as a function of λ and α. Figures 20, 21, 22 and 23 vary α on the y-axis and correspond to values of φ = 0.5, φ = 1.5, φ = 2 and φ = 2.5 respectively.

Heatmaps of Δ as a function of λ and n. Figures 24, 25, 26 and 27 vary n on the y-axis and correspond to values of φ = 0.5, φ = 1.5, φ = 2 and φ = 2.5 respectively.
[Fig. 16, panels (a)-(i): heatmaps of Δ as a function of λ (x-axis) and μk (y-axis) for φ = 0.5, with α = 0.5, 1, 1.5 across columns and n = 10, 100, 1000 down rows; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 16 Same as in Fig. 9 but with φ = 0.5.
[Fig. 17, panels (a)-(i): heatmaps of Δ as a function of λ (x-axis) and μk (y-axis) for φ = 1.5, with α = 0.5, 1, 1.5 across columns and n = 10, 100, 1000 down rows; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 17 Same as in Fig. 9 but with φ = 1.5.
[Fig. 18, panels (a)-(i): heatmaps of Δ as a function of λ (x-axis) and μk (y-axis) for φ = 2, with α = 0.5, 1, 1.5 across columns and n = 10, 100, 1000 down rows; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 18 Same as in Fig. 9 but with φ = 2.
[Fig. 19, panels (a)-(i): heatmaps of Δ as a function of λ (x-axis) and μk (y-axis) for φ = 2.5, with α = 0.5, 1, 1.5 across columns and n = 10, 100, 1000 down rows; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 19 Same as in Fig. 9 but with φ = 2.5.
[Fig. 20, panels (a)-(l): heatmaps of Δ as a function of λ (x-axis) and α (y-axis, 0.50-1.50) for φ = 0.5, with μk = 1, 2, 4, 8 down rows and n = 10, 100, 1000 across columns; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 20 The same as in Fig. 12 but with φ = 0.5.
[Fig. 21, panels (a)-(l): heatmaps of Δ as a function of λ (x-axis) and α (y-axis, 0.50-1.50) for φ = 1.5, with μk = 1, 2, 4, 8 down rows and n = 10, 100, 1000 across columns; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 21 The same as in Fig. 12 but with φ = 1.5.
[Fig. 22, panels (a)-(l): heatmaps of Δ as a function of λ (x-axis) and α (y-axis, 0.50-1.50) for φ = 2, with μk = 1, 2, 4, 8 down rows and n = 10, 100, 1000 across columns; separate color scales for Δ < 0 and Δ ≥ 0.]
Fig. 22 The same as in Fig. 12 but with φ = 2.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 23 The same as in Fig. 12 but with φ = 2.5.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 24 The same as in Fig. 14 but with φ = 0.5.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 25 The same as in Fig. 14 but with φ = 1.5.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 26 The same as in Fig. 14 but with φ = 2.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 27 The same as in Fig. 14 but with φ = 2.5.
D Complementary figures with discrete degrees
To investigate the class of skeleta such that the degree of counterparts does not exceed one,
we have assumed that the relationship between the degree of a vertex and its rank follows a
power law (Eq. 15). For the plots of the regions where strategy a is advantageous, we have
assumed, for simplicity, that the degree of a form is a continuous variable. As form degrees are
actually discrete in the model, here we show the impact of rounding the form degrees defined by
Eq. 15 to the nearest integer in the previous figures.
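For concreteness, the following is a minimal sketch of the discretization step, assuming that Eq. 15 takes a rank-degree power-law form μᵢ ∝ i^(−α); the function name and the normalization constant mu_max are illustrative assumptions, not the paper's actual code:

import numpy as np

def form_degrees(n, alpha, mu_max):
    # Continuous rank-degree power law: the i-th most connected form
    # receives degree mu_max * i**(-alpha). The normalization mu_max
    # is an illustrative assumption standing in for Eq. 15.
    ranks = np.arange(1, n + 1)
    return mu_max * ranks.astype(float) ** (-alpha)

# Continuous degrees as used in the plots of the main text...
mu = form_degrees(n=1000, alpha=1.5, mu_max=8)
# ...versus the rounded (discrete) degrees used in this appendix.
mu_discrete = np.rint(mu)
# Rounding collapses the tail of small continuous degrees onto a few
# integers, which is the source of the one-dimensional bands and the
# distorted shapes discussed below.
print(np.unique(mu_discrete))

With these toy settings, all but the first few ranks are mapped to the integers 0 or 1, illustrating how few distinct integer degrees survive discretization.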
The correspondence between the figures in this appendix with rounded form degrees and
the figures in other sections is as follows. Figs. 28, 29, 30, 31, 32 and 33 are equivalent to
Figs. 9, 16, 10, 17, 18 and 19, respectively. These are the figures where λ is on the x-axis and
μk on the y-axis of the heatmap. Fig. 34, which summarizes the boundaries of the heatmaps,
corresponds to Fig. 11 after discretization. Figs. 35, 36, 37, 38 and 39 are equivalent to Figs.
20, 12, 21, 22 and 23, respectively. In these figures, α is placed on the y-axis instead. Fig.
40 summarizes the boundaries and is the discretized version of Fig. 13. Finally, Figs. 41, 42,
43, 44 and 45 are equivalent to Figs. 24, 14, 25, 26 and 27, respectively. This set places n on
the y-axis. The boundaries in these last discretized figures are summarized by Fig. 46, which
corresponds to Fig. 15.
We have presented two kinds of figures: heatmaps showing the value of Δ and figures
summarizing the boundaries between regions where Δ > 0 and Δ < 0. Interestingly, the
discretization does not change the presence of regions where Δ < 0 and Δ > 0 and, in general,
it does not change the shape of the regions in a qualitative sense, except in some cases where
remarkable distortions appear (e.g., Figs. 32 or 33 have one or very few integer values on
the y-axis for certain combinations of parameters, forming one-dimensional bands that do not
change over that axis; see also the distorted shapes in Figs. 38 and especially 45). In contrast,
the discretization has a drastic impact on the summary plots of the boundary curves, where the
curvy shapes of the continuous case are lost and substantially altered in many cases (Fig. 34,
where some curves become one or a few points, or Fig. 40, reflecting the loss of the curvy
shapes).
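To illustrate how such boundary summaries can be obtained numerically, here is a minimal sketch that locates the sign change of Δ along a grid; delta_fn is a hypothetical placeholder for whatever routine evaluates Δ in the model, not a function from the paper:

import numpy as np

def sign_boundary(delta_fn, lambdas, ys):
    # For each value of lambda, scan the y-grid (e.g. mu_k, alpha or n)
    # and return the first y at which the sign of Delta flips, i.e. an
    # approximate point on the Delta = 0 boundary (None if no flip).
    boundary = []
    for lam in lambdas:
        values = np.array([delta_fn(lam, y) for y in ys])
        flips = np.where(np.diff(np.sign(values)) != 0)[0]
        boundary.append(ys[flips[0]] if flips.size else None)
    return boundary

# Toy Delta used only to exercise the scaffolding.
toy_delta = lambda lam, y: lam - 0.1 * y
print(sign_boundary(toy_delta, np.linspace(0, 1, 5), np.linspace(0, 10, 50)))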
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 0, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 28 Figure equivalent to Fig. 9 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 0.5, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 29 Figure equivalent to Fig. 16 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 1, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 30 Figure equivalent to Fig. 10 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 1.5, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 31 Figure equivalent to Fig. 17 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 2, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 32 Figure equivalent to Fig. 18 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and μk (y-axis); panels (a)-(i) for φ = 2.5, α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 33 Figure equivalent to Fig. 19 after discretization of the form degrees.
[Summary panels omitted: boundary curves between the regions Δ ≥ 0 and Δ < 0 in the (λ, μk) plane, one curve per value of φ; panels for α ∈ {0.5, 1, 1.5} and n ∈ {10, 100, 1000}.]
Fig. 34 Figure equivalent to Fig. 11 after discretization of the form degrees. It summarizes Figs. 29, 30, 31, 32 and 33.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for φ = 0.5, μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 35 Figure equivalent to Fig. 20 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for φ = 1, μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 36 Figure equivalent to Fig. 12 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for φ = 1.5, μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 37 Figure equivalent to Fig. 21 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for φ = 2, μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 38 Figure equivalent to Fig. 22 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and α (y-axis); panels (a)-(l) for φ = 2.5, μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 39 Figure equivalent to Fig. 23 after discretization of the form degrees.
[Summary panels omitted: boundary curves between the regions Δ ≥ 0 and Δ < 0 in the (λ, α) plane, one curve per value of φ; panels for μk ∈ {1, 2, 4, 8} and n ∈ {10, 100, 1000}.]
Fig. 40 Figure equivalent to Fig. 13 after discretization of the form degrees. It summarizes Figs. 35, 36, 37, 38 and 39.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for φ = 0.5, μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 41 Figure equivalent to Fig. 24 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for φ = 1, μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 42 Figure equivalent to Fig. 14 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for φ = 1.5, μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 43 Figure equivalent to Fig. 25 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for φ = 2, μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 44 Figure equivalent to Fig. 26 after discretization of the form degrees.
[Heatmap panels omitted: Δ as a function of λ (x-axis) and n (y-axis); panels (a)-(l) for φ = 2.5, μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 45 Figure equivalent to Fig. 27 after discretization of the form degrees.
[Summary panels omitted: boundary curves between the regions Δ ≥ 0 and Δ < 0 in the (λ, n) plane, one curve per value of φ; panels for μk ∈ {1, 2, 4, 8} and α ∈ {0.5, 1, 1.5}.]
Fig. 46 Figure equivalent to Fig. 15 after discretization of the form degrees. It summarizes Figs. 41, 42, 43, 44 and 45.