|
Testing the limits of natural language models for |
|
predicting human language judgments |
|
Tal Golan1,2∗†, Matthew Siegelman3∗, |
|
Nikolaus Kriegeskorte1,3,4,5, Christopher Baldassano3 |
|
1Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA

2Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Be’er-Sheva, Israel

3Department of Psychology, Columbia University, New York, NY, USA

4Department of Neuroscience, Columbia University, New York, NY, USA

5Department of Electrical Engineering, Columbia University, New York, NY, USA
|
∗The first two authors contributed equally to this work. |
|
†To whom correspondence should be addressed; E-mail: [email protected] |
|
Neural network language models appear to be increasingly aligned with how hu- |
|
mans process and generate language, but identifying their weaknesses through ad- |
|
versarial examples is challenging due to the discrete nature of language and the |
|
complexity of human language perception. We bypass these limitations by turning |
|
the models against each other. We generate controversial sentence pairs for which |
|
two language models disagree about which sentence is more likely to occur. Con- |
|
sidering nine language models (including n-gram, recurrent neural networks, and |
|
transformers), we created hundreds of controversial sentence pairs through syn- |
|
thetic optimization or by selecting sentences from a corpus. Controversial sentence |
|
pairs proved highly effective at revealing model failures and identifying models that |
|
aligned most closely with human judgments of which sentence is more likely. The |
|
most human-consistent model tested was GPT-2, although experiments also revealed |
|
significant shortcomings of its alignment with human perception. |
|
Keywords— Language Models, Human Acceptability Judgments, Controversial Stimuli, Adversarial Attacks |
|
in NLP |
|
1 Introduction |
|
Neural network language models are not only key tools in natural language processing (NLP) but are also draw- |
|
ing increasing scientific interest as potential models of human language-processing. Ranging from recurrent
|
neural networks [1, 2] to transformers [3–7], each of these language models (explicitly or implicitly) defines |
|
a probability distribution over strings of words, predicting which sequences are likely to occur in natural lan- |
|
guage. There is substantial evidence from measures such as reading times [8], functional MRI [9], scalp EEG |
|
[10], and intracranial ECoG [11] that humans are sensitive to the relative probabilities of words and sentences as
|
captured by language models, even among sentences that are grammatically correct and semantically meaning- |
|
ful. Furthermore, model-derived sentence probabilities can also predict human graded acceptability judgments |
|
[12, 13]. These successes, however, have not yet addressed two central questions of interest: (1) Which of the |
|
models is best-aligned with human language processing? (2) How close is the best-aligned model to the goal |
|
of fully capturing human judgments? |
|
A predominant approach for evaluating language models is to use a set of standardized benchmarks such |
|
as those in the General Language Understanding Evaluation (GLUE) [14], or its successor, SuperGLUE [15]. |
|
Though instrumental in evaluating the utility of language models for downstream NLP tasks, these benchmarks |
|
prove insufficient for comparing such models as candidate explanations of human language-processing. Many |
|
components of these benchmarks do not aim to measure human alignment but rather the usefulness of the mod- |
|
els’ language representation when tuned to a specific downstream task. Some benchmarks challenge language |
|
models more directly by comparing the probabilities they assign to grammatical and ungrammatical sentences |
|
(e.g., BLiMP [16]). However, since such benchmarks are driven by theoretical linguistic considerations, they |
|
might fail to detect novel, unexpected ways in which language models may diverge from human language un- |
|
derstanding. Lastly, an additional practical concern is that the rapid pace of NLP research has led to quick |
|
saturation of these kinds of static benchmarks, making it difficult to distinguish between models [17]. |
|
One proposed solution to these issues is the use of dynamic human-in-the-loop benchmarks in which peo- |
|
ple actively stress-test models with an evolving set of tests. However, this approach faces the major obstacle |
|
that “finding interesting examples is rapidly becoming a less trivial task” [17]. We propose to complement |
|
human-curated benchmarks with model-driven evaluation. Guided by model predictions rather than experi- |
|
menter intuitions, we would like to identify particularly informative test sentences, where different models |
|
make divergent predictions. This approach of running experiments mathematically optimized to “put in jeop- |
|
ardy” particular models belongs to a long-standing scientific philosophy of design optimization [18]. We can |
|
find these critical sentences in large corpora of natural language or synthesize novel test sentences that reveal |
|
how different models generalize beyond their training distributions. |
|
We propose here a systematic, model-driven approach for comparing language models in terms of their |
|
consistency with human judgments. We generate controversial sentence pairs: pairs of sentences designed
|
such that two language models strongly disagree about which sentence is more likely to occur. In each of these |
|
sentence pairs, one model assigns a higher probability to the first sentence than the second sentence, while the |
|
other model prefers the second sentence to the first. We then collect human judgments of which sentence in |
|
each pair is more probable to settle this dispute between the two models. |
|
This approach builds on previous work on controversial images for models of visual classification [19]. |
|
That work relied on absolute judgments of a single stimulus, which are appropriate for classification responses. |
|
However, asking the participants to rate each sentence’s probability on an absolute scale is complicated by |
|
between-trial context effects common in magnitude estimation tasks [20–22], which have been shown to impact |
|
judgments like acceptability [23]. A binary forced-choice behavioral task presenting the participants with a |
|
choice between two sentences in each trial, the approach we used here, minimizes the role of between-trial |
|
context effects by setting an explicit local context within each trial. Such an approach has been previously |
|
used for measuring sentence acceptability [24] and provides substantially more statistical power compared to |
|
designs in which acceptability ratings are provided for single sentences [25]. |
|
Our experiments demonstrate that (1) it is possible to procedurally generate controversial sentence pairs for |
|
all common classes of language models, either by selecting pairs of sentences from a corpus or by iteratively |
|
modifying natural sentences to yield controversial predictions; (2) the resulting controversial sentence pairs |
|
enable efficient model comparison between models that otherwise are seemingly equivalent in their human |
|
consistency; and (3) all current NLP model classes incorrectly assign high probability to some non-natural
|
sentences (one can modify a natural sentence such that its model probability does not decrease but human |
|
observers reject the sentence as unnatural). This framework for model comparison and model testing can give |
|
us new insight into the classes of models that best align with human language perception and suggest directions |
|
for future model development. |
|
2 Results |
|
We acquired judgments from 100 native English speakers tested online. In each experimental trial, the partic- |
|
ipants were asked to judge which of two sentences they would be “more likely to encounter in the world, as |
|
either speech or written text”, and provided a rating of their confidence in their answer on a 3-point scale (see |
|
Extended Data Fig. 1 for a trial example). The experiment was designed to compare nine different language |
|
models (Supplementary Section 6.1): probability models based on corpus frequencies of 2-word and 3-word |
|
sequences (2-grams and 3-grams) and a range of neural network models comprising a recurrent neural network |
|
(RNN), a long short-term memory network (LSTM), and five transformer models (BERT, RoBERTa, XLM, |
|
ELECTRA, and GPT-2). |
|
2.1 Efficient model comparison using natural controversial pairs |
|
As a baseline, we randomly sampled and paired 8-word sentences from a corpus of Reddit comments. How- |
|
ever, as shown in Fig. 1a, these sentences fail to uncover meaningful differences between the models. For |
|
each sentence pair, all models tend to prefer the same sentence (Extended Data Fig. 2), and therefore perform |
|
similarly in predicting human preference ratings (see Supplementary Section 7.1). |
|
Instead, we can use an optimization procedure (Supplementary Section 6.2) to search for controversial |
|
sentence pairs, in which one language model assigns a high probability (above the median probability for |
|
natural sentences) only to sentence 1 and a second language model assigns a high probability only to sentence |
|
2; see examples in Table 1. Measuring each model’s accuracy in predicting human choices for sentence pairs |
|
in which it was one of the two targeted models indicated many significant differences in terms of model- |
|
human alignment (Fig. 1b), with GPT-2 and RoBERTa showing the best human consistency and 2-gram the |
|
worst. We can also compare each model pair separately (using only the stimuli targeting that model pair), |
|
yielding a similar pattern of pairwise dominance (Extended Data Fig. 3a). All models except GPT-2, RoBERTa, |
|
and ELECTRA performed significantly below our lower bound on the noise ceiling (the accuracy obtained |
|
by predicting each participant’s responses from the other participants’ responses), indicating a misalignment |
|
between these models’ predictions and human judgments which was only revealed when using controversial |
|
sentence pairs. |
|
[Figure 1: two panels, each showing a scatter plot of GPT-2 vs. RoBERTa sentence-probability percentiles and a bar plot of human-choice prediction accuracy for all nine models (GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, 2-gram); panel (a) randomly sampled natural-sentence pairs, panel (b) controversial natural-sentence pairs.]

Figure 1: Model comparison using natural sentences. (a) (Left) Percentile-transformed sentence probabili-
|
ties for GPT-2 and RoBERTa (defined relative to all sentences used in the experiment) for randomly-sampled |
|
pairs of natural sentences. Each pair of connected dots depicts one sentence pair. The two models are highly |
|
congruent in their rankings of sentences within a pair (lines have upward slope). (Right) Accuracy of model |
|
predictions of human choices, measured as the proportion of trials in which the same sentence was preferred |
|
by both the model and the human participant. Each dot depicts the prediction accuracy of one candidate model |
|
averaged across a group of 10 participants presented with a unique set of trials. The colored bars depict grand- |
|
averages across all 100 participants. The gray bar is the noise ceiling whose left and right edges are lower |
|
and upper bounds on the grand-average performance an ideal model would achieve (based on the consistency |
|
across human subjects). There were no significant differences in model performance on the randomly sam- |
|
pled natural sentences. (b) (Left) Controversial natural-sentence pairs were selected such that the models’
|
sentence probability ranks were incongruent (lines have downward slope). (Right) Controversial sentence pairs |
|
enable efficient model comparison, revealing that BERT, XLM, LSTM, RNN and the n-gram models perform |
|
significantly below the noise ceiling (asterisks indicate significance—two-sided Wilcoxon signed-rank test, |
|
controlling the false discovery rate for nine comparisons at q<.05). On the right of the plot, each closed circle |
|
indicates a model significantly dominating alternative models indicated by open circles (two-sided Wilcoxon |
|
signed-rank test, controlling the false discovery rate for all 36 model pairs at q<.05). GPT-2 outperforms all |
|
models except RoBERTa at predicting human judgments.
|
sentence    log probability (model 1)    log probability (model 2)    # human choices
|
n1: Rust is generally caused by salt and sand. logp(n1|GPT-2 ) =−50.72 log p(n1|ELECTRA ) =−38.54 10 |
|
n2: Where is Vernon Roche when you need him. logp(n2|GPT-2 ) =−32.26 logp(n2|ELECTRA ) =−58.26 0 |
|
n1: Excellent draw and an overall great smoking experience. logp(n1|RoBERTa ) =−67.78 log p(n1|GPT-2 ) =−36.76 10 |
|
n2: I should be higher and tied to inflation. logp(n2|RoBERTa ) =−54.61 logp(n2|GPT-2 ) =−50.31 0 |
|
n1: You may try and ask on their forum. logp(n1|ELECTRA ) =−51.44 log p(n1|LSTM ) =−44.24 10 |
|
n2: I love how they look like octopus tentacles. logp(n2|ELECTRA ) =−35.51 logp(n2|LSTM ) =−66.66 0 |
|
n1: Grow up and quit whining about minor inconveniences. logp(n1|BERT ) =−82.74 log p(n1|GPT-2 ) =−35.66 10 |
|
n2: The extra a is the correct Sanskrit pronunciation. logp(n2|BERT ) =−51.06 logp(n2|GPT-2 ) =−51.10 0 |
|
n1: I like my password manager for this reason. logp(n1|XLM ) =−68.93 log p(n1|RoBERTa ) =−49.61 10 |
|
n2: Kind of like clan of the cave bear. logp(n2|XLM ) =−44.24 logp(n2|RoBERTa ) =−67.00 0 |
|
n1: We have raised a Generation of Computer geeks. logp(n1|LSTM ) =−66.41 log p(n1|ELECTRA ) =−36.57 10 |
|
n2: I mean when the refs are being sketchy. logp(n2|LSTM ) =−42.04 logp(n2|ELECTRA ) =−52.28 0 |
|
n1: This is getting ridiculous and ruining the hobby. logp(n1|RNN) =−100.65 log p(n1|LSTM ) =−43.50 10 |
|
n2: I think the boys and invincible are better. logp(n2|RNN) =−45.16 logp(n2|LSTM ) =−59.00 0 |
|
n1: Then attach them with the supplied wood screws. logp(n1|3-gram ) =−119.09 log p(n1|GPT-2 ) =−34.84 10 |
|
n2: Sounds like you were used both a dog. logp(n2|3-gram ) =−92.07 logp(n2|GPT-2 ) =−52.84 0 |
|
n1: Cream cheese with ham and onions on crackers. logp(n1|2-gram ) =−131.99 log p(n1|RoBERTa ) =−54.62 10 |
|
n2: I may have to parallel process that drinking. logp(n2|2-gram ) =−109.46 logp(n2|RoBERTa ) =−70.69 0 |
|
Table 1: Examples of controversial natural-sentence pairs that maximally contributed to each model’s |
|
prediction error. For each model (double row, “model 1”), the table shows results for two sentences on which |
|
the model failed severely. In each case, the failing model 1 prefers sentence n2 (higher log probability bolded),
|
while the model it was pitted against (“model 2”) and all 10 human subjects presented with that sentence pair |
|
prefer sentence n1. (When more than one sentence pair induced an equal maximal error in a model, the example |
|
included in the table was chosen at random.) |
|
2.2 Greater model disentanglement with synthetic sentence pairs |
|
Selecting controversial natural-sentence pairs may provide greater power than randomly sampling natural- |
|
sentence pairs, but this search procedure considers a very limited part of the space of possible sentence pairs. |
|
Instead, we can iteratively replace words in a natural sentence to drive different models to make opposing |
|
predictions, forming synthetic controversial sentences that may lie outside any natural language corpus, as il-
|
lustrated in Fig. 2 (see Methods, “Generating synthetic controversial sentence pairs” for full details). Examples |
|
of controversial synthetic-sentence pairs that maximally contributed to the models’ prediction error appear in |
|
Table 2. |
|
We evaluated how well each model predicted the human sentence choices in all of the controversial synthetic- |
|
sentence pairs in which the model was one of the two models targeted (Fig. 3a). This evaluation of model- |
|
human alignment resulted in an even greater separation between the models’ prediction accuracies than was |
|
obtained when using controversial natural-sentence pairs, pushing the weaker models (RNN, 3-gram, and 2- |
|
gram) far below the 50% chance accuracy level. GPT-2, RoBERTa, and ELECTRA were found to be signifi- |
|
cantly more accurate than the alternative models (BERT, XLM, LSTM, RNN, 3-gram, and 2-gram) in predicting |
|
the human responses to these trials (with similar results when comparing each model pair separately; see Extended
|
Data Fig. 3b). All of the models except for GPT-2 were found to be significantly below the lower bound on the |
|
noise ceiling, demonstrating misalignment with human judgments. |
|
[Figure 2: two scatter plots of sentence-probability percentiles, (a) GPT-2 vs. ELECTRA and (b) RoBERTa vs. 3-gram, showing 500 randomly sampled natural sentences, the seed natural sentence, and the two synthetic sentences derived from it (annotated with their text).]

Figure 2: Synthesizing controversial sentence pairs. The small open dots denote 500 randomly sampled
|
natural sentences. The big open dot denotes the natural sentence used for initializing the controversial sentence |
|
optimization, and the closed dots are the resulting synthetic sentences. (a)In this example, we start with the |
|
randomly sampled natural sentence “Luke has a ton of experience with winning”. If we adjust this sentence to |
|
minimize its probability according to GPT-2 (while keeping the sentence at least as likely as the natural sentence |
|
according to ELECTRA), we obtain the synthetic sentence “Nothing has a world of excitement and joys”. By |
|
repeating this procedure while switching the roles of the models, we generate the synthetic sentence “Diddy |
|
has a wealth of experience with grappling”, which decreases ELECTRA’s probability while slightly increasing |
|
GPT-2’s. (b)In this example, we start with the randomly sampled natural sentence “I need to see how this |
|
played out”. If we adjust this sentence to minimize its probability according to RoBERTa (while keeping the |
|
sentence at least as likely as the natural sentence according to 3-gram), we obtain the synthetic sentence “You |
|
have to realize is that noise again”. If we instead decrease only 3-gram’s probability, we generate the synthetic |
|
sentence “I wait to see how it shakes out”. |
|
2.3 Pairs of natural and synthetic sentences uncover blindspots |
|
Last, we considered trials in which the participants were asked to choose between a natural sentence and one of |
|
the synthetic sentences which was generated from that natural sentence. If the language model is fully aligned |
|
with human judgments, we would expect humans to agree with the model, and select the synthetic sentence |
|
at least as much as the natural sentence. In reality, human participants showed a systematic preference for the |
|
natural sentences over their synthetic counterparts (Fig. 3b), even when the synthetic sentences were formed |
|
such that the stronger models (i.e., GPT-2, RoBERTa, or ELECTRA) favored them over the natural sentences;
|
see Extended Data Table 1 for examples. Evaluating natural sentence preference separately for each model- |
|
pairing (Extended Data Fig. 4), we find that these imperfections can be uncovered even when pairing a strong |
|
model with a relatively weak model (such that the strong model “accepts” the synthetic sentence and the weak |
|
model rejects it). |
|
[Figure 3: two panels, each showing a scatter plot of GPT-2 vs. RoBERTa sentence-probability percentiles and a bar plot of human-choice prediction accuracy for all nine models; panel (a) synthetic controversial sentence pairs, panel (b) synthetic vs. natural sentences, with natural and synthetic sentences marked separately in the legend.]

Figure 3: Model comparison using synthetic sentences. (a) (Left) Percentile-transformed sentence probabil-
|
ities for GPT-2 and RoBERTa for controversial synthetic-sentence pairs. Each pair of connected dots depicts
|
one sentence pair. (Right) Model prediction accuracy, measured as the proportion of trials in which the same |
|
sentence was preferred by both the model and the human participant. GPT-2, RoBERTa and ELECTRA sig- |
|
nificantly outperformed the other models (two-sided Wilcoxon signed-rank test, controlling the false discovery |
|
rate for all 36 model comparisons at q<.05). All of the models except for GPT-2 were found to perform below |
|
the noise ceiling (gray) of predicting each participant’s choices from the majority votes of the other participants |
|
(asterisks indicate significance—two-sided Wilcoxon signed-rank test, controlling the false discovery rate for |
|
nine comparisons at q<.05). (b) (Left) Each connected triplet of dots depicts a natural sentence and its derived
|
synthetic sentences, optimized to decrease the probability only under GPT-2 (left dots in a triplet) or only under |
|
RoBERTa (bottom dots in a triplet). (Right) Each model was evaluated across all of the synthetic-natural sen- |
|
tence pairs for which it was targeted to keep the synthetic sentence at least as probable as the natural sentence |
|
(see Extended Data Fig. 6 for the complementary data binning). This evaluation yielded a below-chance pre- |
|
diction accuracy for all of the models, which was also significantly below the lower bound on the noise ceiling. |
|
This indicates that, although the models assessed that these synthetic sentences were at least as probable as the |
|
original natural sentence, humans disagreed and showed a systematic preference for the natural sentence. See |
|
Fig. 1’s caption for details on the visualization conventions used in this figure. |
|
sentence    log probability (model 1)    log probability (model 2)    # human choices
|
s1: You can reach his stories on an instant. logp(s1|GPT-2 ) =−64.92 log p(s1|RoBERTa ) =−59.98 10 |
|
s2: Anybody can behead a rattles an an antelope. logp(s2|GPT-2 ) =−40.45 logp(s2|RoBERTa ) =−90.87 0 |
|
s1: However they will still compare you to others. logp(s1|RoBERTa ) =−53.40 log p(s1|GPT-2 ) =−31.59 10 |
|
s2: Why people who only give themselves to others. logp(s2|RoBERTa ) =−48.66 logp(s2|GPT-2 ) =−47.13 0 |
|
s1: He healed faster than any professional sports player. logp(s1|ELECTRA ) =−48.77 log p(s1|BERT ) =−50.21 10 |
|
s2: One gets less than a single soccer team. logp(s2|ELECTRA ) =−38.25 logp(s2|BERT ) =−59.09 0 |
|
s1: That is the narrative we have been sold. logp(s1|BERT ) =−56.14 log p(s1|GPT-2 ) =−26.31 10 |
|
s2: This is the week you have been dying. logp(s2|BERT ) =−50.66 logp(s2|GPT-2 ) =−39.50 0 |
|
s1: The resilience is made stronger by early adversity. logp(s1|XLM ) =−62.95 log p(s1|RoBERTa ) =−54.34 10 |
|
s2: Every thing is made alive by infinite Ness. logp(s2|XLM ) =−42.95 logp(s2|RoBERTa ) =−75.72 0 |
|
s1: President Trump threatens to storm the White House. logp(s1|LSTM ) =−58.78 log p(s1|RoBERTa ) =−41.67 10 |
|
s2: West Surrey refused to form the White House. logp(s2|LSTM ) =−40.35 logp(s2|RoBERTa ) =−67.32 0 |
|
s1: Las beans taste best with a mustard sauce. logp(s1|RNN) =−131.62 log p(s1|RoBERTa ) =−60.58 10 |
|
s2: Roughly lanes being alive in a statement ratings. logp(s2|RNN) =−49.31 logp(s2|RoBERTa ) =−99.90 0 |
|
s1: You are constantly seeing people play the multi. logp(s1|3-gram ) =−107.16 log p(s1|ELECTRA ) =−44.79 10 |
|
s2: This will probably the happiest contradicts the hypocrite. logp(s2|3-gram ) =−91.59 logp(s2|ELECTRA ) =−75.83 0 |
|
s1: A buyer can own a genuine product also. logp(s1|2-gram ) =−127.35 log p(s1|ELECTRA ) =−40.21 10 |
|
s2: One versed in circumference of highschool I rambled. logp(s2|2-gram ) =−113.73 logp(s2|ELECTRA ) =−92.61 0 |
|
Table 2: Examples of controversial synthetic-sentence pairs that maximally contributed to each model’s |
|
prediction error. For each model (double row, “model 1”), the table shows results for two sentences on which |
|
the model failed severely. In each case, the failing model 1 prefers sentence s2 (higher log probability bolded),
|
while the model it was pitted against (“model 2”) and all 10 human subjects presented with that sentence pair |
|
prefer sentence s1. (When more than one sentence pair induced an equal maximal error in a model, the example |
|
included in the table was chosen at random.) |
|
[Figure 4: bar plot of the ordinal correlation between human ratings and each model's sentence-pair probability log-ratio (signed-rank cosine similarity), for all nine models.]
|
Figure 4: Ordinal correlation of the models’ sentence probability log-ratios and human Likert ratings. |
|
For each sentence pair, the model prediction was quantified by the log-ratio log [p(s1 | m) / p(s2 | m)]. This log-ratio was correlated with the
|
Likert ratings of each particular participant, using signed-rank cosine similarity (see Methods). This analysis, |
|
taking all trials and human confidence level into account, indicates that GPT-2 performed best in predicting |
|
human sentence probability judgments. However, its predictions are still significantly misaligned with the |
|
human choices. See Fig. 1’s caption for details on the visualization convention. |
|
2.4 Evaluating the entire dataset reveals a hierarchy of models |
|
Rather than evaluating each model’s prediction accuracy with respect to the particular sentence pairs that were |
|
formed to compare this model to alternative models, we can maximize our statistical power by computing the |
|
average prediction accuracy for each model with respect to all of the experimental trials we collected. Fur- |
|
thermore, rather than binarizing the human and model judgments, here we measure the ordinal correspondence |
|
between the graded human choices (taking confidence into account) and the log ratio of the sentence proba- |
|
bilities assigned by each candidate model. Using this more sensitive benchmark (Fig. 4), we found GPT-2 to |
|
be the most human-aligned, followed by RoBERTa; then ELECTRA; BERT; XLM and LSTM; and the RNN, |
|
3-gram, and 2-gram models. However, all of the models (including GPT-2) were found to be significantly less |
|
accurate than the lower bound on the noise ceiling. |
|
One possible reason for the poorer performance of the bidirectional transformers (RoBERTa, ELECTRA, |
|
BERT, and XLM) compared to the unidirectional transformer (GPT-2) is that computing sentence probabilities |
|
in these models is complex, and the probability estimator we developed (see Methods, “Evaluating sentence |
|
probabilities in transformer models”) could be suboptimal; indeed, the popular pseudo-log-likelihood (PLL)
|
approach yields slightly higher accuracy for randomly sampled natural-sentence pairs (Extended Data Fig. 5a). |
|
And yet, when we directly compared our estimator to PLL by generating and administering new synthetic controversial sentences, our estimator was found to be markedly better aligned with human judgments
|
(Extended Data Fig. 5b and Extended Data Table 2). |
|
Finally, a control analysis employing probability measures normalized by token count revealed that such |
|
normalization had minimal influence on the observed differences among models (Supplementary Section 7.2 |
|
and Supplementary Fig. S1). |
|
3 Discussion
|
In this study, we probed the ability of language models to predict human relative sentence probability judgments |
|
using controversial sentence pairs, selected or synthesized so that two models disagreed about which sentence |
|
was more probable. We found that (1) GPT-2 (a unidirectional transformer model trained on predicting up- |
|
coming tokens) and RoBERTa (a bidirectional transformer trained on a held-out token prediction task) were the |
|
most predictive of human judgments on controversial natural-sentence pairs (Fig. 1b); (2) GPT-2, RoBERTa, |
|
and ELECTRA (a bidirectional transformer trained on detecting corrupted tokens) were the most predictive of |
|
human judgments on pairs of sentences synthesized to maximize controversiality (Fig. 3a); and (3) GPT-2 was |
|
the most human-consistent model when considering the entire behavioral dataset we collected (Fig. 4). And yet, |
|
all of the models, including GPT-2, exhibited behavior inconsistent with human judgments; using an alternative |
|
model as a counterforce, we could corrupt natural sentences such that their probability under a model did not |
|
decrease, but humans tended to reject the corrupted sentence as unlikely (Fig. 3b). |
|
3.1 Implications for computational psycholinguistic modeling |
|
Unlike convolutional neural networks, whose architectural design principles are roughly inspired by biological |
|
vision [26], the design of current neural network language models is largely uninformed by psycholinguistics |
|
and neuroscience. And yet, there is an ongoing effort to adopt and adapt neural network language models |
|
to serve as computational hypotheses of how humans process language, making use of a variety of different |
|
architectures, training corpora, and training tasks [11, 27–35]. We found that recurrent neural networks make |
|
markedly human-inconsistent predictions once pitted against transformer-based neural networks. This find- |
|
ing coincides with recent evidence that transformers also outperform recurrent networks for predicting neural |
|
responses as measured by ECoG or fMRI [11, 32], as well as with evidence from model-based prediction |
|
of human reading speed [33, 36] and N400 amplitude [36, 37]. Among the transformers, GPT-2, RoBERTa, |
|
and ELECTRA showed the best performance. These models are trained to optimize only word-level pre- |
|
diction tasks, as opposed to BERT and XLM which are additionally trained on next-sentence prediction and |
|
cross-lingual tasks, respectively (and have the same architecture as RoBERTa). This suggests that local word |
|
prediction provides better alignment with human language comprehension. |
|
Despite the agreement between our results and previous work in terms of model ranking, the significant |
|
failure of GPT-2 in predicting the human responses to natural versus synthetic controversial pairs (Fig. 3b) |
|
demonstrates that GPT-2 does not fully emulate the computations employed in human processing of even short |
|
sentences. This outcome is in some ways unsurprising, given that GPT-2 (like all of the other models we con- |
|
sidered) is an off-the-shelf machine learning model that was not designed with human psycholinguistic and |
|
physiological details in mind. And yet, the considerable human inconsistency we observed seems to stand in |
|
stark contrast with the recent report of GPT-2 explaining about 100 percent of the explainable variance in fMRI |
|
and ECoG responses to natural sentences [32]. Part of this discrepancy could be explained by the fact that |
|
Schrimpf and colleagues [32] mapped GPT-2 hidden-layer activations to brain data by means of regularized |
|
linear regression, which can identify a subspace within GPT-2’s language representation that is well-aligned |
|
with brain responses even if GPT-2’s overall sentence probabilities are not human-like. More importantly, |
|
when language models are evaluated with natural language, strong statistical models might capitalize on fea- |
|
tures in the data that are distinct from, but highly correlated with, features that are meaningful to humans. |
|
Therefore, a model that performs well on typical sentences might employ computational mechanisms that are |
|
very distinct from the brain’s, which will only be revealed by testing the model in a more challenging domain. |
|
Note that even the simplest model we considered—a 2-gram frequency table—actually performed quite well |
|
on predicting human judgments for randomly-sampled natural sentences, and its deficiencies only became ob-
|
vious when challenged by controversial sentence pairs. We predict that there will be substantial discrepancies |
|
between neural representations and current language models when using stimuli that intentionally stress-test |
|
this relationship, using our proposed sentence-level controversiality approach or complementary ideas such as |
|
maximizing controversial transition probabilities between consecutive words [38]. |
|
Using controversial sentences can be seen as a generalization test of language models: can models predict |
|
what kinds of changes to a natural sentence will lead to humans rejecting the sentence as improbable? Humans |
|
are sometimes capable of comprehending language with atypical constructions (e.g. in cases when pragmatic |
|
judgments can be made about a speaker’s intentions from environmental and linguistic context [39]), but none of |
|
the models we tested were fully able to predict which syntactic or semantic perturbations would be accepted or |
|
rejected by humans. One possibility is that stronger next-word prediction models, using different architectures, |
|
learning rules, or training data, might close the gap between models and humans. Alternatively, it might be |
|
that optimizing for other linguistic tasks, or even non-linguistic task demands (in particular, representing the |
|
external world, the self, and other agents) will turn out to be critical for achieving human-like natural language |
|
processing [40]. |
|
3.2 Controversial sentence pairs as adversarial attacks |
|
Machine vision models are highly susceptible to adversarial examples [41, 42]. Such adversarial examples are |
|
typically generated by choosing a correctly classified natural image and then searching for a minuscule (and |
|
therefore human-imperceptible) image perturbation that would change the targeted model’s classification. The |
|
prospect that similar covert model failure modes may exist also for language models has motivated proposed |
|
generalizations of adversarial methods to textual inputs [43]. However, imperceptible perturbations cannot be |
|
applied to written text: any modified word or character is humanly perceptible. Prior work on adversarial |
|
examples for language models has instead relied on heuristic constraints aiming to limit the change in the
|
meaning of the text, such as flipping a character [44, 45], changing number or gender [46], or replacing words |
|
with synonyms [47–49]. However, since these heuristics are only rough approximations of human language |
|
processing, many of these methods fail to preserve semantic meaning [50]. Interactive (“human-in-the-loop”) |
|
adversarial approaches allow human subjects to repeatedly alter model inputs such that they confuse target models
|
but not secondary participants [17, 51], but these approaches are inherently slow and costly and are limited by |
|
the mental models that the human subjects form about the evaluated language models.
|
By contrast, testing language models on controversial sentence pairs does not require approximating or |
|
querying a human ground truth during optimization—the objective of controversiality is independent of cor- |
|
rectness. Instead, by designing inputs to elicit conflicting predictions among the models and assessing human |
|
responses to these inputs only once the optimization loop has terminated, we capitalize on the simple fact that |
|
if two models disagree with respect to an input, at least one of the models must be making an incorrect predic- |
|
tion. Language models can also be pitted against one another through other approaches, such
|
as “red-teaming”, where an alternative language model is used as a generator of potential adversarial examples |
|
for a targeted model and a classifier is used to filter the generated examples such that the output they induce |
|
in the targeted model is indeed incorrect [52]. Our approach shares the underlying principle that an alternative |
|
language model can drive a more powerful test than handcrafted heuristics, but here the models have symmetric |
|
roles (there are no “attacking” and “attacked” models) and we can optimize stimuli directly without relying on |
|
filtering. |
|
3.3 Limitations and future directions
|
While our results demonstrate that using controversial stimuli can identify subtle differences in language mod- |
|
els’ alignment with human judgments, our study was limited in a number of ways. Our stimuli were all 8-word |
|
English sentences, limiting our ability to make cognitively meaningful claims that apply to language use glob- |
|
ally. Eight-word sentences are long enough to include common syntactic constructions and convey meaningful
|
ideas but may not effectively probe long-distance syntactic dependencies [53]. Future work may introduce |
|
additional sentence lengths and languages, as well as (potentially adaptive) controversial sentence optimization |
|
procedures that consider large sets of candidate models, allowing for greater model coverage than our sim- |
|
pler pairwise approach. Future work may also complement the model-comparative experimental design with |
|
procedures designed to identify potential failure modes common to all models. |
|
A more substantial limitation of the current study is that, like any comparison of pre-trained neural networks |
|
as potential models of human cognition, there could be multiple reasons (e.g., training data, architecture, training
|
tasks, learning rules) why particular models are better aligned with human judgments. For example, as we |
|
did not systematically control the training corpora used for training the models, it is possible that some of |
|
the observed differences are due to differences in the training sets rather than model architecture. Therefore, |
|
while our results expose failed model predictions, they do not readily answer why these failed predictions arise. |
|
Future experiments could compare custom-trained or systematically manipulated models, which reflect specific |
|
hypotheses about human language processing. In Extended Data Fig. 5, we demonstrate the power of using |
|
synthetic controversial stimuli to conduct sensitive comparisons between models with subtle differences in how |
|
sentence probabilities are calculated. |
|
It is important to note that our analyses considered human relative probability judgments as reflecting |
|
a scalar measure of acceptability. We made this assumption in order to bring the language models (which |
|
assign a probability measure to each sentence) and the human participants onto a common footing. However, |
|
it is possible that different types of sentence pairs engage different human cognitive processes. For pairs of |
|
synthetic sentences, both sentences may be unacceptable in different ways (e.g. exhibit different kinds of |
|
grammatical violations), requiring a judgment that weighs the relative importance of multiple dimensions [54] |
|
and could therefore produce inconsistent rankings across participants or across trials [55]. By contrast, asking |
|
participants to compare a natural and a synthetic sentence (Fig. 3b, Extended Data Table 1) may be more |
|
analogous to previous work measuring human acceptability judgments for sentence pairs [24]. Nonetheless, it |
|
is worth noting that for all of the controversial conditions, the noise ceiling was significantly above the models’
|
prediction accuracy, indicating non-random human preferences unexplained by current models that should be |
|
accounted for by future models, which may have to be more complex and capture multiple processes. |
|
Finally, the use of synthetic controversial sentences can be extended beyond probability judgments. A |
|
sufficiently strong language model may enable constraining the experimental design search-space to particular |
|
sentence distributions (e.g., movie reviews or medical questions). Given such a constrained space, we may |
|
be able to search for well-formed sentences that elicit contradictory predictions in alternative domain-specific |
|
models (e.g., sentiment classifiers or question-answering models). However, as indicated by our results, the |
|
task of capturing distributions of well-formed sentences is less trivial than it seems. |
|
4 Methods
|
4.1 Language models |
|
We tested nine models from three distinct classes: n-gram models, recurrent neural networks, and transformers. |
|
The n-gram models were trained with open source code from the Natural Language Toolkit [56], the recurrent |
|
neural networks were trained with architectures and optimization procedures available in PyTorch [57], and |
|
the transformers were implemented with the open-source repository HuggingFace [58]. For full details see |
|
Supplementary Section 6.1. |
|
4.2 Evaluating sentence probabilities in transformer models |
|
We then sought to compute the probability of arbitrary sentences under each of the models described above. |
|
The term “sentence” is used in this context in its broadest sense—a sequence of English words, not necessarily |
|
restricted to grammatical English sentences. Unlike some classification tasks in which valid model predictions |
|
may be expected only for grammatical sentences (e.g., sentiment analysis), the sentence probability comparison |
|
task is defined over the entire domain of eight-word sequences. |
|
For the set of unidirectional models, evaluating sentence probabilities was performed simply by summing |
|
the log probabilities of each successive token in the sentence from left to right, given all the previous tokens. |
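As a concrete illustration, the following sketch (not the authors' released code) computes this left-to-right sum for GPT-2 through the Hugging Face interface. The choice to prepend the end-of-text token as a start symbol, so that the first word is also scored, is an assumption and may differ from the exact handling used here.

```python
# A minimal sketch (not the authors' released code) of left-to-right sentence log probability
# under a unidirectional model, using the Hugging Face "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence: str) -> float:
    # Prepend the end-of-text token as a start symbol so that the first word is also scored
    # (an assumption; the paper's exact handling of the first token may differ).
    ids = torch.tensor([[tokenizer.bos_token_id] + tokenizer.encode(sentence)])
    with torch.no_grad():
        logits = model(ids).logits                              # [1, T, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)       # predictions for tokens 2..T
    targets = ids[:, 1:]                                        # the tokens actually observed
    token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()                         # sum of log p(token | prefix)

print(sentence_log_prob("Rust is generally caused by salt and sand."))
```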
|
For bidirectional models, this process was not as straightforward. One challenge is that transformer model |
|
probabilities do not necessarily reflect a coherent joint probability; the summed log sentence probability result- |
|
ing from adding words in one order (e.g. left to right) does not necessarily equal the probability resulting from a |
|
different order (e.g. right to left). Here we developed a novel formulation of bidirectional sentence probabilities |
|
in which we considered all permutations of serial word positions as possible construction orders (analogous to |
|
the random word visitation order used to sample serial reproduction chains, [59]). In practice, we observed that |
|
the distribution of log probabilities resulting from different permutations tends to center tightly around a mean |
|
value (for example, for RoBERTa evaluated with natural sentences, the average coefficient of variation was |
|
approximately 0.059). Therefore in order to efficiently calculate bidirectional sentence probability, we evaluate |
|
100 different random permutations and define the overall sentence log probability as the mean log probability |
|
calculated from each permutation. Specifically, we initialized an eight-word sentence with all tokens replaced |
|
with the “mask” token used in place of to-be-predicted words during model training. We selected a random |
|
permutation P of positions 1 through 8, and started by computing the probability of the word at the first of these positions, P1, given the other seven “mask” tokens. We then replaced the “mask” at position P1 with the actual word at this position and computed the probability of the word at P2 given the other six “mask” tokens and the word at P1. This process was repeated until all “mask” tokens had been filled by the corresponding word.
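The following schematic sketch illustrates this permutation-averaging procedure. The helper masked_word_log_prob, which would wrap one of the masked language models and return the log probability of a word at a given position given the partially masked context, is hypothetical, and whole-word masking is assumed for simplicity; the handling of word-piece tokens is described next.

```python
# Schematic sketch of the permutation-based bidirectional sentence-probability estimate.
# `masked_word_log_prob(context, pos, word)` is an assumed helper that returns
# log p(word at position pos | context with that position masked); whole-word masking
# is assumed here for simplicity.
import random

def bidirectional_sentence_log_prob(words, masked_word_log_prob,
                                    mask_token="[MASK]", n_permutations=100):
    estimates = []
    for _ in range(n_permutations):
        order = random.sample(range(len(words)), len(words))   # random fill-in order
        context = [mask_token] * len(words)                    # start with all positions masked
        log_p = 0.0
        for pos in order:
            log_p += masked_word_log_prob(context, pos, words[pos])
            context[pos] = words[pos]                          # unmask the word just scored
        estimates.append(log_p)
    return sum(estimates) / len(estimates)                     # mean over permutations
```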
|
A secondary challenge in evaluating sentence probabilities in bidirectional transformer models stems from |
|
the fact that these models use word-piece tokenizers (as opposed to whole words), and that these tokenizers are |
|
different for different models. For example, one tokenizer might include the word “beehive” as a single token, |
|
while others strive for a smaller library of unique tokens by evaluating “beehive” as the two tokens “bee” and |
|
“hive”. The model probability of a multi-token word—similar to the probability of a multi-word sentence— |
|
may depend on the order in which the chain rule is applied. Therefore, all unique permutations of token order |
|
for each multi-token word were also evaluated within their respective “masks”. For example, the probability of |
|
the word “beehive” would be evaluated as follows: |
|
log p(w = beehive) = 0.5 [log p(w1 = bee | w2 = MASK) + log p(w2 = hive | w1 = bee)]
                   + 0.5 [log p(w2 = hive | w1 = MASK) + log p(w1 = bee | w2 = hive)]    (1)
|
This procedure aimed to yield a more fair estimate of the conditional probabilities of word-piece tokens |
|
and therefore the overall probabilities of multi-token words by 1) ensuring that the word-piece tokens were |
|
evaluated within the same context of surrounding words and masks, and 2) eliminating the bias of evaluating |
|
the word-piece tokens in any one particular order in models which were trained to predict bidirectionally. |
|
One more procedure was applied in order to ensure that all models were computing a probability distribution |
|
over sentences with exactly 8 words. When evaluating the conditional probability of a masked word in models |
|
with word-piece tokenizers, we normalized the model probabilities to ensure that only single words were being |
|
considered, rather than splitting the masked tokens into multiple words. At each evaluation step, each token was |
|
restricted to come from one of four normalized distributions: i) single-mask words were restricted to be tokens |
|
with appended white space, ii) masks at the beginning of a word were restricted to be tokens with preceding |
|
white space (in models with preceding white space, e.g. BERT), iii) masks at the end of words were restricted |
|
to be tokens with trailing white space (in models with trailing white space, e.g. XLM), and iv) masks in the |
|
middle of words were restricted to tokens with no appended white space. |
|
4.3 Assessing potential token count effects on sentence probabilities |
|
Note that, because tokenization schemes varied across models, the number of tokens in a sentence could dif- |
|
fer for different models. These alternative tokenizations can be conceived of as different factorizations of the |
|
modeled language distribution, changing how a sentence’s log probability is additively partitioned across the |
|
conditional probability chain (but not affecting its overall probability) [60]. Had we attempted to normalize |
|
across models by dividing the log probability by the number of tokens, as is often done when aligning model |
|
predictions to human acceptability ratings [12, 13], our probabilities would have become strongly tokenization- |
|
dependent [60]. To empirically confirm that tokenization differences were not driving our results, we sta- |
|
tistically compared the token counts of each model’s preferred synthetic sentences with the token counts of |
|
their non-preferred counterparts. While we found significant differences for some of the models, there was |
|
no systematic association between token count and model sentence preferences (Supplementary Table S1). In |
|
particular, lower sentence probabilities were not systematically confounded by higher token counts. |
|
4.4 Defining a shared vocabulary |
|
To facilitate the sampling, selection, and synthesis of sentences that could be evaluated by all of the candidate |
|
models, we defined a shared vocabulary of 29,157 unique words. Defining this vocabulary was necessary in
|
order to unify the space of possible sentences between the transformer models (which can evaluate any input |
|
due to their word-piece tokenizers) and the neural network and n-gram models (which include whole words as |
|
tokens), and to ensure we only included words that were sufficiently prevalent in the training corpora for all |
|
models. The vocabulary consisted of the words in the SUBTLEX database [61], after removing words that occurred fewer than 300 times in the 300M-word corpus (see Supplementary Section 6.1) used to train the n-gram and
|
recurrent neural network models (i.e., with frequencies lower than one in a million). |
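As a schematic example (the word list and corpus tokens are placeholders rather than the actual data files), the vocabulary construction amounts to the following frequency filter:

```python
# Schematic sketch of the shared-vocabulary filter (inputs are placeholders, not the
# actual SUBTLEX word list or training corpus used in the paper).
from collections import Counter

def build_shared_vocabulary(subtlex_words, training_corpus_tokens, min_count=300):
    counts = Counter(w.lower() for w in training_corpus_tokens)   # counts in the ~300M-word corpus
    # Keep words occurring at least 300 times, i.e., at least once per million tokens.
    return {w for w in subtlex_words if counts[w.lower()] >= min_count}
```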
|
4.5 Sampling of natural sentences
|
Natural sentences were sampled from the same four text sources used to construct the training corpus for the |
|
n-gram and recurrent neural network models, while ensuring that there was no overlap between training and |
|
testing sentences. Sentences were filtered to include only those with eight distinct words and no punctuation |
|
aside from periods, exclamation points, or question marks at the end of a sentence. Then, all eight-word |
|
sentences were further filtered to contain only words from the shared vocabulary and to exclude any sentence containing entries from a predetermined list of inappropriate words and phrases. To identify controversial pairs of natural
|
sentences, we used integer linear programming to search for sentences that had above-median probability in |
|
one model and minimum probability rank in another model (see Supplementary Section 6.2). |
|
4.6 Generating synthetic controversial sentence pairs |
|
For each pair of models, we synthesized 100 sentence triplets. Each triplet was initialized with a natural |
|
sentence n (sampled from Reddit). The words in sentence n were iteratively modified to generate a synthetic
|
sentence with reduced probability according to the first model but not according to the second model. This |
|
process was repeated to generate another synthetic sentence from n, in which the roles of the two models |
|
were reversed. Conceptually, this approach resembles Maximum Differentiation (MAD) competition [62], |
|
introduced to compare models of image quality assessment. Each synthetic sentence was generated as a solution |
|
for a constrained minimization problem: |
|
s* = argmin_s log p(s | mreject),  subject to  log p(s | maccept) ≥ log p(n | maccept)    (2)
|
mreject denotes the model targeted to assign reduced sentence probability to the synthetic sentence compared |
|
to the natural sentence, and maccept denotes the model targeted to maintain a synthetic sentence probability |
|
greater than or equal to that of the natural sentence. For one synthetic sentence, one model served as maccept and
|
the other model served as mreject , and for the other synthetic sentence the model roles were flipped. |
|
At each optimization iteration, we selected one of the eight words pseudorandomly (so that all eight posi- |
|
tions would be sampled N times before any position would be sampled N + 1 times) and searched the shared vocabulary for the replacement word that would minimize log p(s | mreject) under the constraint. We ex-
|
cluded potential replacement words that already appeared in the sentence, except for a list of 42 determiners and |
|
prepositions such as “the”, “a”, or “with”, which were allowed to repeat. The sentence optimization procedure |
|
was concluded once eight replacement attempts in a row had failed (i.e., once eight consecutive positions had been sampled for which no loss-reducing replacement could be found).
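A schematic sketch of this search loop appears below, written for the models whose sentence probabilities are cheap enough to evaluate exhaustively; the heuristics used for GPT-2 and the bidirectional models are described in the next subsection. The helper log_p(model, words), the vocab word list, and allowed_repeats are assumed stand-ins, not the released implementation.

```python
# Schematic sketch of the constrained word-replacement search of Eq. (2).
# `log_p(model, words)` is an assumed helper returning a model's sentence log probability;
# `vocab` is the shared vocabulary and `allowed_repeats` the list of repeatable function words.
import random

def synthesize_sentence(natural, m_accept, m_reject, vocab, allowed_repeats, log_p,
                        max_failures=8):
    words = list(natural)
    constraint = log_p(m_accept, natural)        # stay at least as probable as n under m_accept
    current = log_p(m_reject, words)
    failures, positions = 0, []
    while failures < max_failures:
        if not positions:                        # sample every position N times before N + 1
            positions = random.sample(range(len(words)), len(words))
        i = positions.pop()
        best_word, best_score = None, current
        for w in vocab:
            if w in words and w not in allowed_repeats:
                continue                         # content words may not repeat
            candidate = words[:i] + [w] + words[i + 1:]
            if log_p(m_accept, candidate) < constraint:
                continue                         # violates the m_accept constraint
            score = log_p(m_reject, candidate)   # objective: minimize probability under m_reject
            if score < best_score:
                best_word, best_score = w, score
        if best_word is None:
            failures += 1                        # no loss-reducing replacement at this position
        else:
            words[i], current, failures = best_word, best_score, 0
    return words
```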
|
4.7 Word-level search for bidirectional models |
|
For models for which the evaluation of log p(s | m) is computationally cheap (2-gram, 3-gram, LSTM, and the RNN), we directly evaluated the log-probability of the 29,157 sentences resulting from each of the 29,157
|
possible word replacements. When such probability vectors were available for both models, we simply chose |
|
the replacement minimizing the loss. For GPT-2, whose evaluation is slower, we evaluated sentence probabil- |
|
ities only for word replacements for which the new word had a conditional log-probability (given the previous |
|
words in the sentence) of no less than −10; in rare cases when this threshold yielded fewer than 10 candidate |
|
words, we reduced the threshold in steps of 5 until there were at least 10 words above the threshold. For the bi-directional models (BERT, RoBERTa, XLM, and ELECTRA), for which the evaluation of log p(s | m) is
|
costly even for a single sentence, we used a heuristic to prioritize which replacements to evaluate. |
|
Since bi-directional models are trained as masked language models, they readily provide word-level com- |
|
pletion probabilities. These word-level log-probabilities typically have positive but imperfect correlation with |
|
the log-probabilities of the sentences resulting from each potential completion. We hence formed a simple |
|
linear regression-based estimate of log p(s{i} ← w | m), the log-probability of the sentence s with word w assigned at position i, predicting it from log p(s{i} = w | m, s{i} ← mask), the completion log-probability of word w at position i, given the sentence with the i-th word masked:

log p̂(s{i} ← w | m) = β1 log p(s{i} = w | m, s{i} ← mask) + β0    (3)
|
This regression model was estimated from scratch for each word-level search. When a word was first |
|
selected for being replaced, the log-probability of two sentences was evaluated: the sentence resulting from |
|
substituting the existing word with the word with the highest completion probability and the sentence resulting |
|
from substituting the existing word with the word with the lowest completion probability. These two word- |
|
sentence log-probability pairs, as well as the word-sentence log-probability pair pertaining to the current word, |
|
were used to fit the regression line. The regression prediction, together with the sentence probability for the |
|
other model (either the exact probability, or approximate probability if the other model was also bi-directional) |
|
was used to predict log p(s | mreject) for each of the 29,157 potential replacements. We then evaluated the true (non-approximate) sentence probability for the replacement word with the minimal predicted probability.
|
If this word indeed reduced the sentence probability, it was chosen to serve as the replacement and the word- |
|
level search was terminated (i.e., proceeding to search a replacement for another word in the sentence). If it |
|
did not reduce the probability, the regression model (Eq. 3) was updated with the new observation, and the |
|
next replacement expected to minimize the sentence probability was evaluated. This word-level search was |
|
terminated after five sentence evaluations that did not reduce the loss. |
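The following sketch illustrates this regression-guided prioritization (Eq. 3) in simplified form. The helpers completion_log_probs and sentence_log_prob are hypothetical, and the combination of the regression prediction with the other model's sentence probability to form the full objective is omitted here.

```python
# Schematic sketch of the regression-guided prioritization of replacement words (Eq. 3).
# `completion_log_probs(words, i)` (word -> masked-completion log probability) and
# `sentence_log_prob(words)` (full permutation-based estimate) are assumed helpers.
import numpy as np

def rank_candidate_replacements(words, i, completion_log_probs, sentence_log_prob):
    comp = completion_log_probs(words, i)                 # cheap word-level scores
    # Seed the regression with the current word and the highest- and lowest-probability completions.
    seeds = [words[i], max(comp, key=comp.get), min(comp, key=comp.get)]
    x = [comp[w] for w in seeds]
    y = [sentence_log_prob(words[:i] + [w] + words[i + 1:]) for w in seeds]
    beta1, beta0 = np.polyfit(x, y, deg=1)                # fit log p_hat = beta1 * completion + beta0
    predicted = {w: beta1 * c + beta0 for w, c in comp.items()}
    return sorted(predicted, key=predicted.get)           # lowest predicted sentence log-prob first
```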
|
4.8 Selecting the best triplets from the optimized sentences |
|
Since the discrete hill-climbing procedure described above is highly local, the degree to which this succeeded in |
|
producing highly-controversial pairs varied depending on the starting sentence n. We found that typically, nat- |
|
ural sentences with lower than average log-probability gave rise to synthetic sentences with greater controver- |
|
siality. To better represent the distribution of natural sentences while still choosing the best (most controversial) |
|
triplets for human testing, we used stratified selection. |
|
First, we quantified the controversiality of each triplet as |
|
c_{m1,m2}(n, s1, s2) = log [p(n | m1) / p(s1 | m1)] + log [p(n | m2) / p(s2 | m2)],    (4)

where s1 is the sentence generated to reduce the probability in model m1 and s2 is the sentence generated to reduce the probability in model m2.
|
We employed integer programming to choose the 10 most controversial triplets from the 100 triplets opti- |
|
mized for each model pair (maximizing the total controversiality across the selected triplets), while ensuring |
|
that for each model, there was exactly one natural sentence in each decile of the natural-sentence probability
|
distribution. The selected 10 synthetic triplets were then used to form 30 unique experimental trials per model |
|
pair, comparing the natural sentence with one synthetic sentence, comparing the natural sentence with the other |
|
synthetic sentence, and comparing the two synthetic sentences. |
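As a sketch of this selection step, the snippet below computes the controversiality score of Eq. (4) and performs a greedy per-decile selection; the exact procedure described above instead solves an integer program that enforces the decile constraint for both models simultaneously. The log_p helper is the same assumed sentence-scoring stand-in as in the previous sketches.

```python
# Schematic sketch of the triplet controversiality score (Eq. 4) and a greedy, per-decile
# stand-in for the exact integer-programming selection.
import numpy as np

def controversiality(n, s1, s2, m1, m2, log_p):
    # c = log[p(n|m1) / p(s1|m1)] + log[p(n|m2) / p(s2|m2)]
    return (log_p(m1, n) - log_p(m1, s1)) + (log_p(m2, n) - log_p(m2, s2))

def select_triplets_greedy(scores, natural_log_probs, n_bins=10):
    """Return the index of the most controversial triplet within each decile of the
    natural-sentence log probabilities (the paper instead enforces this constraint for
    both targeted models at once via integer programming)."""
    edges = np.quantile(natural_log_probs, np.linspace(0, 1, n_bins + 1))
    selected = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [i for i, lp in enumerate(natural_log_probs) if lo <= lp <= hi]
        if in_bin:
            selected.append(max(in_bin, key=lambda i: scores[i]))
    return selected
```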
|
4.9 Design of the human experiment
|
Our experimental procedures were approved by the Columbia University Institutional Review Board (protocol |
|
number IRB-AAAS0252) and were performed in accordance with the approved protocol. All participants |
|
provided informed consent prior to participation. We presented the controversial sentence pairs selected and synthesized by
|
the language models to 100 native English-speaking, US-based participants (55 male) recruited from Prolific |
|
(www.prolific.co), and paid each participant $5.95. The average participant age was 34.08 ± 12.32 years. The
|
subjects were divided into 10 groups, and each ten-subject group was presented with a unique set of stimuli. |
|
Each stimulus set contained exactly one sentence pair from every possible combination of model pairs and |
|
the four main experimental conditions: selected controversial sentence pairs; a natural vs. synthetic pair, where one model served as $m_{\text{accept}}$ and the other as $m_{\text{reject}}$; a natural vs. synthetic pair with the reverse model
|
role assignments; and directly pairing the two synthetic sentences. These model-pair-condition combinations |
|
accounted for 144 (36 ×4) trials of the task. In addition to these trials, each stimulus set also included nine trials |
|
consisting of sentence pairs randomly sampled from the database of eight-word sentences (and not already |
|
included in any of the other conditions). All subjects also viewed 12 control trials consisting of a randomly |
|
selected natural sentence and the same natural sentence with the words scrambled in a random order. The order |
|
of trials within each stimulus set as well as the left-right screen position of sentences in each sentence pair |
|
were randomized for all participants. While each sentence triplet produced by the optimization procedure (see |
|
subsection “Generating synthetic controversial sentence pairs”) gave rise to three trials, these were allocated |
|
such that no subject viewed the same sentence twice. |
|
On each trial of the task, participants were asked to make a binary decision about which of the two sentences |
|
they considered more probable (for the full set of instructions given to participants, see Supplementary Fig. S2). |
|
In addition, they were asked to indicate one of three levels of confidence in their decision: somewhat confident, |
|
confident, or very confident. The trials were not timed, but a 90-minute time limit was enforced for the whole |
|
experiment. A progress bar at the bottom of the screen indicated to participants how many trials they had |
|
completed and had remaining to complete. |
|
We rejected the data of 21 participants who failed to choose the original, unshuffled sentence in at least |
|
11 of the 12 control trials, and acquired data from 21 alternative participants instead, all of whom passed this |
|
data-quality threshold. In general, we observed high agreement in sentence preferences among our participants, |
|
though the level of agreement varied across conditions. There was complete or near-complete agreement (at |
|
least 9/10 participants with the same binary sentence preference) in 52.2% of trials for randomly-sampled |
|
natural-sentence pairs, 36.6% of trials for controversial natural-sentence pairs, 67.6% of trials for natural- |
|
synthetic pairs, and 60.0% of trials for synthetic-synthetic pairs (versus a chance rate of 1.1%, assuming a |
|
binomial distribution with p = 0.5).
|
4.10 Evaluation of model-human consistency |
|
To measure the alignment on each trial between model judgments and human judgments, we binarized both |
|
measures; we determined which of the two sentences was assigned a higher probability by the model,
|
regardless of the magnitude of the probability difference, and which of the two sentences was favored by the |
|
subject, regardless of the reported confidence level. When both the subject and the model chose the same |
|
sentence, the trial was considered correctly predicted by that model. This correctness measure was averaged
|
across sentence pairs and across the 10 participants who viewed the same set of trials. For the lower bound on |
|
the noise ceiling, we predicted each subject’s choices from a majority vote of the nine other subjects who were |
|
presented with the same trials. For the upper bound (i.e., the highest possible accuracy attainable on this data |
|
sample), we included the subject themselves in this majority-vote-based prediction.
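A minimal Python sketch of this scoring and of the noise-ceiling bounds (assuming binarized choices are already available as arrays; variable names are illustrative):

```python
import numpy as np

def model_accuracy(model_logp, human_choice):
    """model_logp: (n_trials, 2) sentence log-probabilities under one model;
    human_choice: (n_subjects, n_trials) binarized choices in {0, 1}.
    Returns accuracy averaged over trials, per subject."""
    model_choice = np.argmax(model_logp, axis=1)
    return (human_choice == model_choice).mean(axis=1)

def noise_ceiling(human_choice):
    """Lower bound: predict each subject from the majority vote of the other nine.
    Upper bound: include the subject themselves in the vote (ties resolved toward
    choice 1)."""
    lower, upper = [], []
    for s in range(human_choice.shape[0]):
        others = np.delete(human_choice, s, axis=0)
        vote_lo = (others.mean(axis=0) > 0.5).astype(int)
        vote_up = (human_choice.mean(axis=0) >= 0.5).astype(int)
        lower.append((human_choice[s] == vote_lo).mean())
        upper.append((human_choice[s] == vote_up).mean())
    return float(np.mean(lower)), float(np.mean(upper))
```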
|
Since each of the 10 participant groups viewed a unique trial set, these groups provided 10 independent |
|
replications of the experiment. Models were compared to each other and to the lower bound of the noise |
|
ceiling by a Wilcoxon signed-rank test using these 10 independent accuracy outcomes as paired samples. For |
|
each analysis, the false discovery rate across multiple comparisons was controlled by the Benjamini-Hochberg |
|
procedure [63]. |
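For example (a sketch using SciPy; the exact analysis scripts are in the released repository):

```python
import numpy as np
from scipy.stats import wilcoxon

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at false-discovery-rate level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = (np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# acc_a, acc_b: length-10 vectors of accuracies (one per participant group).
def paired_group_test(acc_a, acc_b):
    return wilcoxon(acc_a, acc_b).pvalue     # two-sided signed-rank test
```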
|
In Fig. 4, we instead measure model-human consistency in a more continuous way, comparing the sentence |
|
probability ratio in a model to the graded Likert ratings provided by humans; see Supplementary Section 6.3 |
|
for full details. |
|
4.11 Selecting trials for model evaluation |
|
All of the randomly sampled natural-sentence pairs (Fig. 1a) were evaluated for each of the candidate models. |
|
Controversial sentence pairs (either natural, Fig. 1b or synthetic, Fig. 3) were included in a model’s evaluation |
|
set only if they were formed to target that model specifically. The overall summary analysis (Fig. 4) evaluated |
|
all models on all available sentence pairs. |
|
4.12 Comparison to pseudo-log-likelihood acceptability measures |
|
Wang & Cho [64] proposed an alternative approach for computing sentence probabilities in bidirectional |
|
(BERT-like) models, using a pseudo-log-likelihood measure that simply sums the log-probability of each token |
|
conditioned on all of the other tokens in the sentence. While this measure does not reflect a true probability |
|
distribution [65], it is positively correlated with human acceptability judgments for several bidirectional models |
|
[13, 66]. To directly compare this existing approach to our novel method for computing probabilities, we again |
|
used the method of controversial sentence pairs to identify the approach most aligned with human judgments. |
|
For each bidirectional model (BERT, RoBERTa, and ELECTRA), we created two copies of the model, each us- |
|
ing a different approach for computing sentence probabilities. We synthesized 40 sentence pairs to maximally |
|
differentiate between the two copies of each model, with each copy assigning a higher probability to a different |
|
sentence in the pair. Subsequently, we tested 30 human participants, presenting each participant with all 120 |
|
sentence pairs. Model-human consistency was quantified as in the main experiment. |
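For reference, a minimal sketch of the pseudo-log-likelihood computation using the Hugging Face transformers library (the checkpoint name is illustrative; the paper uses the large model variants):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")       # illustrative checkpoint
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def pseudo_log_likelihood(sentence: str) -> float:
    """Wang & Cho [64]: sum over positions of log p(token_i | all other tokens)."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):                           # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total
```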
|
4.13 Data and code availability |
|
The experimental stimuli, detailed behavioral testing results, sentence optimization code, and code for repro- |
|
ducing all analyses and figures are available at github.com/dpmlab/contstimlang [67]. |
|
Acknowledgments |
|
This material is based upon work partially supported by the National Science Foundation under Grant No. |
|
1948004 to NK. This publication was made possible with the support of the Charles H. Revson Foundation to |
|
TG. The statements made and views expressed, however, are solely the responsibility of the authors. |
|
Author Contributions
|
T.G., M.S., N.K., and C.B. designed the study. M.S. implemented the computational models and T.G. imple- |
|
mented the sentence pair optimization procedures. M.S. conducted the behavioral experiments. T.G. and M.S. |
|
analyzed the experiments’ results. T.G., M.S., N.K., and C.B. wrote the paper. |
|
Competing Interests |
|
The authors declare no competing interests. |
|
References |
|
1. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. |
|
Nature 323, 533–536. doi: 10.1038/323533a0 (1986). |
|
2. Hochreiter, S. & Schmidhuber, J. Long Short-Term Memory. Neural Computation 9, 1735–1780. doi: 10. |
|
1162/neco.1997.9.8.1735 (1997). |
|
3. Devlin, J., Chang, M., Lee, K. & Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers |
|
for Language Understanding inProceedings of the 2019 Conference of the North American Chapter of |
|
the Association for Computational Linguistics: Human Language Technologies (Minneapolis, MN, USA, |
|
2019), 4171–4186. doi: 10.18653/v1/n19-1423 . |
|
4. Liu, Y ., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. & Stoyanov, |
|
V . RoBERTa: A Robustly Optimized BERT Pretraining Approach. Preprint at https://arxiv.org/ |
|
abs/1907.11692 (2019). |
|
5. Conneau, A. & Lample, G. Cross-lingual Language Model Pretraining inAdvances in Neural Information |
|
Processing Systems 32 (Vancouver, BC, Canada, 2019). URL:proceedings.neurips.cc/paper/ |
|
2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf . |
|
6. Clark, K., Luong, M., Le, Q. V . & Manning, C. D. ELECTRA: Pre-training Text Encoders as Discrimina- |
|
tors Rather Than Generators in8th International Conference on Learning Representations, ICLR 2020 |
|
(Online, 2020). URL:openreview.net/forum?id=r1xMH1BtvB . |
|
7. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are un- |
|
supervised multitask learners 2019. URL:cdn.openai.com/better- language- models/ |
|
language_models_are_unsupervised_multitask_learners.pdf . |
|
8. Goodkind, A. & Bicknell, K. Predictive power of word surprisal for reading times is a linear function of |
|
language model quality inProceedings of the 8th Workshop on Cognitive Modeling and Computational |
|
Linguistics (CMCL 2018) (Salt Lake City, Utah, 2018), 10–18. doi: 10.18653/v1/W18-0102 . |
|
9. Shain, C., Blank, I. A., van Schijndel, M., Schuler, W. & Fedorenko, E. fMRI reveals language-specific |
|
predictive coding during naturalistic sentence comprehension. Neuropsychologia 138, 107307. doi: 10. |
|
1016/j.neuropsychologia.2019.107307 (2020). |
|
10. Broderick, M. P., Anderson, A. J., Di Liberto, G. M., Crosse, M. J. & Lalor, E. C. Electrophysiologi- |
|
cal correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech. Current |
|
Biology 28, 803–809. doi: 10.1016/j.cub.2018.01.080 (2018). |
|
11. Goldstein, A., Zada, Z., Buchnik, E., Schain, M., Price, A., Aubrey, B., Nastase, S. A., Feder, A., Emanuel,
|
D., Cohen, A., Jansen, A., Gazula, H., Choe, G., Rao, A., Kim, C., Casto, C., Fanda, L., Doyle, W., |
|
Friedman, D., Dugan, P., Melloni, L., Reichart, R., Devore, S., Flinker, A., Hasenfratz, L., Levy, O., |
|
Hassidim, A., Brenner, M., Matias, Y ., Norman, K. A., Devinsky, O. & Hasson, U. Shared computational |
|
principles for language processing in humans and deep language models. Nature Neuroscience 25, 369– |
|
380. doi: 10.1038/s41593-022-01026-4 (2022). |
|
12. Lau, J. H., Clark, A. & Lappin, S. Grammaticality, Acceptability, and Probability: A Probabilistic View |
|
of Linguistic Knowledge. Cognitive Science 41, 1202–1241. doi: 10.1111/cogs.12414 (2017). |
|
13. Lau, J. H., Armendariz, C., Lappin, S., Purver, M. & Shu, C. How Furiously Can Colorless Green Ideas |
|
Sleep? Sentence Acceptability in Context. Transactions of the Association for Computational Linguistics |
|
8, 296–310. doi: 10.1162/tacl_a_00315 (2020). |
|
14. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O. & Bowman, S. R. GLUE: A Multi-Task Benchmark |
|
and Analysis Platform for Natural Language Understanding in7th International Conference on Learning |
|
Representations, ICLR 2019, (New Orleans, LA, USA, 2019). URL:openreview.net/forum?id= |
|
rJ4km2R5t7 . |
|
15. Wang, A., Pruksachatkun, Y ., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O. & Bowman, S. Su- |
|
perGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems inAdvances |
|
in Neural Information Processing Systems 32 (Vancouver, BC, Canada, 2019). URL:proceedings. |
|
neurips.cc/paper/2019/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf . |
|
16. Warstadt, A., Parrish, A., Liu, H., Mohananey, A., Peng, W., Wang, S.-F. & Bowman, S. R. BLiMP: The |
|
Benchmark of Linguistic Minimal Pairs for English. Transactions of the Association for Computational |
|
Linguistics 8, 377–392. doi: 10.1162/tacl_a_00321 (2020). |
|
17. Kiela, D., Bartolo, M., Nie, Y ., Kaushik, D., Geiger, A., Wu, Z., Vidgen, B., Prasad, G., Singh, A., Ring- |
|
shia, P., Ma, Z., Thrush, T., Riedel, S., Waseem, Z., Stenetorp, P., Jia, R., Bansal, M., Potts, C. & Williams, |
|
A.Dynabench: Rethinking Benchmarking in NLP inProceedings of the 2021 Conference of the North |
|
American Chapter of the Association for Computational Linguistics: Human Language Technologies |
|
(Online, 2021), 4110–4124. doi: 10.18653/v1/2021.naacl-main.324 . |
|
18. Box, G. E. & Hill, W. J. Discrimination Among Mechanistic Models. Technometrics 9, 57–71. doi: 10. |
|
1080/00401706.1967.10490441 (1967). |
|
19. Golan, T., Raju, P. C. & Kriegeskorte, N. Controversial stimuli: Pitting neural networks against each other |
|
as models of human cognition. Proceedings of the National Academy of Sciences 117, 29330–29337. |
|
doi:10.1073/pnas.1912334117 (2020). |
|
20. Cross, D. V . Sequential dependencies and regression in psychophysical judgments. Perception & Psy- |
|
chophysics 14, 547–552. doi: 10.3758/BF03211196 (1973). |
|
21. Foley, H. J., Cross, D. V. & O'Reilly, J. A. Pervasiveness and magnitude of context effects: Evidence
|
for the relativity of absolute magnitude estimation. Perception & Psychophysics 48, 551–558. doi: 10. |
|
3758/BF03211601 (1990). |
|
22. Petzschner, F. H., Glasauer, S. & Stephan, K. E. A Bayesian perspective on magnitude estimation. Trends |
|
in Cognitive Sciences 19, 285–293. doi: 10.1016/j.tics.2015.03.002 (2015). |
|
23. Greenbaum, S. Contextual Influence on Acceptability Judgments. Linguistics 15. doi: 10.1515/ling. |
|
1977.15.187.5 (1977). |
|
24. Schütze, C. T. & Sprouse, J. in (eds Podesva, R. J. & Sharma, D.) 27–50 (Cambridge University Press,
|
Cambridge, 2014). doi: 10.1017/CBO9781139013734.004 . |
|
25. Sprouse, J. & Almeida, D. Design sensitivity and statistical power in acceptability judgment experiments. |
|
Glossa 2, 14. doi: 10.5334/gjgl.236 (2017). |
|
26. Lindsay, G. W. Convolutional Neural Networks as a Model of the Visual System: Past, Present, and |
|
Future. Journal of Cognitive Neuroscience 33, 2017–2031. doi: 10.1162/jocn_a_01544 (2021). |
|
27. Wehbe, L., Vaswani, A., Knight, K. & Mitchell, T. Aligning context-based statistical models of language |
|
with brain activity during reading inProceedings of the 2014 Conference on Empirical Methods in Nat- |
|
ural Language Processing (EMNLP) (Doha, Qatar, 2014), 233–243. doi: 10.3115/v1/D14-1030 . |
|
28. Toneva, M. & Wehbe, L. Interpreting and improving natural-language processing (in machines) with |
|
natural language-processing (in the brain) inAdvances in Neural Information Processing Systems 32 |
|
(Vancouver, BC, Canada, 2019). URL:proceedings . neurips . cc / paper / 2019 / file / |
|
749a8e6c231831ef7756db230b4359c8-Paper.pdf . |
|
29. Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P. & De Lange, F. P. A hierarchy of linguistic |
|
predictions during natural language comprehension. Proceedings of the National Academy of Sciences |
|
119, e2201968119. doi: 10.1073/pnas.2201968119 (2022). |
|
30. Jain, S., V o, V ., Mahto, S., LeBel, A., Turek, J. S. & Huth, A. Interpretable multi-timescale models for |
|
predicting fMRI responses to continuous natural speech inAdvances in Neural Information Processing |
|
Systems 33 (Online, 2020), 13738–13749. URL:proceedings.neurips.cc/paper_files/ |
|
paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-Paper.pdf . |
|
31. Lyu, B., Marslen-Wilson, W. D., Fang, Y . & Tyler, L. K. Finding structure in time: Humans, machines, |
|
and language. bioRxiv. Preprint at https://www.biorxiv.org/content/10.1101/2021. |
|
10.25.465687v2 (2021). |
|
32. Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J. B. & |
|
Fedorenko, E. The neural architecture of language: Integrative modeling converges on predictive pro- |
|
cessing. Proceedings of the National Academy of Sciences 118, e2105646118. doi: 10.1073/pnas. |
|
2105646118 (2021). |
|
33. Wilcox, E., Vani, P. & Levy, R. A Targeted Assessment of Incremental Processing in Neural Language |
|
Models and Humans inProceedings of the 59th Annual Meeting of the Association for Computational |
|
Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: |
|
Long Papers) (Online, 2021), 939–952. doi: 10.18653/v1/2021.acl-long.76 . |
|
34. Caucheteux, C. & King, J.-R. Brains and algorithms partially converge in natural language processing. |
|
Communications Biology 5, 134. doi: 10.1038/s42003-022-03036-1 (2022). |
|
35. Arehalli, S., Dillon, B. & Linzen, T. Syntactic Surprisal From Neural Models Predicts, But Underesti- |
|
mates, Human Processing Difficulty From Syntactic Ambiguities inProceedings of the 26th Conference |
|
on Computational Natural Language Learning (CoNLL) (Abu Dhabi, United Arab Emirates (Hybrid), |
|
2022), 301–313. doi: 10.18653/v1/2022.conll-1.20 . |
|
36. Merkx, D. & Frank, S. L. Human Sentence Processing: Recurrence or Attention? Proceedings of the |
|
Workshop on Cognitive Modeling and Computational Linguistics. doi:10.18653/v1/2021.cmcl- |
|
1.2.URL:dx.doi.org/10.18653/v1/2021.cmcl-1.2 (2021). |
|
37. Michaelov, J. A., Bardolph, M. D., Coulson, S. & Bergen, B. K. Different kinds of cognitive plausibility:
|
why are transformers better than RNNs at predicting N400 amplitude? inProceedings of the Annual Meet- |
|
ing of the Cognitive Science Society 43 (2021). URL:escholarship.org/uc/item/9z06m20f . |
|
38. Rakocevic, L. I. Synthesizing controversial sentences for testing the brain-predictivity of language models |
|
https://hdl.handle.net/1721.1/130713 . PhD thesis (Massachusetts Institute of Technol- |
|
ogy, 2021). |
|
39. Goodman, N. D. & Frank, M. C. Pragmatic Language Interpretation as Probabilistic Inference. Trends in |
|
Cognitive Sciences 20, 818–829. doi: 10.1016/j.tics.2016.08.005 (2016). |
|
40. Howell, S. R., Jankowicz, D. & Becker, S. A model of grounded language acquisition: Sensorimotor |
|
features improve lexical and grammatical learning. Journal of Memory and Language 53, 258–276. |
|
doi:https://doi.org/10.1016/j.jml.2005.03.002 (2005). |
|
41. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. & Fergus, R. Intriguing |
|
properties of neural networks Preprint at http://arxiv.org/abs/1312.6199 . 2013. |
|
42. Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and Harnessing Adversarial Examples in3rd |
|
International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, |
|
Conference Track Proceedings (2015). URL:arxiv.org/abs/1412.6572 . |
|
43. Zhang, W. E., Sheng, Q. Z., Alhazmi, A. & Li, C. Adversarial attacks on deep-learning models in natural |
|
language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST) 11, 1– |
|
41. doi: 10.1145/3374217 (2020). |
|
44. Liang, B., Li, H., Su, M., Bian, P., Li, X. & Shi, W. Deep Text Classification Can be Fooled inProceedings |
|
of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18 (Stockholm, |
|
Sweden, 2018), 4208–4215. doi: 10.24963/ijcai.2018/585 . |
|
45. Ebrahimi, J., Rao, A., Lowd, D. & Dou, D. HotFlip: White-Box Adversarial Examples for Text Classifica- |
|
tioninProceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume |
|
2: Short Papers) (Melbourne, Australia, 2018), 31–36. doi: 10.18653/v1/P18-2006 . |
|
46. Abdou, M., Ravishankar, V ., Barrett, M., Belinkov, Y ., Elliott, D. & Søgaard, A. The Sensitivity of Lan- |
|
guage Models and Humans to Winograd Schema Perturbations inProceedings of the 58th Annual Meet- |
|
ing of the Association for Computational Linguistics (Online, 2020), 7590–7604. doi: 10.18653/v1/ |
|
2020.acl-main.679 . |
|
47. Alzantot, M., Sharma, Y ., Elgohary, A., Ho, B.-J., Srivastava, M. & Chang, K.-W. Generating Natural |
|
Language Adversarial Examples inProceedings of the 2018 Conference on Empirical Methods in Natural |
|
Language Processing (Brussels, Belgium, 2018), 2890–2896. doi: 10.18653/v1/D18-1316 . |
|
48. Ribeiro, M. T., Singh, S. & Guestrin, C. Semantically Equivalent Adversarial Rules for Debugging NLP |
|
models inProceedings of the 56th Annual Meeting of the Association for Computational Linguistics |
|
(Volume 1: Long Papers) (Melbourne, Australia, 2018), 856–865. doi: 10.18653/v1/P18-1079 . |
|
49. Ren, S., Deng, Y ., He, K. & Che, W. Generating Natural Language Adversarial Examples through Prob- |
|
ability Weighted Word Saliency inProceedings of the 57th Annual Meeting of the Association for Com- |
|
putational Linguistics (Florence, Italy, 2019), 1085–1097. doi: 10.18653/v1/P19-1103 . |
|
50. Morris, J., Lifland, E., Lanchantin, J., Ji, Y . & Qi, Y . Reevaluating Adversarial Examples in Natural |
|
Language inFindings of the Association for Computational Linguistics: EMNLP 2020 (Online, 2020), |
|
3829–3839. doi: 10.18653/v1/2020.findings-emnlp.341 . |
|
51. Wallace, E., Rodriguez, P., Feng, S., Yamada, I. & Boyd-Graber, J. Trick Me If You Can: Human-in-the-
|
Loop Generation of Adversarial Examples for Question Answering. Transactions of the Association for |
|
Computational Linguistics 7, 387–401. doi: 10.1162/tacl_a_00279 (2019). |
|
52. Perez, E., Huang, S., Song, F., Cai, T., Ring, R., Aslanides, J., Glaese, A., McAleese, N. & Irving, G. |
|
Red Teaming Language Models with Language Models inProceedings of the 2022 Conference on Em- |
|
pirical Methods in Natural Language Processing (Abu Dhabi, United Arab Emirates, 2022), 3419–3448. |
|
doi:10.18653/v1/2022.emnlp-main.225 . |
|
53. Gibson, E. Linguistic complexity: locality of syntactic dependencies. Cognition 68, 1–76. doi: 10.1016/ |
|
S0010-0277(98)00034-1 (1998). |
|
54. Watt, W. C. The indiscreteness with which impenetrables are penetrated. Lingua 37, 95–128. doi: 10. |
|
1016/0024-3841(75)90046-7 (1975). |
|
55. Schütze, C. T. The empirical base of linguistics. Grammaticality judgments and linguistic methodology
|
Classics in Linguistics 2.doi:10.17169/langsci.b89.100 (Language Science Press, Berlin, |
|
2016). |
|
56. Bird, S., Klein, E. & Loper, E. Natural language processing with Python: analyzing text with the natural |
|
language toolkit (O'Reilly Media, Inc., Sebastopol, CA, USA, 2009).
|
57. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, |
|
N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, |
|
S., Steiner, B., Fang, L., Bai, J. & Chintala, S. PyTorch: An Imperative Style, High-Performance Deep |
|
Learning Library inAdvances in Neural Information Processing Systems 32 (Vancouver, BC, Canada, |
|
2019), 8024–8035. URL:papers.neurips.cc/paper/9015-pytorch-an-imperative- |
|
style-high-performance-deep-learning-library.pdf . |
|
58. Wolf, T., Debut, L., Sanh, V ., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Fun- |
|
towicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y ., Plu, J., Xu, C., Scao, T. L., Gugger, |
|
S., Drame, M., Lhoest, Q. & Rush, A. M. Transformers: State-of-the-Art Natural Language Processing |
|
inProceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System |
|
Demonstrations (Online, 2020), 38–45. doi: 10.18653/v1/2020.emnlp-demos.6 . |
|
59. Yamakoshi, T., Griffiths, T. & Hawkins, R. Probing BERT’s priors with serial reproduction chains in |
|
Findings of the Association for Computational Linguistics: ACL 2022 (Dublin, Ireland, 2022), 3977– |
|
3992. doi: 10.18653/v1/2022.findings-acl.314 . |
|
60. Chestnut, S. Perplexity. Accessed: 2022-09-23. 2019. URL: drive.google.com/uc?export=download&id=1gSNfGQ6LPxlNctMVwUKrQpUA7OLZ83PW.
|
61. Van Heuven, W. J. B., Mandera, P., Keuleers, E. & Brysbaert, M. Subtlex-UK: A New and Improved Word |
|
Frequency Database for British English. Quarterly Journal of Experimental Psychology 67, 1176–1190. |
|
doi:10.1080/17470218.2013.850521 (2014). |
|
62. Wang, Z. & Simoncelli, E. P. Maximum differentiation (MAD) competition: A methodology for compar- |
|
ing computational models of perceptual quantities. Journal of Vision 8, 8–8. doi: 10.1167/8.12.8 |
|
(2008). |
|
63. Benjamini, Y . & Hochberg, Y . Controlling the False Discovery Rate: A Practical and Powerful Approach |
|
to Multiple Testing. Journal of the Royal Statistical Society: Series B (Methodological) 57, 289–300. |
|
doi:10.1111/j.2517-6161.1995.tb02031.x (1995). |
|
64. Wang, A. & Cho, K. BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language
|
Model inProceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language |
|
Generation (Minneapolis, Minnesota, 2019), 30–36. doi: 10.18653/v1/W19-2304 . |
|
65. Cho, K. BERT has a Mouth and must Speak, but it is not an MRF kyunghyuncho.me/bert-has- |
|
a-mouth-and-must-speak-but-it-is-not-an-mrf/ . Accessed: 2022-09-28. 2019. |
|
66. Salazar, J., Liang, D., Nguyen, T. Q. & Kirchhoff, K. Masked Language Model Scoring inProceedings |
|
of the 58th Annual Meeting of the Association for Computational Linguistics (Online, 2020), 2699–2712. |
|
doi:10.18653/v1/2020.acl-main.240 . |
|
67. Golan, T., Siegelman, M., Kriegeskorte, N. & Baldassano, C. Code and data for ”Testing the limits of |
|
natural language models for predicting human language judgments” version 1.2.2. 2023. doi: 10.5281/ |
|
zenodo.8147166 . |
|
68. Shannon, C. E. A mathematical theory of communication. The Bell System Technical Journal 27, 379– |
|
423. doi: 10.1002/j.1538-7305.1948.tb01338.x (1948). |
|
69. Irvine, A., Langfus, J. & Callison-Burch, C. The American Local News Corpus inProceedings of the Ninth |
|
International Conference on Language Resources and Evaluation (LREC’14) (Reykjavik, Iceland, 2014), |
|
1305–1308. URL:www.lrec-conf.org/proceedings/lrec2014/pdf/914_Paper.pdf . |
|
70. Kneser, R. & Ney, H. Improved backing-off for m-gram language modeling in1995 international confer- |
|
ence on acoustics, speech, and signal processing 1 (1995), 181–184. doi: 10.1109/ICASSP.1995. |
|
479394 . |
|
71. Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual 2021. URL:www.gurobi.com . |
|
72. Woodbury, M. A. Rank Correlation when There are Equal Variates. The Annals of Mathematical Statistics |
|
11, 358–362. URL:www.jstor.org/stable/2235684 (1940). |
|
73. Schütt, H. H., Kipnis, A. D., Diedrichsen, J. & Kriegeskorte, N. Statistical inference on representational
|
geometries. eLife 12 (eds Serences, J. T. & Behrens, T. E.) e82566. doi: 10.7554/eLife.82566 |
|
(2023). |
|
74. Nili, H., Wingfield, C., Walther, A., Su, L., Marslen-Wilson, W. & Kriegeskorte, N. A Toolbox for Rep- |
|
resentational Similarity Analysis. PLOS Computational Biology 10, 1–11. doi: 10.1371/journal. |
|
pcbi.1003553 (2014). |
|
75. Pennington, J., Socher, R. & Manning, C. GloVe: Global Vectors for Word Representation inProceedings |
|
of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Doha, Qatar, |
|
2014), 1532–1543. doi: 10.3115/v1/D14-1162 . |
|
76. Frank, S. L. & Willems, R. M. Word predictability and semantic similarity show distinct patterns of |
|
brain activity during language comprehension. Language, Cognition and Neuroscience 32, 1192–1203. |
|
doi:10.1080/23273798.2017.1323109 (2017). |
|
5 Extended Data |
|
Extended Data Figure 1: An example of one experimental trial, as presented to the participants. The participant must choose one sentence while providing their confidence rating on a 3-point scale.
|
[Extended Data Figure 2 is a 9 × 9 heatmap over the models GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, and 2-gram (model 1 × model 2); color scale: between-model agreement rate (proportion of sentence pairs).]

Extended Data Figure 2: Between-model agreement rate on the probability ranking of the 90 randomly sampled and paired natural sentence pairs evaluated in the experiment. Each cell represents the proportion of sentence pairs for which two models make a congruent probability ranking (i.e., both models assign a higher probability to sentence 1, or both models assign a higher probability to sentence 2).
|
[Extended Data Figure 3 consists of two 9 × 9 heatmaps over the models GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, and 2-gram (model 1 × model 2): (a) natural controversial sentences, (b) synthetic controversial sentences; color scale: human choice aligned with model 1 (proportion of trials).]

Extended Data Figure 3: Pairwise model comparison of model-human consistency. For each pair of models (represented as one cell in the matrices), the only trials considered were those in which the stimuli were either selected (a) or synthesized (b) to contrast the predictions of the two models. For these trials, the two models always made controversial predictions (i.e., one sentence is preferred by the first model and the other sentence is preferred by the second model). The matrices depict the proportion of trials in which the binarized human judgments aligned with the row model ("model 1"). For example, GPT-2 (top row) was always more aligned (green hues) with the human choices than its rival models. In contrast, 2-gram (bottom row) was always less aligned (purple hues) with the human choices than its rival models.
|
[Extended Data Figure 4 is a 9 × 9 heatmap; rows: models assigned as $m_{\text{accept}}$, columns: models assigned as $m_{\text{reject}}$ (GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, 2-gram); color scale: human choice aligned with $m_{\text{accept}}$ (proportion of trials).]

Extended Data Figure 4: Pairwise model analysis of human responses for natural vs. synthetic sentence pairs. In each optimization condition, a synthetic sentence $s$ was formed by modifying a natural sentence $n$ so that the synthetic sentence would be "rejected" by one model ($m_{\text{reject}}$, columns), minimizing $p(s \mid m_{\text{reject}})$, and would be "accepted" by another model ($m_{\text{accept}}$, rows), satisfying the constraint $p(s \mid m_{\text{accept}}) \geq p(n \mid m_{\text{accept}})$. Each cell summarizes model-human agreement in trials resulting from one such optimization condition. The color of each cell denotes the proportion of trials in which humans judged a synthetic sentence to be more likely than its natural counterpart and hence aligned with $m_{\text{accept}}$. For example, the top-right cell depicts human judgments for sentence pairs formed to minimize the probability assigned to the synthetic sentence by the simple 2-gram model while ensuring that GPT-2 would judge the synthetic sentence to be at least as likely as the natural sentence; humans favored the synthetic sentence in only 22 out of the 100 sentence pairs in this condition.
|
[Extended Data Figure 5 consists of two dot plots; x-axis: ordinal correlation between human ratings and models' sentence-pair probability log-ratio (signed-rank cosine similarity); rows: RoBERTa, RoBERTa (PLL), ELECTRA, ELECTRA (PLL), BERT, BERT (PLL). Panels: (a) randomly sampled natural-sentence pairs, (b) synthetic controversial sentence pairs.]

Extended Data Figure 5: Human consistency of bidirectional transformers: approximate log-likelihood versus pseudo-log-likelihood (PLL). Each dot depicts the ordinal correlation between the judgments of one participant and the predictions of one model. (a) The performance of BERT, RoBERTa, and ELECTRA in predicting the human judgments of randomly sampled natural sentence pairs in the main experiment, using two different likelihood measures: our novel approximate likelihood method (i.e., averaging multiple conditional probability chains, see Methods) and pseudo-log-likelihood (PLL, summing the probability of each word given all of the other words [64]). For each model, we statistically compared the two likelihood measures to each other and to the noise ceiling using a two-sided Wilcoxon signed-rank test across the participants. The false discovery rate was controlled at q < 0.05 for the 9 comparisons. When predicting human preferences of natural sentences, the pseudo-log-likelihood measure is at least as accurate as our proposed approximate log-likelihood measure. (b) Results from a follow-up experiment, in which we synthesized sentence pairs for each of the model pairs, pitting the two alternative likelihood measures against each other. Statistical testing was conducted in the same fashion as in panel a. These results indicate that for each of the three bidirectional language models, the approximate log-likelihood measure is considerably and significantly (q < 0.05) more human-consistent than the pseudo-log-likelihood measure. Synthetic controversial sentence pairs uncover a dramatic failure mode of the pseudo-log-likelihood measure, which remains covert when the evaluation is limited to randomly-sampled natural sentences. See Extended Data Table 2 for synthetic sentence pair examples.
|
[Extended Data Figure 6 is a plot of human-choice prediction accuracy (0-100%) for each model: GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, and 2-gram.]

Extended Data Figure 6: Model prediction accuracy for pairs of natural and synthetic sentences, evaluating each model across all of the sentence pairs in which it was targeted to rate the synthetic sentence to be less probable than the natural sentence. The data binning applied here is complementary to the one used in Fig. 3b, where each model was evaluated across all of the sentence pairs in which it was targeted to rate the synthetic sentence to be at least as probable as the natural sentence. Unlike Fig. 3b, where all of the models performed poorly, here no models were found to be significantly below the lower bound on the noise ceiling; typically, when a sentence was optimized to decrease its probability under any model (despite the sentence probability not decreasing under a second model), humans agreed that the sentence became less probable.
|
sentence log probability (model 1) log probability (model 2) # human choices |
|
n: I always cover for him and make excuses. logp(n|GPT-2 ) =−36.46 log p(n|2-gram ) =−106.95 10 |
|
s: We either wish for it or ourselves do. logp(s|GPT-2 ) =−36.15 logp(s|2-gram ) =−122.28 0 |
|
n: This is why I will never understand boys. logp(n|RoBERTa ) =−46.88 log p(n|2-gram ) =−103.11 10 |
|
s: This is why I will never kiss boys. logp(s|RoBERTa ) =−46.75 logp(s|2-gram ) =−107.91 0 |
|
n: One of the ones I did required it. logp(n|ELECTRA ) =−35.97 log p(n|LSTM ) =−40.89 10 |
|
s: Many of the years I did done so. logp(s|ELECTRA ) =−35.77 logp(s|LSTM ) =−46.25 0 |
|
n: There were no guns in the Bronze Age. logp(n|BERT ) =−48.48 log p(n|ELECTRA ) =−30.40 10 |
|
s: There is rich finds from the Bronze Age. logp(s|BERT ) =−48.46 logp(s|ELECTRA ) =−44.34 0 |
|
n: You did a great job on cleaning them. logp(n|XLM ) =−40.38 log p(n|RNN) =−43.47 10 |
|
s: She did a great job at do me. logp(s|XLM ) =−39.89 logp(s|RNN) =−61.03 0 |
|
n: This logic has always seemed flawed to me. logp(n|LSTM ) =−39.77 log p(n|RNN) =−45.92 10 |
|
s: His cell has always seemed instinctively to me. logp(s|LSTM ) =−38.89 logp(s|RNN) =−62.81 0 |
|
s: Stand near the cafe and sip your coffee. logp(s|RNN) =−65.55 log p(s|ELECTRA ) =−34.46 10 |
|
n: Sit at the front and break your neck. logp(n|RNN) =−44.18 logp(n|ELECTRA ) =−34.65 0 |
|
n: Most of my jobs have been like this. logp(n|3-gram ) =−80.72 log p(n|LSTM ) =−35.07 10 |
|
s: One of my boyfriend have been like this. logp(s|3-gram ) =−80.63 logp(s|LSTM ) =−41.44 0 |
|
n: They even mentioned that I offer white flowers. logp(n|2-gram ) =−113.38 log p(n|BERT ) =−62.81 10 |
|
s: But even fancied that would logically contradictory philosophies. logp(s|2-gram ) =−113.24 logp(s|BERT ) =−117.98 0 |
|
Extended Data Table 1: Examples of pairs of synthetic and natural sentences that maximally contributed to each |
|
model’s prediction error. For each model (double row, “model 1”), the table shows results for two sentences on which the |
|
model failed severely. In each case, the failing model 1 prefers synthetic sentence s (higher log probability bolded), while the
|
model it was pitted against (“model 2”) and all 10 human subjects presented with that sentence pair prefer natural sentence |
|
n. (When more than one sentence pair induced an equal maximal error in a model, the example included in the table was |
|
chosen at random.) |
|
sentence pseudo-log-likelihood (PLL) approximate log probability # human choices
|
s1: I found so many in things and called. logp(s1|BERT (PLL) ) =−55.14 log p(s1|BERT ) =−55.89 30 |
|
s2: Khrushchev schizophrenic so far |
|
disproportionately goldfish fished alone. logp(s2|BERT (PLL) ) =−22.84 logp(s2|BERT ) =−162.31 0 |
|
s1: Figures out if you are on the lead. logp(s1|BERT (PLL) ) =−38.11 log p(s1|BERT ) =−51.27 30 |
|
s2: Neighbours unsatisfactory indistinguishable |
|
misinterpreting schizophrenic on homecoming |
|
cheerleading. logp(s2|BERT (PLL) ) =−16.43 logp(s2|BERT ) =−258.91 0 |
|
s1: I just say this and not the point. logp(s1|ELECTRA (PLL) ) =−34.41 log p(s1|ELECTRA ) =−33.80 30 |
|
s2: Glastonbury reliably mobilize disenfranchised |
|
homosexuals underestimate unhealthy skeptics. logp(s2|ELECTRA (PLL) ) =−11.81 logp(s2|ELECTRA ) =−162.62 0 |
|
s1: And diplomacy is more people to the place. logp(s1|ELECTRA (PLL) ) =−62.81 log p(s1|ELECTRA ) =−47.33 30 |
|
s2: Brezhnev ingenuity disembarking Acapulco |
|
methamphetamine arthropods unaccompanied |
|
Khrushchev. logp(s2|ELECTRA (PLL) ) =−34.00 logp(s2|ELECTRA ) =−230.97 0 |
|
s1: Sometimes what looks and feels real to you. logp(s1|RoBERTa (PLL) ) =−36.58 log p(s1|RoBERTa ) =−51.61 30 |
|
s2: Buying something breathes or crawls |
|
aesthetically to decorate. logp(s2|RoBERTa (PLL) ) =−9.78 logp(s2|RoBERTa ) =−110.27 0 |
|
s1: In most other high priority packages were affected. logp(s1|RoBERTa (PLL) ) =−71.13 log p(s1|RoBERTa ) =−61.60 30 |
|
s2: Stravinsky cupboard nanny contented burglar |
|
babysitting unsupervised bathtub. logp(s2|RoBERTa (PLL) ) =−21.86 logp(s2|RoBERTa ) =−164.70 0 |
|
Extended Data Table 2: Examples of controversial synthetic-sentence pairs that maximally contributed to the pre- |
|
diction error of bidirectional transformers using pseudo-log-likelihood (PLL). For each bidirectional model, the table |
|
displays two sentence pairs on which the model failed severely when its prediction was based on pseudo-log-likelihood |
|
(PLL) estimates [64]. In each of these sentence pairs, the PLL estimate favors sentence s2(higher PLL bolded), while the |
|
approximate log-likelihood estimate and most of the human subjects presented with that sentence pair preferred sentence |
|
s1. (When more than one sentence pair induced an equal maximal error in a model, the example included in the table was |
|
chosen at random.) Sentences with long, multi-token words (e.g., “methamphetamine”) have high PLL estimates since |
|
each of their tokens is well predicted by the other tokens. And yet, the entire sentence is improbable according to
|
human judgments and approximate log-probability estimates based on proper conditional probability chains. |
|
6 Supplementary Methods
|
6.1 Language models |
|
N-gram models. N-gram models [68], the simplest language model class, are trained by counting the |
|
number of occurrences of all unique phrases of length N words in large text corpora. N-gram models make |
|
predictions about upcoming words by using empirical conditional probabilities in the training corpus. We |
|
tested both 2-gram and 3-gram variants. In 2-gram models, all unique two-word phrases are counted, and |
|
each upcoming-word probability (the probability of $w_2$ conditioned on the previous word $w_1$) is determined by dividing the count of the 2-gram $w_1, w_2$ by the count of the unigram (word) $w_1$. In 3-gram models, all unique three-word phrases are counted, and upcoming-word probabilities (the probability of $w_3$ conditioned on the previous words $w_1$ and $w_2$) are determined by dividing the count of the 3-gram $w_1, w_2, w_3$ by the count of the 2-gram $w_1, w_2$. In both such models, sentence probabilities can be computed as the product of all unidirectional
|
word transition probabilities in a given sentence. We trained both the 2-gram and 3-gram models on a large |
|
corpus composed of text from four sources: 1. public comments from the social media website Reddit |
|
(reddit.com) acquired using the public API at pushshift.io, 2. articles from Wikipedia, 3. English books and poetry available for free at Project Gutenberg (gutenberg.org), and 4. articles compiled in
|
the American Local News Corpus [69]. The n-gram probability estimates were regularized by means of |
|
Kneser-Ney smoothing [70]. |
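As a toy illustration of the counting step (unsmoothed maximum-likelihood estimates; the models used here additionally apply Kneser-Ney smoothing):

```python
import math
from collections import Counter

def train_bigram(tokenized_sentences):
    """Count unigrams and bigrams over a corpus given as lists of words."""
    unigrams, bigrams = Counter(), Counter()
    for words in tokenized_sentences:
        unigrams.update(words)
        bigrams.update(zip(words[:-1], words[1:]))
    return unigrams, bigrams

def bigram_sentence_logprob(words, unigrams, bigrams):
    """log p(sentence) = sum of log p(w_t | w_{t-1}); assumes every bigram was
    observed (real models avoid zero counts via smoothing)."""
    return sum(math.log(bigrams[(a, b)] / unigrams[a])
               for a, b in zip(words[:-1], words[1:]))
```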
|
Recurrent neural network models. We also tested two recurrent neural network models, including a |
|
simple recurrent neural network (RNN) [1] and a more complex long short-term memory recurrent neural |
|
network (LSTM) [2]. We trained both of these models on a next word prediction task using the same corpus |
|
used to train the n-gram models. Both the RNN and LSTM had a 256-feature embedding size and a 512- |
|
feature hidden state size, and were trained over 100 independent batches of text for 50 epochs with a learning |
|
rate of .002. Both models’ training sets were tokenized into individual words and consisted of a vocabulary |
|
of 94,607 unique tokens.
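A schematic PyTorch definition with the stated dimensions (a sketch, not the authors' training code):

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Word-level LSTM language model: 256-dimensional embeddings, 512-dimensional
    hidden state, next-word prediction over the full vocabulary."""
    def __init__(self, vocab_size=94_607, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):                    # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))      # (batch, seq_len, hidden)
        return self.out(h)                           # next-word logits

def sentence_logprob(model, token_ids):
    """Sum of log p(w_t | w_<t) over the sentence (token_ids: (1, seq_len))."""
    logits = model(token_ids[:, :-1])
    logp = torch.log_softmax(logits, dim=-1)
    return logp.gather(-1, token_ids[:, 1:].unsqueeze(-1)).sum()
```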
|
Transformer models. Similar to RNNs, transformers are designed to make predictions about sequential |
|
inputs. However, transformers do not use a recurrent architecture, and have a number of more complex |
|
architectural features. For example, unlike the fixed token embeddings in classic RNNs, transformers utilize |
|
context-dependent embeddings that vary depending on a token’s position. Most transformers also contain |
|
multiple attention heads in each layer of the model, which can help direct the model to relevant tokens in |
|
complex ways. We tested five models with varying architectures and training procedures, including BERT |
|
[3], RoBERTa [4], XLM [5], ELECTRA [6], and GPT-2 [7]. |
|
• We used the large version of BERT (bi-directional encoder representations from transformers), con-
|
taining 24 encoding layers, 1024 hidden units in the feedforward network element of the model, and |
|
16 attention heads. BERT is a bi-directional model trained to perform two different tasks: 1. a masked |
|
language modeling (MLM) task, in which 15 percent of tokens are replaced with a special [MASK] |
|
token and BERT must predict the masked word, and 2. next sentence prediction (NSP), in which |
|
BERT aims to predict the upcoming sentence in the training corpus given the current sentence. |
|
• RoBERTa is also a bi-directional model that uses the same architecture as BERT. However, RoBERTa |
|
was trained on exclusively the masked word prediction task (and not next sentence prediction), and |
|
used a different optimization procedure (including longer training on a larger dataset). This makes |
|
empirical comparisons between BERT and RoBERTa particularly interesting, because they differ only |
|
in training procedure and not architecture. |
|
• XLM is a cross-lingual bi-directional model which, too, shares BERT's original architecture. XLM is
|
trained on three different tasks: 1. the same MLM task used in both BERT and RoBERTa, 2. a causal |
|
language modeling task where upcoming words are predicted from left to right, and 3. a translation |
|
modeling task. On this task, each training example consists of the same text in two languages, and the |
|
model performs a masked language modeling task using context from one language to predict tokens |
|
of another. Such a task can help the XLM model become robust to idiosyncrasies of one particular |
|
language that may not convey much linguistic information. |
|
• The ELECTRA model uses a training approach that involves two transformer models: a generator |
|
and a discriminator. While the generator performs a masked language modeling task similar to other |
|
transformers, the discriminator simultaneously tries to figure out which masked tokens were replaced |
|
by the generator. This task may be more efficient than pure masked token prediction, because it uses |
|
information from all input tokens rather than only the masked subset. |
|
• GPT-2, the second iteration of OpenAI's GPT model, is the only unidirectional transformer
|
model that we tested. We used the pretrained GPT-2-xl version, with 48 encoding layers and 25 |
|
attention heads in each layer. Because GPT-2 is unidirectional, it was trained only on the causal
|
language modeling task, in which tokens are predicted from left to right. |
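For a unidirectional model such as GPT-2, the sentence log-probability is an exact chain of next-token probabilities; below is a minimal sketch using the Hugging Face transformers library (in practice, a beginning-of-text token is typically prepended so that the first word is also scored):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

@torch.no_grad()
def gpt2_sentence_logprob(sentence: str) -> float:
    """Sum of left-to-right token log-probabilities, log p(w_t | w_<t)."""
    ids = tok(sentence, return_tensors="pt")["input_ids"]      # (1, seq_len)
    logp = torch.log_softmax(model(ids).logits[:, :-1], dim=-1)
    targets = ids[:, 1:]                                       # tokens 2..n
    return logp.gather(-1, targets.unsqueeze(-1)).sum().item()
```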
|
6.2 Selection of controversial natural-sentence pairs |
|
We evaluated 231,725 eight-word sentences sampled from Reddit. Reddit comments were scraped from
|
across the entire website and all unique eight-word sentences were saved. These sentences were subse- |
|
quently filtered to exclude blatant spelling errors, inappropriate language, and individual words that were |
|
not included in the corpus used to train the n-gram and recurrent neural network models in our experiment. |
|
We estimated $\log p(s \mid m)$ for each natural sentence $s$ and each model $m$ as described above. We then rank-transformed the sentence probabilities separately for each model, assigning the fractional rank $r(s \mid m) = 0$ to the least probable sentence according to model $m$ and $r(s \mid m) = 1$ to the most probable one. This step eliminated differences between models in terms of probability calibration.
|
Next, we aimed to filter this corpus for controversial sentences. To prune the candidate sentences, we |
|
eliminated any sentence $s$ for which no pair of models $m_1, m_2$ held $(r(s \mid m_1) < 0.5)$ and $(r(s \mid m_2) \geq 0.5)$, where $r(s \mid m)$ is the fractional rank assigned to sentence $s$ by model $m$. This step ensured that all of the
|
remaining sentences had a below-median probability according to one model and above-median probability |
|
according to another, for at least one pair of models. We also excluded sentences in which any word (except |
|
for prepositions) appeared more than once. After this pruning, 85,749 candidate sentences remained, from which $\binom{85{,}749}{2} \approx 3.67 \times 10^9$ possible sentence pairs can be formed.
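A small sketch of the rank transform and the pruning criterion (names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata

def fractional_ranks(log_probs):
    """log_probs: (n_sentences, n_models). Returns ranks rescaled to [0, 1]
    separately for each model (0 = least probable, 1 = most probable)."""
    ranks = np.apply_along_axis(rankdata, 0, log_probs)
    return (ranks - 1) / (log_probs.shape[0] - 1)

def keep_controversial(r):
    """Keep sentences ranked below the median by at least one model and at or
    above the median by another."""
    return (r < 0.5).any(axis=1) & (r >= 0.5).any(axis=1)
```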
|
We aimed to select 360 controversial sentence pairs, devoting 10 sentence pairs to each of the 36 model pairs. First, we defined two 360-long integer vectors $m^1$ and $m^2$, specifying for each of the 360 yet-unselected sentence pairs which model pair they contrast. We then selected 360 sentence pairs
|
$(s^1_1, s^2_1), (s^1_2, s^2_2), \ldots, (s^1_{360}, s^2_{360})$ by solving the following minimization problem:

$$\{(s^{1*}_j, s^{2*}_j) \mid j = 1, 2, \ldots, 360\} = \operatorname*{argmin}_{s^1, s^2} \sum_j \left[ r(s^1_j \mid m^1_j) + r(s^2_j \mid m^2_j) \right] \qquad (5)$$

$$\text{subject to} \quad \forall j \;\; r(s^1_j \mid m^2_j) \geq 0.5 \qquad (5a)$$

$$\forall j \;\; r(s^2_j \mid m^1_j) \geq 0.5 \qquad (5b)$$

$$\text{All 720 sentences are unique.} \qquad (5c)$$
|
To achieve this, we used integer linear programming (ILP) as implemented by Gurobi [71]. We represented sentence allocation as a sparse binary tensor $S$ of dimensions 85,749 × 360 × 2 (sentences, trials, pair members) and the fractional sentence-probability ranks as a matrix $R$ of dimensions 85,749 × 9 (sentences, models). This enabled us to express and solve the selection problem in Eq. 5 as a standard ILP problem:

$$S^* = \operatorname*{argmin}_{S} \sum_{i,j} S_{i,j,1} R_{i,m^1_j} + S_{i,j,2} R_{i,m^2_j} \qquad (6)$$

$$\text{subject to} \quad S_{i,j,1} R_{i,m^2_j} \geq 0.5 \qquad (6a)$$

$$S_{i,j,2} R_{i,m^1_j} \geq 0.5 \qquad (6b)$$

$$\forall i \;\; \sum_{j,k} S_{i,j,k} \leq 1 \quad \text{(each sentence $i$ is used only once in the experiment)} \qquad (6c)$$

$$\forall j \;\; \sum_i S_{i,j,1} = 1, \quad \forall j \;\; \sum_i S_{i,j,2} = 1 \quad \text{(each trial $j$ is allocated exactly one sentence pair)} \qquad (6d)$$

$$S \text{ is binary} \qquad (6e)$$
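The exact allocation was obtained with the Gurobi ILP above. The sketch below shows a simple greedy approximation that exploits the fact that, apart from the uniqueness constraint (6c), the objective and rank constraints decompose across sentences: each trial is filled with the lowest-rank unused sentences satisfying constraints 6a-6b. It is an illustration only and is not guaranteed to reach the ILP optimum.

```python
import numpy as np

def greedy_pair_selection(R, m1, m2):
    """R: (n_sentences, n_models) fractional ranks; m1, m2: per-trial model indices.
    Returns arrays s1, s2 of selected sentence indices (greedy, approximate)."""
    n_trials = len(m1)
    used = np.zeros(R.shape[0], dtype=bool)
    s1 = np.full(n_trials, -1)
    s2 = np.full(n_trials, -1)
    for j in range(n_trials):
        # Pair member 1: low rank under m1[j], rank >= 0.5 under m2[j]; vice versa
        # for pair member 2.
        for (m_obj, m_con), out in [((m1[j], m2[j]), s1), ((m2[j], m1[j]), s2)]:
            eligible = np.where(~used & (R[:, m_con] >= 0.5))[0]
            best = eligible[np.argmin(R[eligible, m_obj])]
            out[j] = best
            used[best] = True
    return s1, s2
```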
|
6.3 Evaluation of model-human consistency: Correlating model log-probability |
|
ratios to human Likert ratings |
|
For every model $m$ and experimental trial $i$, we evaluated the log-probability ratio for the trial's two sentences:

$$LR(s^1_i, s^2_i \mid m) = \log \frac{p(s^2_i \mid m)}{p(s^1_i \mid m)} \qquad (7)$$
|
The human Likert ratings were recoded to be symmetrical around zero, mapping the six ratings appear- |
|
ing in Extended Data Fig. 1 to (−2.5,−1.5,−0.5,+0.5,+1.5,+2.5). We then sought to correlate the model |
|
log-ratios with the zero-centered human Likert ratings, quantifying how well the model log-ratios were
|
associated with human sentence-likeliness judgments. To allow for an ordinal (not necessarily linear) asso- |
|
ciation between the log-ratios and Likert ratings, we rank-transformed both measures (ranking within each |
|
model or each human) while retaining the sign of the values. |
|
For each participant $h$:

$$r(s^1_i, s^2_i \mid h) = \operatorname{sign}\!\left(y_0(s^1_i, s^2_i \mid h)\right) \cdot R\!\left(y_0(s^1_i, s^2_i \mid h)\right), \qquad (8)$$

where $y_0(s^1_i, s^2_i \mid h)$ is the zero-centered Likert rating provided by subject $h$ for trial $i$ and $R(\cdot)$ is a rank transform using random tie-breaking.
|
For each model $m$:

$$r(s^1_i, s^2_i \mid m) = \operatorname{sign}\!\left(LR(s^1_i, s^2_i \mid m)\right) \cdot R\!\left(LR(s^1_i, s^2_i \mid m)\right). \qquad (9)$$
|
A valid correlation measure of the model ranks and human ranks must be invariant to whether one sentence was presented on the left ($s^1$) and the other on the right ($s^2$), or vice versa. Changing the sentence order within a trial would flip the signs of both the log-ratio and the zero-centered Likert rating. Therefore, the required correlation measure must be invariant to such coordinated sign flips, but not to flipping the sign of just one of the measures. Since cosine similarity maintains such invariance, we introduced signed-rank cosine similarity, an ordinal analog of cosine similarity, substituting signed ranks (as defined in Eqs. 8-9) for the raw data points:

$$SCSR = \frac{\sum_i r(s^1_i, s^2_i \mid m) \, r(s^1_i, s^2_i \mid h)}{\sqrt{\sum_i r(s^1_i, s^2_i \mid m)^2} \, \sqrt{\sum_i r(s^1_i, s^2_i \mid h)^2}}. \qquad (10)$$
|
To eliminate the noise contributed by random tie-breaking, we used a closed-form expression for the expected value of Eq. 10 over different random tie-breaking draws:

$$E(SCSR) = \frac{\sum_i E\!\left[r(s^1_i, s^2_i \mid m)\right] E\!\left[r(s^1_i, s^2_i \mid h)\right]}{\sqrt{\sum_{k=1}^n k^2} \, \sqrt{\sum_{k=1}^n k^2}} = \frac{\sum_i \bar{r}(s^1_i, s^2_i \mid m) \, \bar{r}(s^1_i, s^2_i \mid h)}{\sum_{k=1}^n k^2}, \qquad (11)$$
|
where $\bar{r}(\cdot)$ denotes the signed rank with average-rank assignment to ties instead of random tie-breaking, and $n$ denotes the number of evaluated sentence pairs. The expected value of the product in the numerator is equal to the product of the expected values of the factors, since the random tie-breaking within each factor is independent. The vector norms (the factors in the denominator) are constant since, given no zero ratings, each signed-rank rating vector always includes one of each rank $1$ to $n$ (where $n$ is the number of sentence pairs considered), and the signs are eliminated by squaring. This derivation follows a classical result for Spearman's $\rho$ [72] (see [73], appendix C, for a modern treatment). We empirically confirmed that averaging $SCSR$ as defined in Eq. 10 across a large number of random tie-breaking draws converges to $E(SCSR)$ as
|
defined in Eq. 11. This latter expression (whose computation requires no actual random tie-breaking) was |
|
used to quantify the correlation between each participant and model. |
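A compact Python sketch of the expected signed-rank cosine similarity (Eq. 11), using average ranks of the absolute values in place of random tie-breaking (variable names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata

def signed_ranks(values):
    """Signed ranks with average-rank tie handling (the expectation over random
    tie-breaking used in Eq. 11)."""
    return np.sign(values) * rankdata(np.abs(values), method="average")

def expected_scsr(model_log_ratios, human_ratings):
    """Expected signed-rank cosine similarity between one model and one participant.
    human_ratings are the zero-centered Likert ratings; assumes no zero entries."""
    rm = signed_ranks(model_log_ratios)
    rh = signed_ranks(human_ratings)
    n = len(rm)
    return np.sum(rm * rh) / np.sum(np.arange(1, n + 1) ** 2)
```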
|
For each participant, the lower bound on the noise ceiling was calculated by replacing the model-derived |
|
predictions with an across-participants average of the nine other participants’ signed-rank rating vectors. The |
|
lower bound plotted in main text Fig. 4 is an across-subject average of this estimate. An upper bound on the |
|
noise ceiling was calculated as a dot product between the participant’s expected signed-rank rating vector |
|
(¯ r/pPk2) and a normalized, across-participants average of the expected signed-rank rating vectors of all |
|
10 participants. |
|
Inference was conducted in the same fashion as that employed for the binarized judgments (Wilcoxon |
|
signed-rank tests across the 10 subject groups, controlling for false discovery rate). |
|
7 Supplementary Results |
|
7.1 Randomly sampled natural-sentence pairs fail to adjudicate among mod- |
|
els |
|
As a baseline, we created 90 pairs of natural sentences by randomly sampling from a corpus of 8-word
|
sentences appearing on Reddit. Evaluating the sentence probabilities assigned to the sentences by the dif- |
|
ferent models, we found that models tended to agree on which of the two sentences was more probable |
|
(Extended Data Fig. 2). The between-model agreement rate ranged from 75.6% of the sentence pairs for |
|
GPT-2 vs. RNN to 93.3% for GPT-2 vs. RoBERTa, with an average agreement between models of 84.5%. |
|
Main text Fig. 1a (left-hand panel) provides a detailed graphical depiction of the relationship between sen- |
|
tence probability ranks for one model pair (GPT-2 and RoBERTa). |
|
We divided these 90 pairs into 10 sets of nine sentence pairs and presented each set to a separate group of 10
|
subjects. To evaluate model-human alignment, we computed the proportion of trials where the model and the |
|
participant agreed on which sentence was more probable. All of the nine language models performed above |
|
chance (50% accuracy) in predicting the human choices for the randomly sampled natural sentence pairs
|
(main text Fig. 1a, right-hand panel). Since we presented each group of 10 participants with a unique set of |
|
sentence pairs, we could statistically test between-model differences while accounting for both participants |
|
and sentence pairs as random factors by means of a simple two-sided Wilcoxon signed-rank test conducted |
|
across the 10 participant groups. For the set of randomly sampled natural-sentence pairs, this test yielded |
|
no significant prediction accuracy differences between the candidate models (controlling for false discovery |
|
rate for all 36 model pairs at q<.05). This result is unsurprising considering the high level of between- |
|
model agreement on the sentence probability ranking within each of these sentence pairs. |
|
To obtain an estimate of the noise ceiling [74] (i.e., the best possible prediction accuracy for this dataset), |
|
we predicted each participant’s choices by the majority vote of the nine other participants who were pre- |
|
sented with the same trials. This measurement provided a lower bound on the noise ceiling. Including the |
|
participant’s own choice in the prediction yields an upper bound, since no set of predictions can be more |
|
human-aligned on average given the between-subject variability. For the randomly sampled natural sen- |
|
tences, none of the models were found to be significantly less accurate than the lower bound on the noise |
|
ceiling (controlling the false discovery rate for all nine models at q<.05). In other words, the 900 trials |
|
of randomly sampled and paired natural sentences provided no statistical evidence that any of the language |
|
models are human-inconsistent. |
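For concreteness, a minimal sketch of these leave-one-out bounds for the binarized judgments follows. Names are illustrative, and ties in the all-participants majority vote are broken in a fixed direction here as an arbitrary simplification.

```python
import numpy as np

def binarized_noise_ceiling(choices):
    """choices: (n_participants, n_trials) binary array (1 = chose sentence 1) for
    participants who saw the same trials (a sketch). Returns lower and upper
    bounds on the achievable human-choice prediction accuracy."""
    n_participants, _ = choices.shape
    majority_all = (choices.mean(axis=0) > 0.5).astype(int)  # includes each participant's own vote
    lower, upper = [], []
    for i in range(n_participants):
        target = choices[i]
        others = np.delete(choices, i, axis=0)
        majority_others = (others.mean(axis=0) > 0.5).astype(int)
        lower.append(np.mean(majority_others == target))  # majority of the other participants
        upper.append(np.mean(majority_all == target))     # majority of all participants
    return float(np.mean(lower)), float(np.mean(upper))
```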
|
7.2 Comparing the accuracy of unnormalized and normalized sentence probability estimates
|
Previous studies (e.g., [13]) have found that normalizing language model sentence probability estimates by |
|
the sentences’ token counts can result in greater alignment with human acceptability judgments. While we |
|
deliberately used unnormalized sentence log probability when designing the experiment, we evaluated the |
|
prediction accuracy of each model under such normalizations through a control analysis. |
|
Rather than predicting human judgments based only on the relative log probabilities of the two sen- |
|
tences, we instead used cross-validated logistic regression to predict human judgments using a combination |
|
of unnormalized log probability differences (“LP”) and two measures from Lau and colleagues [13] that |
|
incorporate information about the token counts in each sentence. The “MeanLP” measure normalized each |
|
sentence’s log probability by its token count T, whereas the “PenLP” measure divided each sentence’s log |
|
probability by a dampened version of its token count, $((5 + T)/(5 + 1))^{0.8}$. For models trained on whole
|
words (LSTM, RNN, 3-gram, and 2-gram), we used character count instead of token count. |
|
For each language model m, we fitted a separate logistic regression to predict the individual binarized |
|
sentence choices across the entire main experiment dataset by weighting the three predictors “LP,” “MeanLP,”
|
and “PenLP.” We did not include an intercept due to the symmetry of the prediction task (the presentation |
|
of sentences as sentence 1 or 2 was randomized). We cross-validated the logistic regression’s accuracy by |
|
leaving out one sentence pair at a time, using data from all conditions of the experiment. |
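The sketch below illustrates this control analysis for a single language model, using scikit-learn with its default regularization; the input arrays and their names (log probabilities, token counts, binarized choices, and sentence-pair identifiers) are hypothetical stand-ins for the actual data structures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def normalized_lp_accuracy(lp1, lp2, t1, t2, chose_s1, pair_ids):
    """Cross-validated accuracy of a logistic regression combining LP, MeanLP, and
    PenLP differences between sentence 1 and sentence 2 (a sketch)."""
    def predictors(lp, t):
        mean_lp = lp / t                                    # MeanLP
        pen_lp = lp / (((5.0 + t) / (5.0 + 1.0)) ** 0.8)    # PenLP (Lau et al.)
        return np.stack([lp, mean_lp, pen_lp], axis=1)

    X = predictors(lp1, t1) - predictors(lp2, t2)  # predictor differences per trial
    clf = LogisticRegression(fit_intercept=False)  # no intercept: the task is symmetric
    # leave out all trials belonging to one sentence pair at a time
    scores = cross_val_score(clf, X, chose_s1, groups=pair_ids, cv=LeaveOneGroupOut())
    return float(scores.mean())
```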
|
Taking token count into consideration led to minor improvements in the prediction accuracy of most |
|
models (an average improvement of 0.95%), but this adjustment did not change the hierarchy of the models |
|
in terms of their human consistency (Supplementary Fig. S1). We hypothesize that Lau and colleagues [13]
observed greater disparities between unnormalized and normalized probability measures than we did because
their experiment involved sentences of markedly different lengths.
|
7.3 Models differ in their sensitivity to low-level linguistic features
|
While the controversial sentences presented in this study were synthesized without consideration for partic- |
|
ular linguistic features, we performed a post hoc analysis to explore the contribution of different features |
|
to model and human preferences (Supplementary Fig. S3). For each controversial synthetic sentence pair, |
|
we computed the average log-transformed word frequency for each sentence (extracted from the publicly |
|
available SUBTLEX database [61]). We also computed the average pairwise correlation between semantic
|
GloVe vector representations [75] of all eight words, based on neuroimaging research showing that there |
|
are specific neural signatures evoked by dissimilarity in semantic vectors [10, 76]. We performed paired |
|
sample t-tests across sentence pairs between the linguistic feature preferences for models vs. humans, and |
|
found that GPT-2, LSTM, RNN, 3-gram, and 2-gram models were significantly more likely (vs. humans) |
|
to prefer sentences with low GloVe correlations, while ELECTRA was significantly more likely to prefer |
|
high GloVe correlations (controlling the false discovery rate for all nine models at q<.05). For word fre- |
|
quency, the RNN, 3-gram, and 2-gram models were significantly biased (vs. humans) to prefer sentences |
|
with low-frequency words, while ELECTRA and XLM showed a significant bias for high-frequency words. |
|
These results indicate that even strong models like GPT-2 and ELECTRA can exhibit subtle misalignments |
|
with humans in their response to simple linguistic features, when evaluated on sentences synthesized to be |
|
controversial. |
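A sketch of how these feature values and the model-vs-human comparison could be computed is given below; the GloVe vectors and per-pair preference arrays are assumed to be available, and all names are illustrative.

```python
import numpy as np
from scipy.stats import ttest_rel

def mean_pairwise_glove_correlation(word_vectors):
    """Average pairwise correlation among a sentence's word vectors.
    word_vectors: (n_words, dim) array of GloVe embeddings (a sketch)."""
    corr = np.corrcoef(word_vectors)          # word-by-word correlation matrix
    iu = np.triu_indices_from(corr, k=1)      # upper triangle, excluding the diagonal
    return float(corr[iu].mean())

def feature_bias_test(model_preferred_vals, human_preferred_vals):
    """Paired t-test across sentence pairs comparing a feature (e.g., mean GloVe
    correlation or mean log word frequency) of the sentences preferred by a model
    vs. those preferred by the human participants."""
    return ttest_rel(model_preferred_vals, human_preferred_vals)
```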
|
8 Supplementary Figures
|
[Figure: panels a and b show human-choice prediction accuracy (0 to 100%) for GPT-2, RoBERTa, ELECTRA, BERT, XLM, LSTM, RNN, 3-gram, and 2-gram; panel a: unnormalized sentence probability estimates; panel b: normalized sentence probability estimates.]
|
Supplementary Fig. S1: The predictivity of normalized and unnormalized log-probability measures. |
|
(a)Predicting human judgments from all conditions using only unnormalized log probability differences |
|
(equivalent to Fig. 4 in the main text, except using binarized accuracy as a dependent measure). (b)Binarized |
|
accuracy of the logistic regression optimally combining LP, MeanLP, and PenLP for each language model. |
|
Relative model performance is nearly identical in these two analyses, indicating that tokenization differences |
|
across models did not play a large confounding role in our main results. |
|
Supplementary Fig. S2: The task instructions provided to the participants at the beginning of the experimental session.
|
Supplementary Fig. S3: Linguistic feature values for synthetic sentence pairs. (a) GloVe correlation
|
values of the preferred and rejected sentence for each synthetic sentence pair. Each panel depicts preferences |
|
for both humans (red) and a specific model (black), for sentence pairs that this model was involved in |
|
synthesizing. Black sub-panel outlines indicate significant differences between the preferences of models |
|
and humans on that particular set of sentence pairs, according to a paired sample t-test (controlling for false |
|
discovery rate across all nine models at q<.05). (b)Same as (a), but for average log-transformed word |
|
frequency. |
|
9 Supplementary Tables
|
model      accepted sentence    equal           rejected sentence    p-value
           has more tokens      token counts    has more tokens
GPT-2      24                   13              3                    <0.0001
RoBERTa    6                    18              16                   0.0656
ELECTRA    12                   21              7                    0.3593
BERT       4                    8               28                   <0.0001
XLM        2                    16              22                   <0.0001
|
Supplementary Table S1: Token count control analysis. For each transformer model, we considered syn- |
|
thetic controversial sentence pairs where the other targeted model was also a transformer (a total of 40 |
|
sentence pairs per model). For each such pair, we evaluated the token count of the synthetic sentence to |
|
which the model assigned a higher probability (“accepted sentence”) and the token count of the synthetic |
|
sentence to which the model assigned a lower probability (“rejected sentence”). For each model, this table |
|
presents the number of sentence pairs in which the accepted sentence had a higher token count, both sen- |
|
tences had an equal number of tokens, and the rejected sentence had a higher token count. We compared the |
|
prevalence of higher token counts in accepted and rejected sentences using a binomial test ($H_0$: $p = 0.5$),
controlling the false discovery rate across the five comparisons.
|
For GPT-2, the accepted sentences had significantly more tokens than the rejected sentences, whereas for
BERT and XLM the rejected sentences had significantly more tokens. For RoBERTa and ELECTRA, no significant difference was
|
found. Note that since the controversial sentences are driven by relative model response properties, a sig- |
|
nificant difference for a particular model does not necessarily indicate that token count biases the model’s |
|
sentence probability estimates. For example, GPT-2’s apparent preference for sentences with a greater token |
|
count might reflect biases of the alternative models pitted against GPT-2. These models might prefer shorter |
|
sentences that exhibit undetected grammatical or semantic violations over longer but felicitous sentences. |
|
Overall, these results indicate that while certain models’ probability estimates might be biased by to- |
|
kenization, lower sentence probabilities were not systematically confounded by higher token counts. |
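Under the assumption that pairs with equal token counts are excluded from the test, the table's analysis can be sketched as follows; the counts in the commented usage example are taken from the table above, and the function and variable names are illustrative.

```python
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

def token_count_bias(counts, alpha=0.05):
    """counts: dict mapping model name -> (accepted_longer, equal, rejected_longer),
    as in Supplementary Table S1. Two-sided binomial test (H0: p = 0.5) on the
    non-tied pairs, with FDR correction across models (a sketch)."""
    models = sorted(counts)
    pvals = [binomtest(counts[m][0], counts[m][0] + counts[m][2], p=0.5).pvalue
             for m in models]
    _, pvals_fdr, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return dict(zip(models, pvals_fdr))

# illustrative usage with the counts reported in Supplementary Table S1:
# token_count_bias({"GPT-2": (24, 13, 3), "RoBERTa": (6, 18, 16),
#                   "ELECTRA": (12, 21, 7), "BERT": (4, 8, 28), "XLM": (2, 16, 22)})
```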
|