arxiv_id | abstract |
---|---|
2311.08526 | Named Entity Recognition (NER) is essential in various Natural Language Processing (NLP) applications. Traditional NER models are effective but limited to a set of predefined entity types. In contrast, Large Language Models (LLMs) can extract arbitrary entities through natural language instructions, offering greater flexibility. However, their size and cost, particularly for those accessed via APIs like ChatGPT, make them impractical in resource-limited scenarios. In this paper, we introduce a compact NER model trained to identify any type of entity. Leveraging a bidirectional transformer encoder, our model, GLiNER, facilitates parallel entity extraction, an advantage over the slow sequential token generation of LLMs. Through comprehensive testing, GLiNER demonstrates strong performance, outperforming both ChatGPT and fine-tuned LLMs in zero-shot evaluations on various NER benchmarks. |
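The parallel extraction idea above can be illustrated with a small, hedged sketch: span representations from any bidirectional encoder are scored against entity-type embeddings in a single matrix product, so every (span, type) pair is evaluated at once rather than token by token. The dimensions, pooling, and scoring head below are illustrative assumptions, not GLiNER's exact architecture.

```python
# Hedged sketch of parallel span-vs-type scoring in the spirit of GLiNER (arXiv:2311.08526).
# Dimensions, pooling, and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn

class SpanTypeScorer(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.span_proj = nn.Linear(2 * hidden, hidden)   # start/end token concat -> span vector
        self.type_proj = nn.Linear(hidden, hidden)       # entity-type embedding -> shared space

    def forward(self, token_states, span_starts, span_ends, type_states):
        # token_states: (T, H) from any bidirectional encoder
        # type_states:  (K, H), one pooled vector per entity-type prompt
        spans = torch.cat([token_states[span_starts], token_states[span_ends]], dim=-1)
        span_vecs = torch.relu(self.span_proj(spans))            # (S, H)
        type_vecs = torch.relu(self.type_proj(type_states))      # (K, H)
        # one sigmoid score per (span, type) pair, computed in parallel
        return torch.sigmoid(span_vecs @ type_vecs.T)            # (S, K)

# toy usage with random "encoder" outputs
T, H = 12, 256
token_states = torch.randn(T, H)
type_states = torch.randn(3, H)            # e.g. prompts for "person", "location", "drug"
starts = torch.tensor([0, 4, 7])
ends = torch.tensor([1, 5, 9])
scores = SpanTypeScorer(H)(token_states, starts, ends, type_states)
print(scores.shape)  # torch.Size([3, 3]): every candidate span scored against every type at once
```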
2404.03592 | Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of *weights*. However, much prior interpretability work has shown that *representations* encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of **Representation Finetuning (ReFT)** methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15×–65× more parameter-efficient than LoRA. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, instruction-tuning, and GLUE. In all these evaluations, our ReFTs deliver the best balance of efficiency and performance, and almost always outperform state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft. |
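As a concrete illustration of the intervention described above, the sketch below implements the LoReFT edit phi(h) = h + R^T (W h + b - R h) on a hidden-state tensor; the low-rank projection R and the linear map (W, b) are the only trainable parameters, while the base model stays frozen. Initialization details and the choice of layers and positions to intervene on are simplified assumptions.

```python
# Minimal sketch of the LoReFT intervention from the ReFT paper (arXiv:2404.03592):
#   phi(h) = h + R^T (W h + b - R h)
# with a low-rank R (ideally with orthonormal rows) and a learned linear map W, b.
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    def __init__(self, d_model: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(torch.empty(rank, d_model))
        nn.init.orthogonal_(self.R)                      # rows approximately orthonormal
        self.proj = nn.Linear(d_model, rank)             # W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., d_model) hidden states of a frozen base model
        delta = self.proj(h) - h @ self.R.T              # (..., rank), edit inside the subspace
        return h + delta @ self.R                        # map the edit back to model space

h = torch.randn(2, 8, 768)                               # (batch, positions, hidden)
print(LoReFTIntervention(768, 4)(h).shape)               # torch.Size([2, 8, 768])
```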
2404.03683 | Language models are rarely shown fruitful mistakes while training. They then struggle to look beyond the next token, suffering from a snowballing of errors and failing to predict the consequences of their actions several steps ahead. In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string — a stream of search (SoS). We propose a unified language for search that captures an array of different symbolic search strategies.
We demonstrate our approach using the simple yet difficult game of Countdown, where the goal is to combine input numbers with arithmetic operations to reach a target number.
We pretrain a transformer-based language model from scratch on a dataset of streams of search generated by heuristic solvers. We find that SoS pretraining increases search accuracy by 25% over models trained to predict only the optimal search trajectory.
We further finetune this model with two policy improvement methods: Advantage-Induced Policy Alignment (APA) and Self-Taught Reasoner (STaR) . The finetuned SoS models solve 36% of previously unsolved problems, including problems that cannot be solved by any of the heuristic solvers.
Our results indicate that language models can learn to solve problems via search, self-improve to flexibly use different search strategies, and potentially discover new ones. Code available at https://github.com/kanishkg/stream-of-search |
2311.04205 | Misunderstandings arise not only in interpersonal communication but also between humans and Large Language Models (LLMs) . Such discrepancies can make LLMs interpret seemingly unambiguous questions in unexpected ways, yielding incorrect responses.
While it is widely acknowledged that the quality of a prompt, such as a question, significantly impacts the quality of the response provided by LLMs, a systematic method for crafting questions that LLMs can better comprehend is still underdeveloped.
In this paper, we present a method named ‘Rephrase and Respond’ (RaR) , which allows LLMs to rephrase and expand questions posed by humans and provide responses in a single prompt. This approach serves as a simple yet effective prompting method for improving performance.
We also introduce a two-step variant of RaR, where a rephrasing LLM first rephrases the question and then passes the original and rephrased questions together to a different responding LLM. This facilitates the effective utilization of rephrased questions generated by one LLM with another. Our experiments demonstrate that our methods significantly improve the performance of different models across a wide range of tasks.
We further provide a comprehensive comparison between RaR and the popular Chain-of-Thought (CoT) methods, both theoretically and empirically. We show that RaR is complementary to CoT and can be combined with CoT to achieve even better performance. Our work not only contributes to enhancing LLM performance efficiently and effectively but also sheds light on a fair evaluation of LLM capabilities. Data and codes are available at https://github.com/uclaml/Rephrase-and-Respond. |
2209.09675 | Fast Function Extraction (FFX) is a deterministic algorithm for solving symbolic regression problems. We improve the accuracy of FFX by adding parameters to the arguments of nonlinear functions. Instead of only optimizing linear parameters, we optimize these additional nonlinear parameters with separable nonlinear least squares optimization using a variable projection algorithm. Both FFX and our new algorithm are applied to the PennML benchmark suite. We show that the proposed extensions of FFX lead to higher accuracy while providing models of similar length and with only a small increase in runtime on the given data. Our results are compared to a large set of regression methods that were already published for the given benchmark suite. |
2302.03213 | On-device Deep Neural Network (DNN) inference consumes significant computing resources and development efforts. To alleviate that, we propose LUT-NN, the first system to empower inference by table lookup, to reduce inference cost. LUT-NN learns the typical features for each operator, named centroids, and precomputes the results for these centroids to save in lookup tables. During inference, the results of the centroids closest to the inputs can be read directly from the table as the approximated outputs, without computations. LUT-NN integrates two major novel techniques: (1) differentiable centroid learning through backpropagation, which adapts three levels of approximation to minimize the accuracy impact of centroids; (2) table lookup inference execution, which comprehensively considers different levels of parallelism, memory access reduction, and dedicated hardware units for optimal performance. LUT-NN is evaluated on multiple real tasks, covering image and speech recognition and natural language processing. Compared to related work, LUT-NN improves accuracy by 66% to 92%, achieving a level similar to the original models. LUT-NN reduces the cost along all dimensions, including FLOPs, model size, latency, memory, and power. |
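A rough sketch of the table-lookup idea, under the assumption that an operator can be approximated product-quantization style: inputs are split into sub-vectors, each sub-vector is snapped to its nearest centroid, and precomputed centroid-times-weight partial results are summed. The paper's differentiable centroid learning is not shown; centroids here are random stand-ins.

```python
# Rough sketch of table-lookup inference in the spirit of LUT-NN (arXiv:2302.03213).
# Centroid learning is omitted; shapes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_sub, n_centroids = 64, 32, 8, 16
sub = d_in // n_sub

W = rng.standard_normal((d_in, d_out))
centroids = rng.standard_normal((n_sub, n_centroids, sub))   # per-subspace codebook (stand-in)
# precompute lookup tables once: partial output of every centroid in every subspace
tables = np.einsum('skc,scd->skd', centroids, W.reshape(n_sub, sub, d_out))  # (n_sub, n_centroids, d_out)

def lut_linear(x):
    # x: (d_in,) -> approximate x @ W via table lookups, no multiplications at inference
    parts = x.reshape(n_sub, sub)
    dists = ((centroids - parts[:, None, :]) ** 2).sum(-1)   # nearest centroid per subspace
    codes = dists.argmin(axis=1)                              # (n_sub,)
    return tables[np.arange(n_sub), codes].sum(axis=0)        # (d_out,)

x = rng.standard_normal(d_in)
approx, exact = lut_linear(x), x @ W
print(np.corrcoef(approx, exact)[0, 1])   # crude quality check of the approximation
```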
2106.10860 | Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies.
We introduce a learning-based algorithm for this task that greatly outperforms existing methods.
Experiments using hundreds of matrices from diverse domains show that it often runs faster than exact matrix products and faster than current approximate methods. In the common case that one matrix is known ahead of time,
our method also has the interesting property that it requires zero multiply-adds.
These results suggest that a mixture of hashing, averaging, and byte shuffling—the core operations of our method—could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar quantized matrix products that have recently been the focus of substantial research and hardware investment. |
2406.04520 | We introduce Natural Plan, a realistic planning benchmark in natural language containing 3 key tasks: Trip Planning, Meeting Planning, and Calendar Scheduling. We focus our evaluation on the planning capabilities of LLMs with full information on the task, by providing outputs from tools such as Google Flights, Google Maps, and Google Calendar as contexts to the models. This eliminates the need for a tool-use environment for evaluating LLMs on planning. We observe that Natural Plan is a challenging benchmark for state-of-the-art models. For example, in Trip Planning, GPT-4 and Gemini 1.5 Pro could only achieve 31.1% and 34.8% solve rate respectively. We find that model performance drops drastically as the complexity of the problem increases: all models perform below 5% when there are 10 cities, highlighting a significant gap in planning in natural language for SoTA LLMs. We also conduct extensive ablation studies on Natural Plan to further shed light on the (in)effectiveness of approaches such as self-correction, few-shot generalization, and in-context planning with long-contexts on improving LLM planning. |
2407.07071 | When asked to summarize articles or answer questions given a passage, large language models (LLMs) can hallucinate details and respond with unsubstantiated answers that are inaccurate with respect to the input context.
This paper describes a simple approach for detecting such contextual hallucinations. We hypothesize that contextual hallucinations are related to the extent to which an LLM attends to information in the provided context versus its own generations. Based on this intuition, we propose a simple hallucination detection model whose input features are given by the ratio of attention weights on the context versus newly generated tokens (for each attention head). We find that a linear classifier based on these lookback ratio features is as effective as a richer detector that utilizes the entire hidden states of an LLM or a text-based entailment model.
The lookback ratio-based detector—Lookback Lens—is found to transfer across tasks and even models, allowing a detector that is trained on a 7B model to be applied (without retraining) to a larger 13B model.
We further apply this detector to mitigate contextual hallucinations, and find that a simple classifier-guided decoding approach is able to reduce the amount of hallucination, for example by 9.6% in the XSum summarization task. Source code: github.com/voidism/Lookback-Lens |
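A hedged sketch of the lookback-ratio feature and the linear detector on top of it: for each attention head, the fraction of attention mass a generated token places on the context versus on previously generated tokens, averaged over a span and fed to logistic regression. The attention tensors and labels below are random stand-ins, and the exact span pooling is an assumption.

```python
# Hedged sketch of the "lookback ratio" feature from arXiv:2407.07071.
# Attention tensors are random stand-ins; the span pooling is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lookback_ratios(attn, n_ctx):
    # attn: (layers, heads, n_new, n_ctx + n_new) attention weights for generated tokens
    ctx = attn[..., :n_ctx].mean(-1)          # average attention mass on context tokens
    new = attn[..., n_ctx:].mean(-1)          # average attention mass on generated tokens
    ratios = ctx / (ctx + new + 1e-9)         # (layers, heads, n_new)
    return ratios.mean(-1).reshape(-1)        # one feature per head, averaged over the span

rng = np.random.default_rng(0)
n_layers, n_heads, n_ctx, n_new = 4, 8, 50, 10
X = np.stack([lookback_ratios(rng.random((n_layers, n_heads, n_new, n_ctx + n_new)), n_ctx)
              for _ in range(200)])
y = rng.integers(0, 2, size=200)              # 1 = hallucinated span, 0 = supported (toy labels)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))                        # on real features this is what separates hallucinations
```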
2310.06762 | Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing to both their simplicity and the models’ potential exposure during instruction tuning.
In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning.
All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs.
Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities.
For example, the accuracy of LLaMA2-Chat 13B on the GSM8K dataset declined precipitously from 28.8% to 2% after training on our datasets.
This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks and preserving the original prowess of LLMs.
Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines.
Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach. RCL integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks. |
2310.15147 | The rapid development of Large Language Models (LLMs) has led to great strides in model capabilities like reasoning and long-context understanding.
However, as LLMs are able to process longer contexts, it becomes more challenging to evaluate whether they have acquired certain capabilities, since the length of text (e.g., 100K tokens) they can process far exceeds what humans can reliably assess in a reasonable duration.
In this paper, we propose using complex synthetic tasks as a proxy evaluation method, and present S3Eval, a Synthetic, Scalable, Systematic evaluation suite for LLM evaluation.
As a synthetic benchmark, S3Eval enables the creation of any number of evaluation examples that are theoretically invisible to LLMs, mitigating the test set contamination issue.
The synthetic nature of S3Eval provides users full control over the dataset, allowing them to systematically probe LLM capabilities by scaling text length and varying task difficulty across diverse scenarios.
The strong correlation between S3Eval performance and scores on real-world benchmarks like Big-Bench Hard (BBH) demonstrates the soundness of using S3Eval for evaluating LLMs.
The in-depth analysis also uncovers additional insights, including performance drops when the answer is sparsely distributed or located in the middle of the context, as well as some counter-intuitive trends in model performance. Our code is available at https://github.com/lfy79001/SQLEval. |
2406.13121 | Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
Leveraging LCLMs’ ability to natively ingest and process entire corpora of information offers numerous advantages.
It enhances user-friendliness by eliminating the need for specialized knowledge of tools, provides robust end-to-end modeling that minimizes cascading errors in complex pipelines, and allows for the application of sophisticated prompting techniques across the entire system.
To assess this paradigm shift, we introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs’ performance on in-context retrieval and reasoning.
Our findings reveal LCLMs’ surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
However, LCLMs still face challenges in areas like compositional reasoning that are required in SQL-like tasks.
Notably, prompting strategies significantly influence performance, emphasizing the need for continued research as context lengths grow.
Overall, LOFT provides a rigorous testing ground for LCLMs, showcasing their potential to supplant existing paradigms and tackle novel tasks as model capabilities scale. The LOFT benchmark is available at https://github.com/google-deepmind/loft.
[Figure 1: An overview of the LOFT benchmark, made of six tasks which measure LCLMs’ ability to do in-context retrieval, reasoning, and many-shot learning on corpora up to millions of tokens. We compare the performance of LCLMs against specialized models (e.g., CLIP for visual retrieval), which often rely on complex task-specific fine-tuning or pipelining. Unlike specialized models, we show how LCLMs can simplify various tasks through Corpus-in-Context Prompting (Section 3).] |
2405.12130 | Low-rank adaptation (LoRA) is a popular parameter-efficient fine-tuning (PEFT) method for large language models (LLMs) .
In this paper, we analyze the impact of low-rank updating, as implemented in LoRA. Our findings suggest that the low-rank updating mechanism may limit the ability of LLMs to effectively learn and memorize new knowledge.
Inspired by this observation, we propose a new method called MoRA, which employs a square matrix to achieve high-rank updating while maintaining the same number of trainable parameters.
To achieve this, we introduce the corresponding non-parameterized operators to reduce the input dimension and increase the output dimension for the square matrix. Furthermore, these operators ensure that the weights can be merged back into LLMs, which means our method can be deployed like LoRA.
We perform a comprehensive evaluation of our method across five tasks: instruction tuning, mathematical reasoning, continual pretraining, memory and pretraining. Our method outperforms LoRA on memory-intensive tasks and achieves comparable performance on other tasks.
Our code will be available at https://github.com/kongds/MoRA. |
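One way to picture the square-matrix update is sketched below: a single r x r trainable matrix wrapped by non-parameterized compress/decompress operators, chosen here as simple truncation and zero-padding purely for illustration (the paper studies several operator choices). Because both operators are linear, the whole update collapses to a single matrix that can be merged into the frozen weight, which the final assertion checks.

```python
# Illustrative sketch of MoRA-style high-rank updating (arXiv:2405.12130).
# Truncation / zero-padding is one simple operator choice used only for illustration.
import torch
import torch.nn as nn

class MoRALayer(nn.Module):
    def __init__(self, d_model: int, r: int):
        super().__init__()
        assert r <= d_model
        self.d, self.r = d_model, r
        # randomly initialized here so the merge check below is non-trivial;
        # in practice this would start at zero, like LoRA's B matrix
        self.M = nn.Parameter(torch.randn(r, r) * 0.01)   # the only trainable parameters

    def compress(self, x):                                # non-parameterized: keep first r dims
        return x[..., : self.r]

    def decompress(self, y):                              # non-parameterized: zero-pad back to d
        return nn.functional.pad(y, (0, self.d - self.r))

    def forward(self, x):
        return self.decompress(self.compress(x) @ self.M.T)

    def merged_delta(self):
        # compress/decompress are linear, so the whole update is a d x d matrix
        # that can be added into the frozen weight, just like merging LoRA
        delta = torch.zeros(self.d, self.d)
        delta[: self.r, : self.r] = self.M
        return delta

layer = MoRALayer(d_model=768, r=256)
x = torch.randn(4, 768)
assert torch.allclose(layer(x), x @ layer.merged_delta().T, atol=1e-5)
```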
2309.17452 | Large language models have made significant progress in various language tasks, yet they still struggle with complex mathematics.
In this paper, we propose ToRA, a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the utilization of external tools (e.g., computation libraries and symbolic solvers), thereby amalgamating the analytical prowess of language and the computational efficiency of tools.
To trainToRA, we curate interactive tool-use trajectories on mathematical datasets, apply imitation learning on the annotations, and propose output space shaping to further refine models’ reasoning behavior.
As a result, ToRA models significantly outperform open-source models on 10 mathematical reasoning datasets across all scales, with 13%-19% absolute improvements on average.
Notably, ToRA-7B reaches 44.6% on the competition-level dataset MATH, surpassing the best open-source model WizardMath-70B by 22% absolute. ToRA-Code-34B is also the first open-source model that achieves an accuracy exceeding 50% on MATH, which significantly outperforms GPT-4’s CoT result, and is competitive with GPT-4 solving problems with programs.
Additionally, we conduct a comprehensive analysis of the benefits and remaining challenges of tool interaction for mathematical reasoning, providing valuable insights for future research. Code and models are available at https://github.com/microsoft/ToRA. |
2402.18153 | Transfer learning is a topic of significant interest in recent deep learning research because it enables faster convergence and improved performance on new tasks. While the performance of transfer learning depends on the similarity of the source data to the target data, it is costly to train a model on a large number of datasets. Therefore, pretrained models are generally blindly selected with the hope that they will achieve good performance on the given task. To tackle such suboptimality of the pretrained models, we propose an efficient and adaptive transfer learning scheme through dataset-conditioned pretrained weights sampling. Specifically, we use a latent diffusion model with a variational autoencoder that can reconstruct the neural network weights, to learn the distribution of a set of pretrained weights conditioned on each dataset for transfer learning on unseen datasets. By learning the distribution of a neural network over a variety of pretrained models, our approach enables adaptively sampling weights for unseen datasets, achieving faster convergence and competitive performance. |
2407.06677 | Is it always necessary to compute tokens from shallow to deep layers in Transformers? The continued success of vanilla Transformers and their variants suggests an undoubted “yes”. In this work, however, we attempt to break the depth-ordered convention by proposing a novel architecture dubbed mixture-of-modules (MoM) , which is motivated by an intuition that any layer, regardless of its position, can be used to compute a token as long as it possesses the needed processing capabilities.
The construction of MoM starts from a finite set of modules defined by multi-head attention and feed-forward networks, each distinguished by its unique parameterization. Two routers then iteratively select attention modules and feed-forward modules from the set to process a token. The selection dynamically expands the computation graph in the forward pass of the token, culminating in an assembly of modules.
We show that MoM provides not only a unified framework for Transformers and their numerous variants but also a flexible and learnable approach for reducing redundancy in Transformer parameterization.
We pre-train various MoMs using OpenWebText. Empirical results demonstrate that MoMs, of different parameter counts, consistently outperform vanilla transformers on both GLUE and XSUM benchmarks.
More interestingly, with a fixed parameter budget, MoM-large enables an over 38% increase in depth for computation graphs compared to GPT-2-large, resulting in absolute gains of 1.4 on GLUE and 1 on XSUM. On the other hand, MoM-large also enables an over 60% reduction in depth while involving more modules per layer, yielding a 16% reduction in TFLOPs and a 43% decrease in memory usage compared to GPT-2-large, while maintaining comparable performance. Code is available at https://github.com/gzhch/MoM |
2407.06483 | Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining.
But despite a flood of new methods, different types of interventions are largely developing independently.
In practice, multiple interventions must be applied sequentially to the same model, yet we lack standardized ways to study how interventions interact.
We fill this gap by introducing composable interventions, a framework to study the effects of using multiple interventions on the same language models, featuring new metrics and a unified codebase.
Using our framework, we conduct extensive experiments and compose popular methods from three emerging intervention categories—knowledge editing, model compression, and machine unlearning.
Our results from 310 different compositions uncover meaningful interactions: compression hinders editing and unlearning, composing interventions hinges on their order of application, and popular general-purpose metrics are inadequate for assessing composability.
Taken together, our findings showcase clear gaps in composability, suggesting a need for new multi-objective interventions. All of our code is public: github.com/hartvigsen-group/composable-interventions |
2305.14627 | Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations,
improving their factual correctness and verifiability.
Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches.
We propose ALCE, the first benchmark for Automatic LLMs’ Citation Evaluation.
ALCE collects a diverse set of questions and retrieval corpora
and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations.
We develop automatic metrics along three dimensions—fluency, correctness, and citation quality—and
demonstrate their strong correlation with human judgements.
Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement—for example, on the ELI5 dataset,
even the best models lack complete citation support 50% of the time.
Our analyses further highlight promising future directions, including
developing better retrievers,
advancing long-context LLMs,
and improving the ability to synthesize information from multiple sources. Our code and data are available at https://github.com/princeton-nlp/ALCE. |
2407.06023 | Large language models (LLMs) can spend extra compute during inference to generate intermediate thoughts, which helps to produce better final responses.
Since Chain-of-Thought,
many such System 2 techniques have been proposed, such as Rephrase and Respond, System 2 Attention, and Branch-Solve-Merge. In this work we investigate self-supervised methods to “compile” (distill) higher quality outputs from System 2 techniques back into LLM generations without intermediate reasoning token sequences, as this reasoning has been distilled into System 1. We show that several such techniques can be successfully distilled, resulting in improved results compared to the original System 1 performance, and with less inference cost than System 2. We posit that System 2 distillation will be an important feature of future continually learning AI systems, enabling them to focus System 2 capabilities
on the reasoning tasks that they cannot yet do well. |
2402.10890 | In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method.
We investigate the practical utility of two advanced planning methods, iterative correction and tree search.
We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking.
Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that:
(1) advanced planning methods demand discriminators with at least 90% accuracy to achieve significant improvements over re-ranking;
(2) current LLMs’ discrimination abilities have not met the needs of advanced planning methods to achieve such improvements;
(3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10–20 times slower but leads to negligible performance gains, which hinders its real-world applications. Code and data will be released at https://github.com/OSU-NLP-Group/llm-planning-eval. |
2406.04093 | Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release code and autoencoders for open-source models, as well as a visualizer. Our open-source code can be found at https://github.com/openai/sparse_autoencoder and our visualizer is hosted at https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html |
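A minimal sketch of the k-sparse (TopK) autoencoder idea: sparsity is enforced by keeping only the k largest latent pre-activations, so no L1 penalty or sparsity coefficient needs tuning. Normalization, decoder constraints, and the paper's dead-latent mitigations are omitted; sizes are arbitrary.

```python
# Minimal sketch of a k-sparse (TopK) autoencoder as described in arXiv:2406.04093.
# Normalization and dead-latent fixes from the paper are omitted.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model: int, n_latents: int, k: int):
        super().__init__()
        self.k = k
        self.enc = nn.Linear(d_model, n_latents)
        self.dec = nn.Linear(n_latents, d_model)

    def forward(self, x):
        pre = self.enc(x)
        vals, idx = pre.topk(self.k, dim=-1)             # keep only the k largest latents
        z = torch.zeros_like(pre).scatter(-1, idx, torch.relu(vals))
        return self.dec(z), z

sae = TopKSAE(d_model=512, n_latents=4096, k=32)
acts = torch.randn(8, 512)                                # stand-in for LM activations
recon, z = sae(acts)
loss = ((recon - acts) ** 2).mean()                       # plain reconstruction objective
print(loss.item(), int((z != 0).sum(-1)[0]))              # at most k active latents per example
```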
2407.03211 | Quantization techniques are widely used to improve inference speed and deployment of large language models. While a wide body of work examines the impact of quantized LLMs on English tasks, none have examined the effect of quantization across languages. We conduct a thorough analysis of quantized multilingual LLMs, focusing on their performance across languages and at varying scales. We use automatic benchmarks, LLM-as-a-Judge methods, and human evaluation, finding that (1) harmful effects of quantization are apparent in human evaluation, and automatic metrics severely underestimate the detriment: a 1.7% average drop in Japanese across automatic tasks corresponds to a 16.0% drop reported by human evaluators on realistic prompts; (2) languages are disparately affected by quantization, with non-Latin script languages impacted worst; and (3) challenging tasks such as mathematical reasoning degrade fastest. As the ability to serve low-compute models is critical for wide global adoption of NLP technologies, our results urge consideration of multilingual performance as a key evaluation criterion for efficient models. |
2407.03227 | We focus on Text-to-SQL semantic parsing from the perspective of Large Language Models. Motivated by challenges related to the size of commercial database schemata and the deployability of business intelligence solutions, we propose an approach that dynamically retrieves input database information and uses abstract syntax trees to select few-shot examples for in-context learning. Furthermore, we investigate the extent to which an in-parallel semantic parser can be leveraged for generating approximated versions of the expected SQL queries, to support our retrieval. We take this approach to the extreme—we adapt a model consisting of less than M parameters, to act as an extremely efficient approximator, enhancing it with the ability to process schemata in a parallelised manner. We apply our approach to monolingual and cross-lingual benchmarks for semantic parsing, showing improvements over state-of-the-art baselines. Comprehensive experiments highlight the contribution of modules involved in this retrieval-augmented generation setting, revealing interesting directions for future work. |
2407.04528 | Parameter-Efficient Fine-Tuning (PEFT) and Retrieval-Augmented Generation (RAG) have become popular methods for adapting large language models while minimizing compute requirements. In this paper, we apply PEFT methods (P-tuning, Adapters, and LoRA) to a modified Retrieval-Enhanced Transformer (RETRO) and a baseline GPT model across several sizes, ranging from 823 million to 48 billion parameters. We show that RETRO models outperform GPT models in zero-shot settings due to their unique pre-training process, but GPT models have higher performance potential with PEFT. Additionally, our study indicates that 8B parameter models strike an optimal balance between cost and performance, and that P-tuning lags behind other PEFT techniques. We further provide a comparative analysis between applying PEFT to an instruction-tuned RETRO model and a base RETRO model. This work presents the first comprehensive comparison of various PEFT methods integrated with RAG, applied to both GPT and RETRO models, highlighting their relative performance. |
2407.04153 | The feedforward (FFW) layers in standard transformer architectures incur a linear increase in computational costs and activation memory as the hidden layer width grows. Sparse mixture-of-experts (MoE) architectures have emerged as a viable approach to address this issue by decoupling model size from computational cost. The recent discovery of the fine-grained MoE scaling law shows that higher granularity leads to better performance. However, existing MoE models are limited to a small number of experts due to computational and optimization challenges. This paper introduces PEER (parameter efficient expert retrieval) , a novel layer design that utilizes the product key technique for sparse retrieval from a vast pool of tiny experts (over a million) . Experiments on language modeling tasks demonstrate that PEER layers outperform dense FFWs and coarse-grained MoEs in terms of performance-compute trade-off. By enabling efficient utilization of a massive number of experts, PEER unlocks the potential for further scaling of transformer models while maintaining computational efficiency. |
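The product-key retrieval that makes a million-expert pool tractable can be sketched as follows: the query is split in half, each half is scored against only sqrt(N) sub-keys, and the top experts are found among the combinations of the best sub-keys. Expert shape (a rank-1 MLP here), gating, and multi-head retrieval are simplifying assumptions rather than PEER's exact configuration.

```python
# Hedged sketch of product-key expert retrieval in the spirit of PEER (arXiv:2407.04153).
# Expert size, routing nonlinearity, and multi-head retrieval are simplified assumptions.
import torch

torch.manual_seed(0)
d, n, topk = 64, 32, 4                      # N = n*n = 1024 experts in this toy pool
sub_keys1 = torch.randn(n, d // 2)
sub_keys2 = torch.randn(n, d // 2)
expert_in = torch.randn(n * n, d)           # each "expert" here is a rank-1 MLP: d -> 1 -> d
expert_out = torch.randn(n * n, d)

def peer_layer(x):
    q1, q2 = x[: d // 2], x[d // 2 :]
    s1, i1 = (sub_keys1 @ q1).topk(topk)    # top sub-keys on each half of the query
    s2, i2 = (sub_keys2 @ q2).topk(topk)
    # scores of the topk*topk candidate experts are sums of sub-key scores
    cand_scores = (s1[:, None] + s2[None, :]).reshape(-1)
    cand_ids = (i1[:, None] * n + i2[None, :]).reshape(-1)
    best_scores, best = cand_scores.topk(topk)
    gates = torch.softmax(best_scores, dim=-1)
    ids = cand_ids[best]
    h = torch.relu(expert_in[ids] @ x)       # activate the selected tiny experts, (topk,)
    return (gates * h) @ expert_out[ids]     # mix their outputs, (d,)

print(peer_layer(torch.randn(d)).shape)      # torch.Size([64])
```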
2407.04078 | Large language models (LLMs) have made impressive progress in handling simple math problems, yet they still struggle with more challenging and complex mathematical tasks. In this paper, we introduce a series of LLMs that employ the Decomposition of thought with code assistance and self-correction for mathematical reasoning, dubbed **DotaMath**. DotaMath models tackle complex mathematical tasks by decomposing them into simpler logical subtasks, leveraging code to solve these subtasks, obtaining fine-grained feedback from the code interpreter, and engaging in self-reflection and correction. By annotating diverse interactive tool-use trajectories and employing query evolution on GSM8K and MATH datasets, we generate an instruction fine-tuning dataset called DotaMathQA with 574K query-response pairs. We train a series of base LLMs using imitation learning on DotaMathQA, resulting in DotaMath models that achieve remarkable performance compared to open-source LLMs across various in-domain and out-of-domain benchmarks. Notably, DotaMath-deepseek-7B showcases an outstanding performance of 64.8% on the competitive MATH dataset and 86.7% on GSM8K. Besides, DotaMath-deepseek-7B maintains strong competitiveness on a series of in-domain and out-of-domain benchmarks (Avg. 80.1%). Looking forward, we anticipate that the DotaMath paradigm will open new pathways for addressing intricate mathematical problems. Our code is publicly available at https://github.com/ChengpengLi1003/DotaMath. |
2407.04620 | Self-attention performs well in long context but has quadratic complexity.
Existing RNN layers have linear complexity, but their performance in long context is limited by the expressive power of their hidden state.
We propose a new class of sequence modeling layers with linear complexity and an expressive hidden state.
The key idea is to make the hidden state a machine learning model itself, and the update rule a step of self-supervised learning.
Since the hidden state is updated by training even on test sequences, our layers are called Test-Time Training (TTT) layers.
We consider two instantiations: TTT-Linear and TTT-MLP, whose hidden state is a linear model and a two-layer MLP respectively.
We evaluate our instantiations at the scale of 125M to 1.3B parameters, comparing with a strong Transformer and Mamba, a modern RNN.
Both TTT-Linear and TTT-MLP match or exceed the baselines.
Similar to the Transformer, they can keep reducing perplexity by conditioning on more tokens, while Mamba cannot after 16k context.
With preliminary systems optimization, TTT-Linear is already faster than Transformer at 8k context and matches Mamba in wall-clock time.
TTT-MLP still faces challenges in memory I/O, but shows larger potential in long context, pointing to a promising direction for future research. |
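A toy sketch of the core TTT idea: the hidden state is itself a small model (a linear map here), and processing a token performs one gradient step of a self-supervised loss on that state before producing the output. The paper's learned input/label projections, mini-batched inner updates, and stability tricks are all omitted.

```python
# Toy sketch of a Test-Time Training layer (arXiv:2407.04620): the hidden state is a
# tiny model, and the per-token "state update" is one self-supervised gradient step.
# The paper's learned projections, mini-batching, and stability tricks are omitted.
import torch

def ttt_linear(tokens, lr=0.1):
    d = tokens.shape[-1]
    W = torch.zeros(d, d)                           # the hidden state: a linear model
    outputs = []
    for x in tokens:                                # causal, one token at a time
        # inner self-supervised loss: reconstruct the token from itself through W
        grad = 2 * ((W @ x - x)[:, None] * x[None, :])   # d/dW of ||W x - x||^2
        W = W - lr * grad                           # update the state by "training" it
        outputs.append(W @ x)                       # output token uses the updated state
    return torch.stack(outputs)

seq = torch.randn(16, 8)                            # (sequence length, token dim)
print(ttt_linear(seq).shape)                        # torch.Size([16, 8])
```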
2405.07551 | Tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs, while tool-free methods chose another track: augmenting math reasoning data. However, an effective method to integrate the above two research paths and combine their advantages remains to be explored. In this work, we first create new math questions via multi-perspective data augmentation methods and then synthesize **code**-nested solutions to them. The open LLMs (i.e., Llama-2) are finetuned on the augmented dataset to get the resulting models, **MuMath-Code** (µ-Math-Code). During the inference phase, our MuMath-Code generates code and interacts with the external Python interpreter to get the execution results. Therefore, MuMath-Code leverages the advantages of both the external tool and data augmentation. To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: In Stage-1, we finetune Llama-2 on pure CoT data to get an intermediate model, which is then trained on the code-nested data in Stage-2 to get the resulting MuMath-Code. Our MuMath-Code-7B achieves 83.8% on GSM8K and 52.4% on MATH, while the MuMath-Code-70B model achieves new state-of-the-art performance among open methods—achieving 90.7% on GSM8K and 55.1% on MATH. Extensive experiments validate the combination of tool use and data augmentation, as well as our two-stage training strategy. We release the proposed dataset along with the associated code for public use. |
2402.14905 | This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment.
Contrary to prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs. Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network denoted as MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over preceding 125M/350M state-of-the-art models. Additionally, we propose an immediate block-wise weight sharing approach with no increase in model size and only marginal latency overhead. The resultant models, denoted as MobileLLM-LS, demonstrate a further accuracy enhancement of 0.7%/0.8% over MobileLLM 125M/350M.
Moreover, the MobileLLM model family shows significant improvements over previous sub-billion models on chat benchmarks, and demonstrates correctness close to LLaMA-v2 7B in API calling tasks, highlighting the capability of small models for common on-device use cases. |
2406.17711 | Data curation is an essential component of large-scale pretraining.
In this work, we demonstrate that jointly
selecting batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerates training beyond individually-prioritized data points.
As performance improves by selecting from larger super-batches, we also leverage recent advances in model approximation to reduce the associated computational overhead. As a result, our approach—multimodal contrastive learning with joint example selection (JEST)—surpasses state-of-the-art models with up to 13× fewer iterations and 10× less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing the level of data curation as a new dimension for neural scaling laws. |
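The batch-level selection criterion can be sketched as follows, under simplifying assumptions: candidate sub-batches from a super-batch are scored by their joint learnability, the contrastive loss under the current learner minus the loss under a pretrained reference model, and the best-scoring sub-batch is used for the update. The real algorithm scores chunks incrementally rather than sampling whole candidates, and the embeddings below are random stand-ins.

```python
# Hedged sketch of joint batch selection in the spirit of JEST (arXiv:2406.17711).
# Random-embedding stand-ins replace the learner and reference encoders.
import torch
import torch.nn.functional as F

def contrastive_loss(img, txt):
    # standard symmetric InfoNCE over a batch of paired embeddings
    logits = F.normalize(img, dim=-1) @ F.normalize(txt, dim=-1).T / 0.07
    labels = torch.arange(len(img))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

def select_batch(img_l, txt_l, img_r, txt_r, batch_size=32, n_candidates=64):
    best_idx, best_score = None, -float("inf")
    for _ in range(n_candidates):
        idx = torch.randperm(len(img_l))[:batch_size]
        score = (contrastive_loss(img_l[idx], txt_l[idx])
                 - contrastive_loss(img_r[idx], txt_r[idx])).item()   # joint learnability
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

super_batch = 256
img_learner, txt_learner = torch.randn(super_batch, 128), torch.randn(super_batch, 128)
img_ref, txt_ref = torch.randn(super_batch, 128), torch.randn(super_batch, 128)
print(select_batch(img_learner, txt_learner, img_ref, txt_ref).shape)  # torch.Size([32])
```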
2407.01178 | The training and inference of large language models (LLMs) are together a costly process that transports knowledge from raw data to meaningful computation.
Inspired by the memory hierarchy of the human brain, we reduce this cost by equipping LLMs with explicit memory, a memory format cheaper than model parameters and text retrieval-augmented generation (RAG) .
Conceptually, with most of its knowledge externalized to explicit memories, the LLM can enjoy a smaller parameter size, training cost, and inference cost, all proportional to the amount of remaining “abstract knowledge”.
As a preliminary proof of concept, we train from scratch a 2.4B LLM, which achieves better performance than much larger LLMs as well as RAG models, and maintains higher decoding speed than RAG.
The model is named Memory3, since explicit memory is the third form of memory in LLMs after implicit memory (model parameters) and working memory (context key-values) .
We introduce a memory circuitry theory to support the externalization of knowledge, and present novel techniques including a memory sparsification mechanism that makes storage tractable and a two-stage pretraining scheme that facilitates memory formation. |
2404.11018 | Large language models (LLMs) excel at few-shot in-context learning (ICL) – learning from a few examples provided in context at inference, without any weight updates.
Newly expanded context windows allow us to investigate ICL with hundreds or thousands of examples – the many-shot regime.
Going from few-shot to many-shot, we observe significant performance gains across a wide variety of generative and discriminative tasks.
While promising, many-shot ICL can be bottlenecked by the available amount of human-generated outputs.
To mitigate this limitation, we explore two new settings: “Reinforced ICL” and “Unsupervised ICL”.
Reinforced ICL uses model-generated chain-of-thought rationales in place of human rationales.
Unsupervised ICL removes rationales from the prompt altogether, and prompts the model only with domain-specific inputs.
We find that both Reinforced and Unsupervised ICL can be quite effective in the many-shot regime, particularly on complex reasoning tasks. Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can learn high-dimensional functions with numerical inputs. Our analysis also reveals the limitations of next-token prediction loss as an indicator of downstream performance. |
2407.01392 | This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling-out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing’s variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing |
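A conceptual sketch of the training step: each token in a sequence receives its own independently sampled noise level, and a causal model learns to predict the noise for every token given the partially noised past. The noise schedule, parameterization, and the tiny GRU denoiser are placeholder assumptions, not the paper's architecture.

```python
# Conceptual sketch of the Diffusion Forcing training step (arXiv:2407.01392).
# Schedule, parameterization, and the toy denoiser are placeholder assumptions.
import torch
import torch.nn as nn

T_steps = 1000
betas = torch.linspace(1e-4, 0.02, T_steps)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def diffusion_forcing_loss(model, x0):
    # x0: (batch, seq, dim) clean continuous tokens
    b, s, d = x0.shape
    t = torch.randint(0, T_steps, (b, s))                     # independent level per token
    a = alpha_bars[t].unsqueeze(-1)                           # (b, s, 1)
    noise = torch.randn_like(x0)
    x_noisy = a.sqrt() * x0 + (1 - a).sqrt() * noise          # per-token forward process
    pred_noise = model(x_noisy, t)                            # causal model sees noise levels
    return ((pred_noise - noise) ** 2).mean()

class TinyCausalDenoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.t_embed = nn.Embedding(T_steps, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)          # causal over the sequence
        self.out = nn.Linear(dim, dim)
    def forward(self, x, t):
        h, _ = self.rnn(x + self.t_embed(t))
        return self.out(h)

x0 = torch.randn(4, 16, 32)
print(diffusion_forcing_loss(TinyCausalDenoiser(32), x0).item())
```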
2407.03320 | We present InternLM-XComposer-2.5 (IXC-2.5), a versatile large-vision language model that supports long-contextual input and output. IXC-2.5 excels in various text-image comprehension and composition applications, achieving GPT-4V level capabilities with merely a 7B LLM backend. Trained with 24K interleaved image-text contexts, it can seamlessly extend to 96K long contexts via RoPE extrapolation. This long-context capability allows IXC-2.5 to excel in tasks requiring extensive input and output contexts. Compared to its previous 2.0 version, InternLM-XComposer-2.5 features three major upgrades in vision-language comprehension: (1) Ultra-High Resolution Understanding, (2) Fine-Grained Video Understanding, and (3) Multi-Turn Multi-Image Dialogue. In addition to comprehension, IXC-2.5 extends to two compelling applications using extra LoRA parameters for text-image composition: (1) Crafting Webpages and (2) Composing High-Quality Text-Image Articles. IXC-2.5 has been evaluated on 28 benchmarks, outperforming existing open-source state-of-the-art models on 16 benchmarks. It also surpasses or competes closely with GPT-4V and Gemini Pro on 16 key tasks. InternLM-XComposer-2.5 is publicly available at https://github.com/InternLM/InternLM-XComposer. |
2206.15448 | Deep learning has excelled on complex pattern recognition tasks such as image classification and object recognition.
However, it struggles with tasks requiring nontrivial reasoning, such as algorithmic computation.
Humans are able to solve such tasks through iterative reasoning – spending more time thinking about harder tasks.
Most existing neural networks, however, exhibit a fixed computational budget controlled by the neural network architecture, preventing additional computational processing on harder tasks.
In this work, we present a new framework for iterative reasoning with neural networks.
We train a neural network to parameterize an energy landscape over all outputs, and implement each step of the iterative reasoning as an energy minimization step to find a minimal energy solution.
By formulating reasoning as an energy minimization problem, for harder problems that lead to more complex energy landscapes, we may then adjust our underlying computational budget by running a more complex optimization procedure.
We empirically illustrate that our iterative reasoning approach achieves more accurate and generalizable algorithmic reasoning in both graph and continuous domains.
Finally, we illustrate that our approach can recursively solve algorithmic problems requiring nested reasoning. Code and additional information are available at https://energy-based-model.github.io/iterative-reasoning-as-energy-minimization/. |
2312.03729 | Neural language models (LMs) can be used to evaluate the truth of factual statements in two ways: they can be either queried for statement probabilities, or probed for internal representations of truthfulness.
Past work has found that these two procedures sometimes disagree, and that probes tend to be more accurate than LM outputs. This has led some researchers to conclude that LMs “lie” or otherwise encode non-cooperative communicative intents.
Is this an accurate description of today’s LMs, or can query–probe disagreement arise in other ways? We identify three different classes of disagreement, which we term confabulation, deception, and heterogeneity.
In many cases, the superiority of probes is simply attributable to better calibration on uncertain answers rather than a greater fraction of correct, high-confidence answers.
In some cases, queries and probes perform better on different subsets of inputs, and accuracy can further be improved by ensembling the two. Code at github.com/lingo-mit/lm-truthfulness. |
2306.03341 | We introduce Inference-Time Intervention (ITI), a technique designed to enhance the “truthfulness” of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI substantially improves its truthfulness. We identify a trade-off between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface. Code: https://github.com/likenneth/honest_llama. |
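The intervention itself is simple to sketch: at inference time, the outputs of a few selected attention heads are shifted along fixed directions associated with truthful behavior. Head selection and the probe-derived directions (and their scaling) are replaced by random stand-ins below; hooking a real model would follow the same pattern.

```python
# Simplified sketch of Inference-Time Intervention (arXiv:2306.03341).
# Selected heads and directions are random stand-ins, not probe-derived values.
import torch

n_heads, head_dim = 16, 64
selected_heads = [2, 7, 11]                                     # heads chosen by probing accuracy
directions = {h: torch.nn.functional.normalize(torch.randn(head_dim), dim=0)
              for h in selected_heads}
alpha = 5.0                                                     # intervention strength

def intervene(head_outputs):
    # head_outputs: (batch, seq, n_heads, head_dim) attention head outputs at one layer
    out = head_outputs.clone()
    for h, d in directions.items():
        out[:, :, h, :] += alpha * d                            # shift along the truthful direction
    return out

acts = torch.randn(1, 10, n_heads, head_dim)
shifted = intervene(acts)
print((shifted - acts).abs().sum(dim=-1).nonzero().shape)       # only the selected heads change
```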
2212.03827 | Existing techniques for training language models can be misaligned with the truth:
if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can’t detect.
We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way.
Specifically, we introduce a method for accurately answering yes-no questions given only unlabeled model activations.
It works by finding a direction in activation space that satisfies logical consistency properties, such as that a statement and its negation have opposite truth values.
We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models: across 6 models and 10 question-answering datasets, it outperforms zero-shot accuracy by 4% on average.
We also find that it cuts prompt sensitivity in half and continues to maintain high accuracy even when models are prompted to generate incorrect answers.
Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don’t have access to explicit ground truth labels. |
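The unsupervised objective described above can be written down in a few lines: a probe on unlabeled activations is trained so that a statement and its negation receive complementary probabilities (consistency) while avoiding the degenerate answer of 0.5 everywhere (confidence). Activations here are random stand-ins; on real contrast pairs the learned direction tracks truth.

```python
# Minimal sketch of the consistency objective from arXiv:2212.03827 (Contrast-Consistent Search).
# Activations are random stand-ins for the real contrast-pair representations.
import torch
import torch.nn as nn

d = 512
probe = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

acts_pos = torch.randn(256, d)      # activations for "Q? Yes"-style statements
acts_neg = torch.randn(256, d)      # activations for the corresponding negations

for _ in range(100):
    p_pos, p_neg = probe(acts_pos).squeeze(-1), probe(acts_neg).squeeze(-1)
    consistency = (p_pos - (1.0 - p_neg)) ** 2          # the two should sum to one
    confidence = torch.minimum(p_pos, p_neg) ** 2       # discourage the trivial 0.5 answer
    loss = (consistency + confidence).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())
```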
2406.20094 | We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub – a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (13% of the world’s total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub’s use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development. DISCLAIMER: Persona Hub can facilitate synthetic data creation at a billion-scale to simulate diverse inputs (i.e., use cases) from a wide variety of real-world users. If this data is used as input to query a target LLM to obtain its outputs at scale, there is a high risk that the LLM’s knowledge, intelligence and capabilities will be dumped and easily replicated, thereby challenging the leading position of the most powerful LLMs (e.g., our approach allows a 7B LLM to achieve 65% on MATH, matching the performance of gpt-4-turbo-preview). This tech report is for research purposes only. It is crucial to avoid misuse and ensure ethical and responsible application. We discuss its broad impact and potential concerns in detail in Section 5. |